Out-Law News
13 Jan 2025, 4:59 pm
The UK should move to a more centralised system for the regulation of AI if sector regulators fail to sufficiently promote innovation in the way they carry out their duties, a prominent businessman has recommended.
The recommendation was contained in a new report published on Monday, the AI opportunities action plan. Tech entrepreneur Matt Clifford was commissioned to produce the report, which explores how AI can be harnessed to support economic growth in the UK. The government has endorsed all 50 of its recommendations and committed to a series of actions – mainly over the next 12 months – to implement them.
A series of recommendations made by Clifford seek to ensure that the UK’s sector regulators “are fit for the age of AI”. The recommendations include that the government “commit to funding regulators to scale up their AI capabilities” and that government departments “include a focus on enabling safe AI innovation” in the strategic guidance they provide to the regulators they sponsor.
Clifford further recommended that new pro-innovation regulatory initiatives, such as sandboxes, be “targeted in areas with regulatory challenges but high-growth potential, such as products which integrate AI into the physical world like autonomous vehicles, drones and robotics”. On that, the government said the new Regulatory Innovation Office (RIO) “will identify priority sectors with high-growth potential and work with relevant regulators to identify pro-innovation initiatives”. A progress report will be issued this summer.
New disclosure duties were also recommended for UK regulators, to require them to “publish annually how they have enabled innovation and growth driven by AI in their sector”. Clifford said disclosure should be in line with “transparent metrics”, to ensure accountability. Examples of those metrics could include “timelines to publish guidance, make licence decisions and report on resources allocated to AI-focused work”, he said.
However, Clifford went on to highlight that further actions might be needed to ensure UK regulators sufficiently promote AI in the way they regulate.
“Even with these initiatives, individual regulators may still lack the incentives to promote innovation at the scale of the government’s ambition,” Clifford said. “If evidence demonstrates that is the case, government should consider more radical changes to our regulatory model for AI, for example by empowering a central body with a mandate and higher risk tolerance to promote innovation across the economy. Such a body could have expertise and statutory powers to issue pilot sandbox licences for AI products that override sector regulations, taking on liability for all related risks. This approach could initially be explored and piloted for specific AI applications at small scale.”
Malcolm Dowden of Pinsent Masons said implementation of Clifford’s recommendations would result in a significant shift in the role of the UK’s sectoral regulators – from one of regulating to that of positively promoting innovation – and could see a dilution of the powers currently enjoyed by cross-sector authorities like the Information Commissioner’s Office (ICO).
“There is a clear tension between a regulator exercising its protective function in a way that is necessary and proportionate, but without stifling growth, and the AI action plan’s proposal for a new obligation positively to promote innovation,” Dowden said. “If a regulator were to resolve an issue in favour of protecting rights rather than promoting innovation ‘at the scale of the government’s ambition’, then the action plan suggests they would face the possibility of being sidelined and overridden.”
Currently in the UK, a range of legislation and regulation applies to AI – such as data protection, consumer protection, product safety and equality law, and financial services and medical devices regulation – but there is no overarching framework that governs its use.
The UK’s current approach to AI regulation retains this sector-based model, with regulators expected to fulfil their regulatory functions, as they relate to AI, with due regard to five cross-sector principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
To promote AI development, the AI opportunities action plan further called on the government to “reform the UK text and data mining regime so that it is at least as competitive as the EU”. Clifford said uncertainty around intellectual property is hindering innovation and undermining the UK’s broader AI ambitions and the growth of the creative industries. “This has gone on too long and needs to be urgently resolved,” he said.
Intellectual property law expert Cerys Wyn Davies of Pinsent Masons said: “Last month, the government opened a consultation on AI and copyright where it outlined various options for reform, including the potential recalibration of the existing data mining exception in UK copyright law to facilitate the training of AI models using copyrighted material. It appears to favour imposing conditions on that activity – rights holders would be able to opt their content out from being used in that way – and underpin it with new transparency obligations, so that AI developers would need to inform rights holders over their use of copyrighted material for AI training.”
“Similar tensions to those applicable in the regulatory sphere apply, in that the government faces a tricky job in balancing the promotion of AI development in pursuit of economic growth while protecting against potential harms – in this case, potentially depriving content creators of what they might see as their rights to be remunerated for producing high quality data. If the government gets the balance wrong, there is a risk that content creators will be disincentivised from producing those datasets, which are exactly the kind AI developers need to create fair, accurate and otherwise robust AI models that can help deliver the productivity and innovation gains the economy needs,” she said.
Also included in the AI opportunities action plan were recommendations aimed at scaling up data processing capacity in the UK for the AI age. In response, the government has committed to outlining a long-term plan for the UK’s AI infrastructure needs and related funding, and has further pledged to establish so-called ‘AI growth zones’ “to facilitate the accelerated build out of AI data centres”. It also said it will invest in a new supercomputing facility.
The government further endorsed Clifford’s proposal for the creation of a new National Data Library to support AI development. Clifford said: “We should seek to responsibly unlock both public and private data sets to enable innovation by UK startups and researchers and to attract international talent and capital. As part of this, government needs to develop a more sophisticated understanding of the value of the data it holds, how this value can be responsibly realised, and how to ensure the preservation of public trust across all its work to unlock its data assets.”
Bella Phillips of Pinsent Masons, an expert in intellectual property law and data commercialisation, said: “The action plan places a lot of importance on the development of high-value data sets, access to which the action plan says will make the UK an attractive place for start-ups and researchers to establish themselves. Whilst some of the proposals focus on public data sets, there are wider mentions of collecting data in strategically important areas, and of shaping the market of data set curation, including contributions from the private sector.”
“Both points highlight the importance of organisations, including those in the private sector, investing in data governance strategy and data readiness, now, to be able to engage meaningfully with the government as its AI strategy develops,” she said.
Further recommendations are aimed at boosting AI use within government, with a ‘scan, pilot and scale’ approach endorsed in this regard. Examples of the initiatives that will be taken forward include building a cross-government technical horizon scanning and market intelligence capability to understand AI capabilities, and piloting a framework for sourcing AI. The government said it would also “scope options to improve AI procurement with mission-focused national AI tenders”.
Technology law expert Sarah Cameron of Pinsent Masons said: “This action plan generates a real and energising sense of optimism. Whereas the infrastructure objectives will inevitably take time to achieve, the scan, pilot and scale recommendations, the ‘move fast and learn things’ agenda, along with light touch, but necessarily multi-stage gated, approach to procurement could really drive some highly impactful productivity and efficiency gains so desperately needed in education and the NHS, for example.”
Clifford also called on the government to do more to ensure that companies at the forefront of so-called ‘frontier AI’ – the next generation of AI technology – are “UK national champions” amidst stiff international competition. In that regard, the government said it would act on Clifford’s recommendation that it establish and empower a new unit to “partner with the private sector to deliver the clear mandate of maximising the UK’s stake in frontier AI”.
“It is great to see recognition that the UK can learn from some policies and actions in other countries, particularly around skills, as well as the continued support of the international collaboration around safety and the importance of the AI Safety Institute, which will be put on statutory footing,” Cameron said.
Public policy expert Mark Ferguson of Pinsent Masons said the commissioning of the AI opportunities action plan was one of the first acts of the UK government when taking office in July 2024.
“The government is seeking to tread a path between the EU and US approach to AI regulation, a ‘distinctively British approach’, as the prime minister puts it,” Ferguson said. “There is no commitment to an overarching framework as with the EU’s AI Act, but there is a greater emphasis on the role of sector-based regulation, unlike the US.”
“For businesses, today’s announcement should be a call to step up their engagement with the government and sector regulators on the development of AI policy. Many of the recommendations still require the Department for Science, Innovation and Technology to take action, and attention will now turn to the spring when the government will complete its spending review, set out its wider approach to AI in the industrial strategy’s digital and technologies sector plan, and publish the compute strategy, which will outline plans to ensure the UK has the AI infrastructure and compute capacity to deliver on its AI ambitions,” he said.