Out-Law News

‘Inevitable’ UK AI copyright deal would include compensation, say MPs


AI developers are likely to have to pay copyright holders compensation in recognition of “past infringements” as part of any voluntary agreement that would allow them to use copyrighted content to train and operate their AI systems in future, according to a prominent group of UK law makers.

The Science, Innovation and Technology Committee in the House of Commons suggested that a “financial settlement” of that nature would be an “inevitable” component of such a voluntary agreement.

The quality of the output that generative AI systems produce depends in large part on the quality of the data used to train and develop those systems. The extent to which AI developers access high-quality datasets lawfully has been a contentious issue. Some content creators – like Getty Images – have brought legal action against AI developers, while others have stepped up their lobbying of policymakers in a bid to achieve legislative change.

The UK government had sought to broker an agreement between representatives from the creative and technology industries on an AI copyright code of practice, but it was forced to abandon those plans earlier this year after the talks broke down. The government is now pursuing “a workable alternative” to an industry-led code and is engaging with representatives from the AI and creative sectors to help it identify what that alternative might look like. However, there is growing pressure on it to realise an effective outcome to that process, with UK law makers in both the House of Commons and the House of Lords having called on the government to set a deadline for industry talks and bring forward legislation if no voluntary agreement is reached by then.

The latest such call was made by the Science, Innovation and Technology Committee in a report published after the UK parliament was dissolved for the forthcoming UK general election. The Committee’s report outlined potential components of a voluntary agreement on AI and copyright in the UK and included a call to action for the next UK government.

The Committee said: “The growing volume of litigation relating to alleged use of works protected by copyright to train AI models and tools, and the value of high-quality data needed to train future models, has underlined the need for a sustainable framework that acknowledges the inevitable trade-offs and establishes clear, enforceable rules of the road. The status quo allows developers to potentially benefit from the unlimited, free use of copyrighted material, whilst negotiations are stalled.”

“The current government, or its successor administration, should ensure that discussions regarding the use of copyrighted works to train and run AI models are concluded and an implementable approach agreed. It seems inevitable that this will involve the agreement of a financial settlement for past infringements by AI developers, the negotiation of a licensing framework to govern future uses, and in all likelihood the establishment of a new authority to operationalise the agreement. If this cannot be achieved through a voluntary approach, it should be enforced by the government, or its successor administration, in co-operation with its international partners,” it said.

The Science, Innovation and Technology Committee’s report also addressed a range of other topics relating to AI governance – including recommendations on the UK government’s proposed approach to regulating AI.

The government’s approach currently involves tasking existing UK sectoral regulators with regulating the use of AI using their existing powers, but with reference to a set of cross-sectoral principles. In tandem with this, the government has secured voluntary commitments from leading AI developers pertaining to AI safety. This provides for oversight and testing of next-generation AI systems, so-called ‘frontier AI’, by the UK’s AI Safety Institute before those systems are put into public use.

The Committee said, though, that “the next government should be ready to introduce new AI-specific legislation, should the current approach … prove insufficient to address current and potential future harms associated with the technology”. It said the success of the current approach “will be determined to a significant extent by the ability of our sectoral regulators to put the government’s high-level principles into practice as AI continues to develop at pace” and identified three factors – powers; coordination; and resourcing – that would have an influence on that.

The Committee said the next government should, as a priority, conclude its planned regulatory gap analysis and act where new powers are identified as needed. It said centralised guidance should also be produced to give direction and help to regulators with overlapping remits, and further suggested that the government could advise on how regulators can undertake joint investigations, make regulatory referrals, and improve information sharing. The Committee also said that the £10 million of public funding pledged to regulators to support their AI work is “clearly insufficient”, and it added that reports that the AI Safety Institute is experiencing challenges in accessing AI developers’ future models are “a major concern”.

According to the Committee, businesses should also be given guidance by UK regulators to help resolve the question of who is liable when decisions made, or information presented, by AI systems cause harm.

It said: “Nobody who uses AI to inflict harm should be exempted from the consequences, whether they are a developer, deployer, or intermediary. The next government together with sectoral regulators should publish guidance on where liability for harmful uses of AI falls under existing law. This should be a cross-government undertaking. Sectoral regulators should ensure that guidance on liability for AI-related harms is made available to developers and deployers as and when it is required. Future administrations and regulators should also, where appropriate, establish liability via statute rather than simply relying on jurisprudence.”
