
AI adoption linked to risk understanding and regulation


Gaps in the UK legislative and regulatory frameworks make it difficult to hold AI developers to account for risks arising from the way their systems operate – and may be exposing business users to risks they don’t fully understand, holding back the technology’s adoption across sectors, according to the UK government.

The observations reflect what the government said are the “early findings” of a review it is undertaking into existing liability frameworks and accountability through the value chain in the context of AI. Those observations were shared by the government in its response to a report published earlier this year on large language models (LLMs) and generative AI by the House of Lords Communications and Digital Committee.

“Businesses using AI may be ‘on the hook’ for risks that they can’t fully understand or mitigate since those risks emerge further up the supply chain during the training and development of the models,” the government said. “We think that this could be impacting adoption across the economy as well as leaving risks unmitigated.”

“Initial findings suggest that the scope of existing laws and regulatory powers means that it can be hard to hold AI developers to account for some of the risks their models can create or contribute to,” it said.

The government has committed to developing its position on AI regulation and issues of liability. A focus of its work has been risks or harms from highly capable general-purpose models. In its new response, the government confirmed that it has engaged legal experts to review liability for such risks and harms. In its February response to its own AI white paper, it committed to providing an update on its work on new responsibilities for developers of highly capable general-purpose AI systems by the end of the year – it has said it is “considering introducing targeted binding requirements on developers of highly capable general-purpose AI systems which may involve creating or allocating new regulatory powers”.

The Communications and Digital Committee report was published just days before the government’s AI white paper response. At the time, it recommended that the government ask the Law Commission “to review legal liability across the LLM value chain, including open access models”. In relation to AI safety specifically, the committee said there is a “current absence of benchmarks with legal standing and lack of clarity on liability” and that this “suggests there are limited options to issue market recall directives to the developer, or platform take‑down notices at websites hosting dangerous open access models”.

Meghan Higgins of Pinsent Masons, who advises businesses in complex technology disputes and investigations, said: “In the UK, the courts are generally reluctant to intervene where commercial parties have agreed to particular terms. There are limited instances in which the courts will address a structural issue in the contracting process, such as lack of capacity by one of the parties, or a significant inequality in bargaining power leading to terms that are unreasonable. This deferential approach may be difficult to reconcile with the realities of AI.”

“The government’s comments acknowledge the difficulties businesses have in accurately understanding and allocating risk across the supply chain as they rapidly adopt AI systems given the complexity and unpredictability of the underlying technology. The performance of an AI model may also be impacted by activities of other entities in the supply chain in ways the original developers could not have predicted. As more disputes arising from contracts for AI systems enter the courts, there may be gaps in liability that require legislative action to clarify where an AI system has failed to meet an appropriate standard and which party in the supply chain should be responsible for that,” she said.

While it is engaged in an ongoing review and is considering specific reform in respect of highly capable general-purpose models, the government has not set out any plans to reform existing liability frameworks to account for AI, or to develop a bespoke new AI liability law, unlike policymakers in the EU.

According to feedback received on its AI white paper proposals, there is broad support for it to “clarify AI-related liability” – the government said just under a third of respondents backed that idea. However, the government cited a lack of “clear agreement” among respondents on “where liability should sit”. It said about a quarter of respondents believed “new legislation and regulatory powers would be necessary to effectively allocate liability across the [AI] life cycle”, and that there was also support for identifying “a legally responsible person for AI” within organisations, under a model similar to the existing requirement for some organisations to appoint a dedicated data protection officer.

The government suggested at the time that further iterations of its regulatory approach towards AI could entail “measures to effectively allocate accountability and fairly distribute legal responsibility to those in the life cycle best able to mitigate AI-related risks”.

Recently, Higgins and other experts at Pinsent Masons highlighted the growing AI litigation risk across different areas of law and the need for businesses to respond. In one recent case before a tribunal in Canada, airline Air Canada was held liable for a negligent misrepresentation made to a customer by one of its chatbots.
