Out-Law Analysis

Growing AI litigation risk requires business response


The litigation risk businesses face is changing as artificial intelligence (AI) tools become more widely adopted. Businesses need to recognise this and consider the different kinds of risk associated with AI technology and how those risks might give rise to liability.

Risks associated with AI can arise from the data those systems use, from use of the systems themselves, and from the systems’ outputs. By understanding the risks, and the types of claims that have already come before the courts, businesses can determine whether they need to build additional safeguards into their use of AI systems.

Intellectual property disputes

Perhaps the best-established stream of litigation relating to the use of AI comprises claims alleging that the use of data by AI systems has infringed intellectual property rights.

In the UK, a trial is expected to take place in the case of Getty Images v Stability AI. Getty has accused Stability AI of copyright infringement, claiming that the AI developer used its copyright-protected images, without permission, to train its AI system that automatically generates images.

Stability AI has also been sued in the US. Three artists have filed a class action lawsuit against Stability AI and Midjourney alleging that the use of their work infringed copyright and other laws and threatens to put artists out of work.

Also in the US, the New York Times has lodged legal action against OpenAI and Microsoft, accusing the AI developers of seeking to “free-ride” on its “massive investment in its journalism”, by using the content it publishes to “build substitutive products without permission or payment”.

OpenAI, Microsoft and Microsoft’s subsidiary, GitHub, have also been named in a class action lawsuit filed before the district court in Northern California, in which it is alleged that the AI code-generation tool, Copilot, violates US copyright laws.

Several other copyright-related cases have emerged too, but AI use can also engage other intellectual property rights, such as trade marks and patents. A ruling by the High Court in London late last year provided insight into the patentability of trained ‘artificial neural networks’, while a number of courts globally – including the UK Supreme Court – have considered whether AI systems can be named as an inventor under patent laws.

Automated decision-making and data claims

AI models learn from the ingestion of large quantities of data. As well as raising IP risks, this means businesses need to consider their data protection obligations, since the principles of data protection legislation will apply whenever the data being used to train AI systems constitutes personal data.

Some courts have already considered how the rights that data subjects enjoy under the EU and UK General Data Protection Regulations (GDPR) apply in the context of AI systems.

In the Netherlands, the Amsterdam Court of Appeal considered, amongst other data protection issues, drivers’ right to access their personal data and to receive information from Uber and Ola about issues such as the use of automated decision-making, including profiling. Article 15 of the GDPR provides data subjects with a right to obtain confirmation of whether their personal data is being processed and, if so, a right to access that data and obtain information such as whether automated decision-making is being carried out. Subject to certain exceptions, Article 22 provides them with a right not to be subject to decisions based solely on automated processing, including profiling, which produce a legal or similarly significant effect on them.

In a series of three decisions, the court held, amongst other things, that Uber and Ola had to provide the drivers with access to information about automated decision-making. The court noted that some of the automated decision-making processes undertaken by both companies, such as Ola’s creation of “fraud probability scores” and Uber’s “batched matching system” which links drivers to passengers, affected drivers to a considerable extent. The court explained that, in the circumstances, the drivers’ rights of access to information about automated decision-making outweighed the companies’ interest in protecting their trade secrets. With regards to Uber, the court explained that while Uber did not have to give a complicated explanation of its algorithms, it did need to explain which factors were assessed and how they were weighted in reaching decisions, such as how drivers’ average ratings are derived.
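
The kind of explanation the court had in mind – which factors feed into a score and with what weight – can be illustrated with a minimal sketch. The factor names and weights below are invented for the purpose of the example and are not drawn from Uber’s systems or the court’s decisions.

```python
# Hypothetical sketch only: the factor names and weights are invented and
# are not Uber's. It shows how a weighted score can be explained factor by
# factor without disclosing a full algorithm.

FACTOR_WEIGHTS = {
    "passenger_rating": 0.5,  # hypothetical factor, normalised to 0-1
    "completion_rate": 0.3,   # hypothetical factor, normalised to 0-1
    "acceptance_rate": 0.2,   # hypothetical factor, normalised to 0-1
}

def explain_score(factors: dict[str, float]) -> float:
    """Compute a weighted score, printing each factor's contribution."""
    score = 0.0
    for name, weight in FACTOR_WEIGHTS.items():
        contribution = weight * factors[name]
        print(f"{name}: value={factors[name]:.2f}, weight={weight:.1f}, "
              f"contribution={contribution:.2f}")
        score += contribution
    return score

print("overall score:", round(explain_score(
    {"passenger_rating": 0.96, "completion_rate": 0.90, "acceptance_rate": 0.85}
), 2))
```

An explanation at this level – the factors assessed and their weights – conveys how a decision is reached without disclosing the algorithm in full, which reflects the balance the court drew between access rights and trade secrets.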

The court further held, in relation to Uber, that while humans were involved in reviewing the decisions made by the AI tool, that involvement was not meaningful. In relation to Ola, the court noted that Ola did not argue that there was any human intervention in the creation of “fraud probability scores” for the drivers.

The EU’s highest court, the Court of Justice of the EU (CJEU), also considered how Article 22 applies in a case involving German credit reference agency SCHUFA.

SCHUFA uses mathematical and statistical processes to establish a prognosis of a person’s likely future behaviour based on that person’s characteristics. It groups the person with others who share similar characteristics and have behaved in a similar way in order to predict behaviour. An individual who had been refused a loan based on a credit score SCHUFA had shared with the would-be lender challenged SCHUFA’s decision-making process.
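
The cohort-based approach described – predicting a person’s behaviour from the past behaviour of others who share similar characteristics – can be illustrated with a minimal sketch. The characteristics and records below are entirely invented and bear no relation to SCHUFA’s actual methodology.

```python
# A minimal, hypothetical sketch of cohort-based scoring: a person's
# predicted behaviour is the historical behaviour of others who share
# their characteristics. All data is invented for illustration.

from statistics import mean

# historical records: (characteristics, repaid_loan)
HISTORY = [
    (("employed", "homeowner"), True),
    (("employed", "homeowner"), True),
    (("employed", "renter"), False),
    (("unemployed", "renter"), False),
]

def cohort_score(characteristics: tuple[str, ...]) -> float:
    """Predicted repayment probability: the repayment rate of the cohort
    sharing the same characteristics, or a neutral 0.5 if none match."""
    cohort = [repaid for chars, repaid in HISTORY if chars == characteristics]
    return float(mean(cohort)) if cohort else 0.5

print(cohort_score(("employed", "homeowner")))  # 1.0: both cohort members repaid
```

Even a simple grouping of this kind, used to evaluate or predict aspects of a person’s life, amounts to profiling – which is why the CJEU found SCHUFA’s activity engaged Article 22.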

The CJEU considered the extent to which the activity engaged the Article 22 right not to be subject to decisions derived from automated processing. It determined that the refusal of an online credit application does engage Article 22 and that SCHUFA’s activity did constitute profiling. However, Article 22 provides for exceptions to that right – including where the activity is authorised by EU law or the national law of a member state. In those circumstances, the GDPR further requires that a series of safeguards be in place in respect of the profiling. The CJEU said it was for the German courts to determine whether the German legislation providing for SCHUFA’s profiling complies with the GDPR’s requirements in that regard.

The CJEU’s ruling in the SCHUFA case may spur companies to seek explicit consent from individuals to the use of automated processing or to carry out more extensive human reviews to check decisions made on an automated basis.

Breach of contract

AI has also been considered in a recent breach of contract claim raised in the UK.

Although it settled before trial, the case of Tyndaris SAM v MMWWVWM Limited illustrates the kinds of contractual claims that may be raised when an AI system goes wrong.

Tyndaris SAM (Tyndaris) developed an AI system which used a supercomputer to apply AI and machine learning to sources such as real-time news and social media in order to identify investment trends, without any human intervention. Tyndaris and MMWWVWM Limited (VWM) entered into a contract under which VWM invested in an account managed by Tyndaris using this AI system. Tyndaris brought a claim against VWM for unpaid management fees. VWM disputed that management fees were due and brought a counterclaim alleging that it had entered into the contract in reliance upon Tyndaris’ misrepresentations about the capabilities of the AI system, which were either untrue at the time of contracting or later became untrue.

Amongst other allegations, VWM alleged that the system had not been rigorously tested, that the testing had not been “properly designed or analysed by professionals experienced in systematic trading”, and that the AI system did not enter the market at the best times of the day.

Although no decision was reached on the merits of the claims in the case, it illustrates another avenue for challenging decisions by an AI system.

In another case, Leeway Services Ltd alleged that Amazon Payments UK Limited was responsible for a breach of contract after Leeway was suspended from trading on Amazon’s marketplace. Leeway claims its suspension was caused by AI and that, as a result, it missed out on online sales it could otherwise have made. Amazon has filed a defence against the claims.

To try to mitigate the contractual risks arising out of the development and use of AI systems, businesses should consider including specific clauses about the level of testing the AI system has been subject to, as well as clauses attributing liability for AI-related failures. However, given the pace at which these systems are being developed, it may be difficult for businesses to understand and address all of the risks that could emerge.

Human rights claims and discrimination

In the Netherlands, in November 2021, the Dutch minister of finance was fined €2.75 million by the country’s data protection authority after the Tax and Customs Administration automatically categorised the risk of certain applications for benefits using an algorithm that used the nationality of applicants as an indicator of risk. The Hague District Court had ruled on 5 February 2020 that the Dutch government could no longer use its System Risk Indication (SyRI) system to identify individuals at risk of engaging in fraud because it violated Article 8 of the European Convention on Human Rights (ECHR), which protects the right to respect for private and family life.

In a report on the decision, the UN’s Office of the High Commissioner for Human Rights noted that the SyRI tool was not unique to the Netherlands but was part of a global trend toward the introduction and expansion of digital technologies in welfare states.

In the UK, the Court of Appeal considered human rights in the context of facial recognition systems – which can be powered by AI – in the 2020 case of R (Bridges) v Chief Constable of South Wales Police and others. That case concerned the use of automated facial recognition technology in a pilot project by South Wales Police. The Court of Appeal held that the use was not “in accordance with the law” for the purposes of the right to respect for private and family life under the ECHR. Among other things, the court censured the police force over its data protection impact assessment and considered that reasonable steps had not been taken to investigate whether the technology had a racial or gender bias, as required by the public sector equality duty that applies in the UK.

On 20 October 2022, the State of Michigan announced that it had reached a $20 million settlement to resolve a class action lawsuit alleging that the state’s Unemployment Insurance Agency used an auto-adjudication system to falsely accuse recipients of fraud, resulting in the seizure of their property without due process. The claimants were recipients of unemployment compensation benefits and attributed the seizure of their property to the state’s use of the Michigan Integrated Data Automated System (MiDAS) to detect and adjudicate suspected instances of fraud.

Product liability claims

New legislation is expected in the EU and UK which will have a major bearing on claims pertaining to AI liability in future.

Both EU and UK policymakers are in the process of updating product liability laws. The EU proposals include provisions to allocate liability for the use of AI systems. The product liability rules regulate only business-to-consumer relationships, and so are not relevant to business-to-business claims, but for manufacturers they are significant in the context of potential mass claims risk.

A new EU AI liability directive is envisaged to sit alongside the new product liability regime. Under that directive, rebuttable presumptions would make it easier for individuals seeking compensation for harms caused by AI to meet the required burden of proof, while leaving defendants a mechanism to rebut those presumptions. The directive would apply to claims brought under fault-based liability regimes – regimes that provide for a statutory responsibility to compensate for damage caused either intentionally or by a negligent act or omission.

The EU AI Act will separately provide a new system of regulation for AI in the EU. The UK is also planning a new regulatory regime for AI use. In its AI white paper, the UK government recognised “the need to consider which actors should be responsible and liable for complying with the principles” but said it was “too soon to make decisions about liability as it is a complex, rapidly evolving issue”.

The paradigm example people discuss when considering how the use of AI may affect traditional legal principles around fault, causation and foreseeability of loss is the self-driving car. In the UK, a new Automated Vehicles Bill is currently making its way through parliament. It will build on the existing Automated and Electric Vehicles Act 2018, which provides that where an insured automated vehicle causes an accident resulting in damage, the insurer will be held liable. To date, this is the only clear guidance provided in UK law as to how liability for harms caused by AI will be determined.

Actions for businesses

To mitigate litigation risk when using AI, businesses should:

  • choose and test AI systems that are secure and robust, and ensure adequate training is provided to staff on what constitutes safe and acceptable use of any AI system. This should include having adequate security and governance procedures in place to ensure systems are properly supervised and responsibility is not outsourced to third-party providers;
  • inform customers and clients if AI is being used to deliver services and, where appropriate, provide disclaimers and warnings about its limitations. Document each stage of the design and running of the AI system so that the business is best placed to explain how the system works when required;
  • consider the introduction of contractual carve-outs excluding liability for defective AI, and/or caps on liability. The extent to which businesses may be able to enforce such provisions has yet to be tested by the courts, however;
  • embed robust processes to safeguard consumers against risks, including introducing policies to guarantee the review of AI outputs for biases or errors, for confidential information, and for compliance with data protection principles – one possible form such an automated review could take is sketched after this list;
  • provide routes for contestability and redress in the event that customers or individuals disagree with the use of AI or its decisions.
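
As a purely illustrative example of the kind of output review contemplated in the penultimate point, the sketch below flags a log of AI decisions for human review where approval rates diverge between groups. The log format, the notion of a “group”, and the 0.2 threshold are all assumptions made for the sketch rather than requirements drawn from any of the cases or proposals discussed above.

```python
# A minimal, hypothetical sketch of an automated bias check on logged AI
# outputs. The record format ("group", "approved") and the 0.2 gap
# threshold are invented for illustration.

from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Approval rate per group, from logged decision records."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        approvals[record["group"]] += int(record["approved"])
    return {group: approvals[group] / totals[group] for group in totals}

def needs_human_review(decisions: list[dict], max_gap: float = 0.2) -> bool:
    """Flag the log for review if approval rates across groups diverge."""
    rates = list(approval_rates(decisions).values())
    return max(rates) - min(rates) > max_gap

log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
print(needs_human_review(log))  # True: 1.0 vs 0.5 exceeds the 0.2 gap
```

A check of this kind does not replace the policies described above; it simply gives reviewers a concrete trigger for the meaningful human scrutiny that decisions such as those concerning Uber and SCHUFA contemplate.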

Co-written by Lucia Doran and Laura Gallagher of Pinsent Masons.
