AI providers may look to exclude their liability through appropriately drafted contractual disclaimers. Guidance from several organisations makes plain that the limitations of AI systems should be pointed out to implementers, particularly for predictive models whose outputs are statistically based. Reliance on those outputs needs to be judged accordingly in the context in which they are applied.
In some cases, AI providers may seek to make no representations as to outputs in their contract drafting, placing all responsibility for subsequent decision-making on the implementer. However, providers adopting this approach should balance their desire to minimise their own risk against the need to assure the customer that the AI-enabled product adds value to the customer's business. Businesses must also be wary of how an AI system will in fact be applied in the market. Although there is little precedent, English courts may strike down overly broad disclaimers or exclusions of warranty under the Unfair Contract Terms Act 1977, especially where an AI system makes decisions without any human cross-check or intervention, as in the case of online credit reviews.
Attributing liability for AI may be simpler in some sectors than in others. In life sciences, for example, it is unlikely that AI systems will be allowed to function autonomously, without any human oversight, in the near future. Businesses in life sciences and other 'high-risk' sectors are therefore likely to encounter more traditional models of duties of care and attribution of liability.
Currently, the health care professional, or the relevant hospital trust, is held liable for mistakes in the medical care given to patients, whether or not AI systems assisted in that care. However, AI suppliers feeding into the health care system could be held to appropriately drafted performance obligations in their contractual arrangements: warranties around the exercise of reasonable skill and care in developing, testing and monitoring the AI system, fitness for purpose, and data quality and sufficiency. Such obligations would give hospital trusts and health care professionals assurance that an AI system can be safely deployed in a health care setting. Current practice in this sector could, in time, evolve towards a more realistic apportionment of risk between the AI supplier and the hospitals.
Setting up and strengthening AI governance frameworks
Robust and detailed governance mechanisms around the use of AI can help businesses adopting AI systems to address the risk of future liability.
High-level ethical principles expected of AI systems, such as fairness, explainability, transparency and accountability, are now beginning to take a more granular form that can be implemented by businesses, thanks to the work of several governmental, non-governmental and international organisations. For instance, the UK Information Commissioner's Office (ICO) and the Alan Turing Institute have collaborated on detailed guidance to help organisations explain processes, services and decisions delivered by AI systems. The ICO has also published guidance on AI and data protection to help organisations mitigate data protection risks that may arise from the deployment of AI systems.
Ultimately, governance will be at the heart of managing risk and enabling adoption. The more granular and detailed the legislative and best-practice principles, the clearer businesses will be on what those principles mean from an organisational and technical perspective.
Best practice guidance currently focuses on the public sector and on uses involving personal data. However, it is only a short step for guidance to cover the private sector and general industrial applications. More collaboration and cross-fertilisation of implementable governance actions across different sectors would benefit businesses.
Resolving contentious AI liability issues
In many ways, the disputes that can arise from the complex, multi-party and potentially multi-jurisdictional ecosystem in which AI systems are deployed differ little from the highly technical systems integration disputes with which many businesses in the IT sector will already be familiar. However, AI disputes are likely to be more technically complex, because AI systems are constantly learning and changing. Disputes will involve detailed forensic investigations into AI development programmes and other technical subject matter. Given these technical and jurisdictional complexities, it may be beneficial for businesses contracting with one another over the provision of AI systems to opt for arbitration to settle any disputes that may arise.