Out-Law Analysis
04 Sep 2024, 10:32 am
Government departments and public bodies in the UK are increasingly incorporating artificial intelligence (AI) technology to automate their processes and support decision-making. In doing so, they need to ensure they act compatibly with principles of public law; if they fail to do so, judicial review proceedings could be brought in the courts to scrutinise how they are using AI.
There has been much debate in recent years as to whether the UK needs new legislation to regulate AI, to ensure that it is developed and used safely and ethically. In July, the new UK government confirmed it is working on new AI legislation, but a new AI Bill was not included within its legislative programme for the year ahead.
However, existing legal obligations on the UK public sector already amount to a regulatory framework governing how public bodies may use AI. Long-standing principles of UK public law, set out in statute and common law, constrain in general terms how public bodies must perform their functions lawfully, whether or not AI technology is involved. The UK courts may step in to enforce those constraints where claims are brought under their judicial review jurisdiction.
One challenge in the use of AI technology is that it is often developed by training it on data sets, so that it can apply what it has learned to reach the ‘right’ decision when presented with new data. If there is bias in the training data, that bias may carry through into the decision-making.
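For readers who want a concrete, if deliberately simplified, picture of how that can happen, the short Python sketch below uses simulated data rather than any real system: a decision threshold is tuned on data dominated by one group (‘group A’) and then applied to a group the data under-represents (‘group B’), which ends up with a far higher rate of missed matches. The group names, score distributions and threshold rule are all assumptions made purely for the illustration.

```python
# A minimal, hypothetical sketch (simulated data, not any real deployed system)
# of how bias in training data can carry through into decisions.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, positive_mean):
    """Simulate model scores for cases that should be flagged (label 1) and cases that should not (label 0)."""
    scores = np.concatenate([rng.normal(positive_mean, 1.0, n), rng.normal(0.0, 1.0, n)])
    labels = np.concatenate([np.ones(n), np.zeros(n)])
    return scores, labels

# The decision threshold is tuned only on group A, which dominates the training data.
train_scores, train_labels = simulate(10_000, positive_mean=4.0)
threshold = np.percentile(train_scores[train_labels == 0], 99)  # targets a 1% false-positive rate on group A

# Group B's scores are distributed differently, so the same threshold misses far more of its true cases.
for name, positive_mean in [("group A", 4.0), ("group B", 2.5)]:
    scores, labels = simulate(5_000, positive_mean)
    miss_rate = np.mean(scores[labels == 1] < threshold)
    print(f"{name}: true cases missed at the learned threshold = {miss_rate:.1%}")
```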
This was a central issue in the Bridges case in 2020, after South Wales Police used AI facial recognition technology at public events to identify crime suspects. Research suggested that this type of technology was at risk of bias, producing results that were less accurate in respect of ethnic minorities. The police were unable to satisfy the Court of Appeal that they had complied with the public sector equality duty under the Equality Act 2010, because they had not taken all reasonable steps to ascertain that the technology they used had been adequately trained on unbiased data. Police forces have since sought to address the procedural shortcomings identified in the Bridges case when deploying these technologies.
In addition to promoting equality, public bodies need to ensure that their decisions and actions do not in fact discriminate against the people affected on the basis of their personal characteristics. If there is discrimination, claims alleging a breach of Article 14 of the European Convention on Human Rights (ECHR) could be brought under the Human Rights Act 1998. In such cases, the courts will examine whether any discrimination is justified and proportionate, something that is likely to require some investigation of the AI technology used and of any training data on which it is based.
This is one reason why the UK government reversed its controversial decision to use an algorithm to downgrade a large number of A-level results in 2020. Because exams could not be sat during the Covid-19 pandemic, teacher-predicted grades were used instead. The government initially applied an algorithm that adjusted predicted grades for a number of factors, including the past performance of a student’s school. The adjustments had a disproportionate and discriminatory impact on students from disadvantaged backgrounds attending state schools, and, after widespread complaints about unfair outcomes and the threat of judicial review, the government swiftly abandoned the adjustments.
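The underlying mechanism can be illustrated with a deliberately simplified, hypothetical sketch. It is not the standardisation model actually used in 2020; the grade scale, weighting and blending rule below are assumptions made purely for the example, but they show how pulling an individual’s predicted grade towards their school’s historic average penalises strong students at historically lower-performing schools.

```python
# A purely hypothetical sketch, not the standardisation model actually used in 2020:
# blending a teacher-predicted grade with the school's historic average grade downgrades
# a strong student more the weaker their school's past cohorts were.
import math

GRADES = ["U", "E", "D", "C", "B", "A", "A*"]

def adjust(predicted: str, school_historic_mean: str, weight: float = 0.5) -> str:
    """Pull the predicted grade towards the school's historic average (hypothetical rule)."""
    p, h = GRADES.index(predicted), GRADES.index(school_historic_mean)
    return GRADES[math.floor((1 - weight) * p + weight * h)]

# The same predicted A* is downgraded only slightly at a historically high-performing school...
print(adjust("A*", school_historic_mean="A"))  # -> A
# ...but downgraded further at a school whose past cohorts averaged a C.
print(adjust("A*", school_historic_mean="C"))  # -> B
```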
Public bodies often deal with large volumes of personal data, and AI can offer considerable efficiency savings in processing that data. However, they will need to comply with the General Data Protection Regulation (GDPR) when doing so.
The GDPR prohibits decisions based solely on automated processing, subject to very limited exceptions that require a clear legal basis and the provision of “meaningful information” about the workings of the AI to data subjects. This is supplemented by the Data Protection Act 2018, which requires a controller to notify the data subject of a significant decision based solely on automated processing and gives the person the right to request a new decision involving a human in the decision-making process. In practice, however, those provisions bite only where decisions are made without any human intervention. For this reason, many AI systems are configured to produce recommendations rather than decisions, with the final decision made by a human.
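As a rough illustration of that ‘recommendation, not decision’ pattern, the hypothetical Python sketch below records the model output only as a recommendation and requires a named human officer to confirm or substitute the outcome before any decision exists. The data structures, field names and officer are assumptions made for the example, not any real system.

```python
# A minimal, hypothetical sketch of keeping a human in the loop: the model only
# recommends, and a human officer makes (and is recorded as making) the decision.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    case_id: str
    model_output: str   # e.g. "refuse" or "grant"
    rationale: str      # summary shown to the reviewing officer

@dataclass
class Decision:
    case_id: str
    outcome: str
    decided_by: str     # the human officer, never the model
    followed_model: bool

def record_decision(rec: Recommendation, officer: str, outcome: Optional[str] = None) -> Decision:
    """The officer may accept the recommendation or substitute their own outcome."""
    final = outcome if outcome is not None else rec.model_output
    return Decision(rec.case_id, final, decided_by=officer, followed_model=(final == rec.model_output))

rec = Recommendation("case-123", model_output="refuse", rationale="income below threshold")
decision = record_decision(rec, officer="J. Smith", outcome="grant")  # the human overrides the model
print(decision)
```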
The Bridges case also illustrated the broader pitfalls that a public body must avoid when using AI to process large volumes of personal data. In that case, the police had prepared a data protection impact assessment (DPIA) which largely met its legal obligations. However, the Court of Appeal found that the DPIA did not fully comply with the Data Protection Act, because the police had not fully assessed how the privacy rights of the public under Article 8 of the ECHR were restricted by the use of the facial recognition technology, and whether the restriction was justified and proportionate.
Similarly, the Information Commissioner’s Office (ICO) in 2024 sanctioned a school which had installed facial recognition technology in its canteen for the identification of pupils, without proper data protection safeguards.
The common law principle of rationality can take various forms that need to be considered when a public body uses AI in its decision-making.
One aspect of the principle is that a body exercising a discretion must not fetter that discretion. For example, if a decision-maker with a discretionary power to choose between a range of outcomes adopts a simplified ‘yes/no’ process based on recommendations made by an algorithm, it would be fettering its discretion to decide on alternatives to a simple yes or no.
Similarly, rationality requires that irrelevant factors are not taken into account in the decision-making process. The 2020 A-level results example illustrates this principle: when determining the grade merited by an individual student’s performance, the past performance of other students cannot be a relevant consideration.
Common law principles of fair consultation also regulate how public bodies may use AI to support their analysis of large-scale public consultations. The principles require that, when decisions are taken following a consultation, the responses to the consultation must be conscientiously taken into account. While this does not oblige decision-makers to read every one of thousands of public responses, if they instead rely on an AI-generated summary of responses, care needs to be taken that the AI is capable of producing a fair summary that gives sufficient prominence to the most important points made.
Transparency is an important safeguard in the use of AI, and in public law and judicial review it is a principle that the courts enforce rigorously.
A public body may sometimes be under a legal duty to give reasons for its decisions, so that an individual affected by a decision can understand the basis on which it has been made. Moreover, once judicial review proceedings start, or even in pre-litigation correspondence, the public body must comply with its so-called ‘duty of candour’. This is a duty to provide the claimant and court with all information and materials relevant to the issues in the case, to ensure they have a true and comprehensive picture of the decision-making process in issue.
So, if a claimant has reasonable grounds for believing that AI software has led to a decision that discriminated against them, or that took irrelevant factors into account, the public body will be expected to disclose sufficient details about the AI that was used for the court to ascertain whether or not that was the case.
This is likely to pose a real challenge in some cases for public bodies, particularly if they have taken a ‘black-box’ approach – purchasing and deploying an AI software solution from a commercial software provider, without a full understanding of how the software was developed and how it operates. Commercial sensitivity may also be an obstacle to disclosure, given the significant commercial value in maintaining the confidentiality of AI software development. In other contexts, the government has stated that it is not prepared to disclose full details of AI software that it uses to detect fraud, because doing so would enable individuals to circumvent its fraud detection measures more easily.
Reconciling the tension between AI confidentiality and the courts’ high expectations of public bodies’ duty of candour is likely to be a central battleground in judicial review proceedings over the next few years.
There are practical steps that public bodies can take to follow best practice in navigating these legal issues. Important ones to consider include:
Co-written by Malcolm Dowden of Pinsent Masons. Pinsent Masons is hosting a webinar on the topic of judicial review and AI, on Tuesday 17 September. The event is free to attend – registration is open.