
Out-Law News

UK guidance issued on explaining decisions made with artificial intelligence


The UK’s Information Commissioner’s Office (ICO) and the Alan Turing Institute have published guidance on explaining decisions made with artificial intelligence (AI).

The guidance looks at how organisations can provide users with a better understanding of how AI systems work and how decisions are made. It is intended to give organisations practical advice to help explain the processes and services that go into AI decision-making so that individuals will be better informed about the risks and rewards of AI.

The guidance follows the public consultation launched by the ICO and the Alan Turing Institute last year under their ‘Project ExplAIn’ collaboration, and is part of a wider industry effort to improve accountability and transparency around AI.

Technology law expert Priya Jhakra of Pinsent Masons, the law firm behind Out-Law, said: “The ICO's guidance will be a helpful tool for organisations navigating the challenges of explaining AI decision making. The practical nature of the guidance not only helps organisations understand the issues and risks associated with unexplainable decisions, but will also get organisations thinking about what they have to do at each level of their business to achieve explainability and demonstrate best practice.”

The guidance is split into three parts, explaining the basics of AI before going on to give examples of explaining AI in practice, and looking at what explainable AI means for an organisation.

It includes detail on the roles, policies, procedures and documentation, required by the EU’s General Data Protection Regulation, that firms can put in place to ensure they are set up to provide meaningful explanations to affected individuals.

The guidance offers practical examples that put the recommendations into context, as well as checklists to help organisations keep track of the processes and steps they take when explaining decisions made with AI. The ICO emphasises that the guidance is not a statutory code of practice under the Data Protection Act 2018.

The first section is aimed primarily at an organisation’s data protection officer (DPO) and compliance teams, but is relevant to anyone involved in the development of AI systems. The second is aimed at technical teams, and the last section at senior management. However, the guidance suggests that DPOs and compliance teams may also find the last two sections helpful.

The guidance notes that using explainable AI can give an organisation better assurance of legal compliance, mitigating the risks associated with non-compliance. It also suggests that using explainable AI can help build trust with individual customers.

The ICO acknowledged that organisations are concerned that explainability may disclose commercially sensitive material about how their AI systems and models work. However, it said the guidance did not require the disclosure of in-depth information such as an AI tool’s source code or algorithms.

Organisations which limit the detail of any disclosures should justify and document the reasons for this, according to the guidance.

The ICO recognises that use of third-party personal data could be a concern for organisations, but suggests this may not be an issue where they assess the risk to third-party personal data as part of a data protection impact assessment, and make “justified and documented choices” about the level of detail they should provide.

The guidance also recognises the risks associated with not explaining AI decisions, including regulatory action, reputational damage and disengaged customers.

The guidance recommends that organisations should divide explanations of AI into two categories: process-based explanations, giving information on the governance of an AI system across its design and deployment; and outcome-based explanations which outline what happened in the case of a particular decision.

It identifies six ways of explaining AI decisions, including giving explanations in an accessible and non-technical way and noting who customers should contact for a human review of a decision.

The guidance also recommends that explanations cover issues such as fairness, safety and performance; the data used in a particular decision; and the steps taken during the design and implementation of an AI system to consider and monitor the impacts that its use and decisions may have on individuals and on wider society.

The guidance also identifies four principles for organisations to follow, and how they relate to each decision type. Organisations should be transparent, be accountable, consider the context in which they operate, and reflect on the impact the AI system may have on affected individuals and on wider society.
