Out-Law News

UK government outlines actions to facilitate AI assurance market growth


A new report from the UK government sets out the steps it will take to facilitate significant expansion of the market for artificial intelligence (AI) assurance, which is projected to grow six-fold to reach £6.5 billion by 2035.

The report (26-page PDF/9.1MB), published by the Department for Science, Innovation and Technology (DSIT), outlines the importance and growth potential of the AI assurance market, which plays a crucial role in verifying that AI systems operate as intended, with a particular focus on fairness, transparency and privacy protection. Currently, around 524 firms in the UK’s AI assurance sector employ over 12,000 people and generate more than £1bn in gross value added. These firms provide essential tools for the safe development and use of AI, a service increasingly in demand as AI adoption rises across sectors.

According to the government’s research, the emerging assurance market could exceed £6.53bn by 2035 with appropriate support. In the report, the government sets out four actions to help seize the growth opportunities in this emerging industry.

First, the government said that it will drive demand for AI assurance tools and services by developing an ‘AI assurance platform’. This platform will offer a comprehensive resource for businesses, particularly startups and small and medium-sized companies, to identify and mitigate AI-related risks. It will include an ‘AI management essentials’ self-assessment tool, providing a simple, free baseline of organisational good practice and supporting private sector organisations to engage in the development of ethical, robust and responsible AI.

In the medium term, the government plans to embed this tool in government procurement policy and frameworks to drive the adoption of assurance techniques and standards in the private sector.
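
The report does not prescribe a technical format for the self-assessment. Purely as an illustration, the sketch below shows one way a baseline yes/no checklist of this kind could be represented and scored in Python; the criteria, wording and scoring are hypothetical and are not taken from DSIT’s tool.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """A single yes/no check in a baseline AI governance self-assessment."""
    question: str
    met: bool

def baseline_score(criteria: list[Criterion]) -> float:
    """Return the fraction of criteria met, as a simple readiness indicator."""
    if not criteria:
        return 0.0
    return sum(c.met for c in criteria) / len(criteria)

# Hypothetical criteria for illustration only; the real 'AI management
# essentials' questions and scoring are defined by DSIT, not here.
checklist = [
    Criterion("Is there a named owner for each deployed AI system?", True),
    Criterion("Are training data sources documented?", False),
    Criterion("Is there a process for reporting and reviewing AI incidents?", True),
]

print(f"Baseline coverage: {baseline_score(checklist):.0%}")
```

A simple coverage figure of this kind is the sort of ‘baseline of good practice’ signal such a tool could surface before an organisation commissions deeper assurance work.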

Another action the government will take focuses on increasing the supply of independent, trusted third-party AI assurance service providers, and ensuring confidence and trust in the UK’s AI assurance market. DSIT plans to work with industry stakeholders to develop, by the end of the year, a roadmap for realising this vision. It recognises the importance of having independent, robust professional bodies that provide specific training and uphold professional standards. The government will also consider the use of kitemarks (visual symbols or marks) to communicate the trustworthiness of the technology to end users of AI systems.

The government has also pledged to provide more funding to scale up the supply of new safety and assurance techniques and drive their adoption, in response to the rapid advancement of AI capabilities. The Responsible Technology Adoption Unit and the AI Safety Institute, both part of DSIT, will work together to achieve this. For example, the government will allocate additional funding for the AI Safety Institute’s Systemic Safety Grants programme, as well as extra funding to expand work to stimulate the AI assurance ecosystem.

The report found notable differences in the way AI assurance is understood across different sectors in the UK and across jurisdictions internationally. In response, the government is developing a terminology tool for responsible AI, which will define key terms used in the UK and internationally and explain the relationships between them. The aim is to help industry and assurance service providers navigate key concepts and terms in different AI governance frameworks, so they can communicate effectively with consumers and trading partners both inside and outside the UK.

In the context of helping UK businesses unlock the commercial potential of the US AI market, the tool is also expected to aid interoperability between UK and US governance frameworks and to promote common understanding between AI governance regimes in key international AI markets.
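
No structure for the terminology tool has been published. As a toy illustration only, a cross-framework term mapping could be represented as simply as a nested dictionary; the frameworks and entries below are illustrative placeholders, not DSIT’s content.

```python
# A toy representation of a cross-framework terminology mapping.
# Frameworks and term entries are illustrative placeholders, not the
# content of DSIT's forthcoming terminology tool.
TERMINOLOGY: dict[str, dict[str, str]] = {
    "transparency": {
        "UK": "transparency (cross-sector AI principle)",
        "EU AI Act": "transparency obligations for certain AI systems",
        "US NIST AI RMF": "transparency addressed under the 'govern' function",
    },
    "risk management": {
        "UK": "context-specific, regulator-led risk assessment",
        "EU AI Act": "risk management system required for high-risk AI",
        "US NIST AI RMF": "the 'map', 'measure' and 'manage' functions",
    },
}

def compare(term: str) -> None:
    """Print how each framework expresses a given concept."""
    for framework, usage in TERMINOLOGY.get(term, {}).items():
        print(f"{framework}: {usage}")

compare("transparency")
```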

Technology law expert Malcolm Dowden of Pinsent Masons said: “This is particularly important in view of the significant challenges faced both by AI developers and by businesses looking to deploy AI across jurisdictions and legislative frameworks.”

“Organisations deploying AI to make decisions about individuals might be required by GDPR/UK GDPR to provide ‘meaningful information about the logic involved’ in the decision-making process, while organisations deploying AI for purposes classed as ‘high risk’ under the EU AI Act must be confident that the provider’s instructions for use include ‘information to enable deployers to interpret the output of the high-risk AI system and use it appropriately’.”

He added: “Organisations are rightly concerned to ensure that AI development and deployment does not bring unmanaged risks such as bias and discrimination. However, from an AI developer’s perspective, ‘explainable AI’ is an elusive concept. There is an inevitable trade-off between the complexity required for statistically reliable outputs and the comprehensibility required by law. In practice, ‘explainable AI’ often depends on the development of secondary models allowing analysis of the ways in which algorithms and models interact. It is rarely possible to show precisely what information has been taken into account or disregarded, or how factors have been weighted. Even if possible, revealing those details might breach commercial confidentiality or trade secrets. Consequently, there is a pressing need for the development of common, cross-jurisdictional understanding of how key concepts such as assurance, explainability and interpretability will be applied.”
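
The ‘secondary models’ Dowden describes correspond to what machine learning practitioners call surrogate models: a simple, interpretable model is trained to approximate a complex one’s behaviour, trading some fidelity for comprehensibility. The sketch below, which assumes scikit-learn and uses synthetic data, is a minimal illustration of that trade-off rather than a method endorsed by the report.

```python
# Minimal illustration of a "secondary model" (global surrogate) for
# post-hoc explainability: a shallow decision tree is trained to mimic
# a complex model's predictions. Models and data are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

complex_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the complex model's outputs, not the true labels,
# so the tree approximates the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

# Fidelity: how often the simple surrogate agrees with the complex model.
fidelity = (surrogate.predict(X) == complex_model.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))  # a human-readable set of decision rules
```

The fidelity figure makes the trade-off concrete: the shallower the tree, the easier its rules are to read, and the less faithfully it reproduces the complex model’s decisions.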

The growth of AI assurance as an industry is part of the UK government’s broader strategy to leverage AI for economic reform and public service enhancement, while driving the responsible and trustworthy use of AI and ensuring public trust in these technologies. Science, innovation and technology secretary Peter Kyle said that the steps set out in the report would help give businesses “the support and clarity they need to use AI safely and responsibly while also making the UK a true hub of AI assurance expertise”.
