Out-Law News

UK Finance acknowledges tension over AI system transparency


The publication of new regulatory guidance could help financial services firms satisfy themselves that the artificial intelligence (AI) systems they procure from third parties can be used in compliance with UK law and regulations, a prominent UK trade body has said.

UK Finance, which represents more than 300 firms providing finance, banking, markets and payments-related services in or from the UK, said AI developers can be reluctant to share information about their AI systems because of concerns about “intellectual property, security and model integrity” and suggested new regulatory guidance as one solution which could help address the problem.

“Regulatory guidance on the risk levels of different categories of use cases – potentially with indicative chief risks – could help developers and deployers of AI systems to have a shared understanding of regulatory expectations regarding risk and risk mitigation, and the corresponding information needs of the deploying firm,” UK Finance said.

UK Finance also recommended two tools that could help “bridge the gap” between what financial firms need to understand about the AI systems they procure and what developers of those systems are willing to disclose: ‘targeted use’ classifications, which would clarify the intended use cases for a given AI product, and ‘model cards’, which would set out the information deployers of a model need to make responsible use of it. It also called on the government to promote the development of optional AI assurance techniques in this regard, such as those championed by the UK’s Centre for Data Ethics and Innovation (CDEI).

UK Finance made the suggestions in a response (21-page / 306KB PDF) it submitted to the UK government’s AI white paper consultation.


Currently, a range of legislation and regulation applies to AI – such as data protection, consumer protection, product safety and equality law, and financial services and medical devices regulation – but there is no overarching framework that governs its use. The government has said this is a problem because “some AI risks arise across, or in the gaps between, existing regulatory remits”, and has acknowledged some businesses have concerns over “conflicting or uncoordinated requirements from regulators [that] create unnecessary burdens” and “unmitigated” risks left by regulatory gaps that could harm public trust in AI and slow adoption of the technology as a result.

To address this, the government issued a white paper in which it proposed to retain the existing sector-by-sector approach to regulation but introduce a cross-sector framework of overarching principles that regulators will have to “interpret and apply to AI within their remits”. The five principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles would be issued on a non-statutory basis, but the government has proposed to place regulators under a statutory duty to have due regard to them when exercising their functions.

The government intends to provide “central support” for the new system of regulation, including monitoring and evaluating the framework’s effectiveness and adapting it if necessary; assessing risks arising from AI across the economy; conducting horizon scanning and gap analysis to inform a coherent response to emerging AI technology trends; and establishing a regulatory sandbox for AI to help AI innovators get new technologies to market.

UK Finance said it supports the “sectoral, risk-based approach [to the regulation of AI], based around regulatory guidance”, but it cautioned against an unnecessary “AI overlay” of existing regulation where it “addresses risks adequately”.

UK Finance said: “It is currently unclear whether the government expects: each regulator to produce an ‘AI guidance book’ with AI-specific guidance on each of the principles, or each regulator to reflect on the AI principles and ensure that their rules and guidance adequately cover AI risks. In our view, it should be made clear that [the latter] is an acceptable approach. This would enable authorities to rely on their generic (technology neutral) rules and guidance when these are sufficient, leveraging existing laws to avoid duplication and confusion.”

Citing examples of existing guidance relevant to AI, UK Finance referred to guidance produced by the Financial Conduct Authority (FCA) and Prudential Regulation Authority on matters such as model risk management, fairness and protecting vulnerable customers, and it also mentioned the ‘consumer duty’ and associated guidance recently introduced across UK financial services as relevant to AI too.

“It should not be necessary for each regulator to layer on top an additional AI guidance book when the existing tech-neutral guidance and rules adequately address key risks,” UK Finance said. “Similarly, when cross-sectoral guidance covers an AI risk effectively, it should not be expected that individual regulators apply their own layer. This would contradict the outcomes-focused approach, and risks duplication and unnecessary complexity. Of course, where there are specific gaps or points of uncertainty relating to certain types of model, system or use case, it would be logical to fill these.”

In its consultation response, UK Finance also highlighted risks arising from the proliferation of new generative AI systems that are open to the public.

For example, UK Finance said that there is a risk that developers of generative AI systems would be “responsible for consumer outcomes” but not necessarily aware of “exactly how the tool is being used by each consumer and what requirements therefore apply”. It further warned of the risk that generative AI systems are used by companies for purposes not anticipated by the developer and where no “normal procurement and due diligence process” has been applied. It also cited the risk of generative AI systems being used by “bad actors” to “create misinformation, to generate materials for defrauding consumers, or to defeat legitimate firms’ customer authentication tools”.

UK Finance said generative AI systems that are open to the public therefore “warrant accelerated and focused attention from policy makers and regulators”.

Luke Scanlon of Pinsent Masons, who specialises in technology law and contracts in financial services, said: “It is imperative that senior managers at financial entities upskill for AI to ensure they have the necessary knowledge and understanding of how the technology works to sufficiently satisfy themselves that those systems are being implemented in a manner that complies with relevant law and regulation. There is a particular challenge in this since the development and use of AI will need to be mapped to the fragmented regulatory framework as it currently applies – such as data protection and consumer protection rules, not just financial services rulebooks – and this means senior managers will need regular training and other briefings on the evolving technology and regulatory framework.”
