Out-Law Guide | 10 Jan 2025
The UK has chosen to take a less centralised approach to the regulation of artificial intelligence (AI) than the EU. Financial services regulators are monitoring the growth in the adoption of AI within financial services sectors and working within their existing regulatory frameworks to supervise it.
For insurers, AI offers scope to provide personalised premiums more efficiently, assess risk, process claims, enhance customer-facing marketing and services, and better address financial crime. Adoption of the technology raises compliance issues, however, with some regulators already identifying concerns around bias and discrimination, the potential for financial exclusion, and the need for accountability and explainability where algorithms are used in underwriting.
In this guide, we explore regulators’ expectations around the use of AI in insurance in the UK.
In the UK, the Bank of England (the Bank) has been monitoring the use of machine learning across the financial services sector, including insurers, since 2019. Its surveys have found that deployment of machine learning is most advanced in the banking and insurance sectors and that it is most commonly used in anti-money laundering and fraud detection, as well as in customer-facing applications such as customer services and marketing. The surveys have also confirmed that some firms use machine learning in areas such as general insurance pricing and underwriting.
In its most recent survey, the Bank found that 75% of firms are already using AI, with foundation models accounting for 17% of all use cases. Among the fastest growing use cases are customer support and regulatory compliance and reporting. 55% of all AI use cases have some degree of automated decision-making, with 24% of those being semi-autonomous, but only 2% of use cases have fully autonomous decision-making. Only 16% of firms rated their use cases as high materiality.
Another 2024 survey, carried out on behalf of the European Insurance and Occupational Pensions Authority, found that AI is used by 50% of the respondents in non-life insurance and 24% in life insurance, with most current solutions having been developed in-house for simpler tasks with more explainable algorithms that retain human oversight. The survey also found that other technologies, such as the Internet of Things, blockchain and parametric insurance, are currently only used by a small number of insurers.
In our observation, insurers remain hesitant to adopt forms of artificial intelligence, including machine learning, in underwriting. Other uses have flourished, however, where AI simply provides inputs into decision-making without making any fully autonomous decisions, such as fraud detection, monitoring of calls for quality purposes and helping to identify vulnerable customers.
Discrimination risk is not new and is as relevant to AI as it is to algorithms more generally.
In its early work on general insurance pricing practices, the UK Financial Conduct Authority (FCA) looked into whether there was evidence of direct discrimination in the sector on the basis of protected characteristics under the Equality Act 2010. It found no evidence of direct discrimination, but it did find that firms were using datasets – including datasets purchased from third parties – within their pricing models which may contain factors that implicitly, or potentially even explicitly, relate to race or ethnicity. The FCA’s clear expectation was that insurers gain assurance that the third-party data they use in pricing does not discriminate against certain customers based on any of the protected characteristics.
More recently, Citizens Advice sounded the alarm on a so-called “ethnicity penalty”, whereby people of minority ethnic backgrounds pay more for the same insurance. Similarly, Fair by Design, a charity, alleges there is a “poverty premium”. Both have called on the FCA to intervene in the general insurance market. The FCA’s position is generally that it is not a discrimination regulator – that is the remit of the Equality and Human Rights Commission (EHRC) in the UK. The EHRC has flagged that it has limited resources and that its remit will therefore need to be focused; its current priorities in terms of AI do not include the insurance market.
Although the FCA is careful to respect the remit of the EHRC in light of the memorandum of understanding in place between the two regulators, the memorandum makes clear that breach of discrimination law is likely to also be a breach of the FCA’s principles for business.
In a joint discussion paper on AI, the FCA and another UK financial services regulator, the Prudential Regulation Authority (PRA), highlighted a concern that personalisation through the use of AI could lead to some financial products not being offered to certain groups, potentially resulting in unlawful discrimination. In an update on its approach to the regulation of AI published in 2024, the FCA reiterated that “firms using AI technologies in a way that embeds or amplifies bias, leading to worse outcomes for some groups of consumers, might not be acting in good faith for their consumers, unless differences in outcome can be justified objectively”.
The FCA has also shown a tendency to view discrimination issues through the lens of vulnerability – so, even where a breach of equality law is not established, the FCA can still intervene if it identifies that vulnerable customers have suffered harm as a result of potentially discriminatory practices.
In order to support firms in avoiding or mitigating bias, the FCA has launched a series of research notes, the first of which is a literature review on bias in supervised machine learning. The research note sets out different ways to measure bias and various methodologies to help mitigate it.
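By way of illustration only, the sketch below computes one commonly cited bias measure of the kind surveyed in such literature, the demographic parity difference: the gap in favourable-outcome rates between demographic groups. The column names, data and choice of metric are our own illustrative assumptions and are not taken from the FCA research note.

```python
# Illustrative sketch: demographic parity difference on hypothetical model outputs.
# Column names ("group", "approved") and the data are invented for this example.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str = "group",
                                  outcome_col: str = "approved") -> float:
    """Gap between the highest and lowest favourable-outcome rate across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical quote decisions produced by an underwriting model
quotes = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
print(demographic_parity_difference(quotes))  # 0.33 - a gap worth investigating
```

A value close to zero suggests similar outcome rates across groups; a larger gap does not prove unlawful discrimination, but it indicates where further investigation, and potentially one of the mitigation methodologies described in the research note, may be warranted.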
Following Brexit, the law on discrimination has evolved considerably, in particular through the incorporation of associative indirect discrimination, an EU law concept, into the Equality Act 2010. Associative indirect discrimination protects people who do not themselves have the relevant protected characteristic but who suffer the same disadvantage as those who do.
When carrying out third-party data assurance exercises, insurers should be mindful of not only protected characteristics, but also correlations. For example, while receipt of benefits, asylum seeker status, or unemployment are not protected characteristics per se, consumer groups argue that they correlate with protected characteristics in a way that meets the legal tests of the Equality Act 2010.
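As a purely hypothetical illustration of the kind of check such an assurance exercise might involve, the sketch below compares the incidence of a non-protected rating variable across groups defined by a protected characteristic. The column names and data are invented, and in practice any collection of protected characteristics for testing purposes raises its own data protection considerations.

```python
# Hypothetical proxy check: does a third-party rating variable track a protected characteristic?
import pandas as pd

data = pd.DataFrame({
    "receives_benefits": [1, 1, 0, 0, 1, 0, 1, 0],  # non-protected rating variable
    "protected_group":   [1, 1, 0, 0, 1, 0, 0, 0],  # protected characteristic (testing data only)
})

# Rate of the rating variable within each group, and the overall association
rates = data.groupby("protected_group")["receives_benefits"].mean()
phi = data["receives_benefits"].corr(data["protected_group"])
print(rates)  # a large gap between groups flags a potential proxy
print(phi)    # correlation between the two binary variables
```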
In their joint discussion paper on AI, the FCA and PRA highlighted a concern that “AI-based insurance screening or credit provision could enable greater segmentation between ‘low-risk’ and ‘high-risk’ consumer groups, and exclusion”. In a September 2024 speech on financial inclusion, the FCA’s chief executive, Nikhil Rathi, repeated these concerns by stating “AI-enabled hyper-personalisation of insurance could benefit many by providing more tailored premiums, but at the same time runs the risk of rendering some customers ‘uninsurable’, or even potential discrimination”.
In its update on its approach to the regulation of AI, the FCA made it clear that it will use the FCA principles for business, in particular the consumer duty, to intervene where the use of AI leads to financial exclusion and poor outcomes.
In a consultation, the International Association of Insurance Supervisors (IAIS) warned that supervisors need to consider the broad societal impacts of granular risk pricing on the principle of risk pooling, and suggested that banning differential pricing, facilitating easier policy cancellations and restricting the price optimisation techniques used by insurers are remedies regulators may wish to consider. Another recommendation is to support collaborative efforts amongst insurers to develop AI systems that take account of social equity and accessibility.
In March 2024, the FCA published a report on using synthetic data in financial services, which explores how synthetic data can be used to mitigate bias in data and for system testing and model validation. Synthetic data works by generating statistically realistic but artificial records that can be used to develop advanced modelling techniques and to train AI models without compromising individual privacy or risking a breach of data protection laws. The report will be useful to practitioners looking to mitigate bias in existing data lakes or to test or train an AI model.
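The sketch below is a deliberately simplified illustration of the idea, not the FCA's methodology: it fits the mean and covariance of some numeric policyholder features and samples artificial records with a similar statistical structure. The feature names and distributions are invented, and production synthetic-data tools typically use far more sophisticated generative models and handle discrete or constrained variables properly.

```python
# Simplified synthetic-data sketch: sample artificial records that share the
# broad statistical structure of the (here, simulated) "real" data.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical numeric features: age, annual mileage, prior claims
real = np.column_stack([
    rng.normal(45, 12, size=1_000),
    rng.normal(9_000, 3_000, size=1_000),
    rng.poisson(0.3, size=1_000),
])

# Fit the mean and covariance of the real data, then sample artificial rows
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=1_000)

# Synthetic rows can then be used for testing or training without copying any
# individual's actual record
print(real.mean(axis=0))
print(synthetic.mean(axis=0))
```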
In 2024, a House of Lords committee published a report on large language models and generative AI, in which it flagged that these models pose challenges to establishing liability and accountability, in particular that businesses using AI may be ‘on the hook’ for risks that they cannot fully understand or mitigate since those risks emerge further up the supply chain during the training and development of the models. The Bank’s most recent survey of AI in UK financial services found that 46% of respondent firms reported having only ‘partial understanding’ of the AI technologies they use, because of the adoption of third-party models, which account for a third of all use cases.
However, the FCA has stated that it will use the senior managers and certification regime (SM&CR) to supervise the use of AI.
The FCA explained that, in the joint discussion paper on AI with the PRA, the regulators had explicitly sought feedback on whether there should be a dedicated senior manager responsible for AI within firms but that respondents had highlighted that existing firm governance structures, and regulatory frameworks such as the SM&CR, are sufficient to address AI risks.
In PRA-authorised SM&CR insurance firms, technology systems are normally under the responsibility of SMF24 (the chief operations function). Separately, the SMF4 (chief risk function) normally has responsibility for overall management of the risk controls of a firm, including the setting and managing of its risk exposures. These dual-regulated firms must also ensure that one or more of their senior management function (SMF) managers have overall responsibility for each of the activities, business areas, and management functions of the firm, to the extent that responsibility is not already covered by one of the other SMFs. The FCA concluded that “that means any use of AI in relation to an activity, business area, or management function of a firm would fall within the scope of a SMF manager’s responsibilities”.
Firms developing AI models internally or using off-the-shelf products should allocate responsibility for this to a senior manager and ensure that that manager has the qualifications and tools at their disposal to effectively supervise what is a highly technical area.
The Artificial Intelligence Public-Private Forum was set up by the FCA and PRA in 2020 to further the dialogue between the public sector, the private sector, and academia on AI. The Forum published its final report in 2022, in which it explained that “one of the defining aspects of some AI models, such as deep neural networks, is the lack of clear understanding of their inner workings i.e. the black-box problem”.
The Forum commented on how regulators might approach explainability: “There is a difference between regulating models themselves with a high-level of explainability versus treating them as black-boxes and regulating their inputs, outputs, and outcomes. Which approach is more appropriate depends on the context and materiality of the use-case and, potentially, the regulatory context. Having clear guidelines on appropriate degrees of explainability for specific use-cases could increase confidence when using the technology in financial services, but it could equally hamper desirable innovation. Where explainability is not possible, the ability to explain the safeguards that are put in place to protect against negative outcomes should be considered. Striking the right balance is a key consideration for regulators and policy makers, as well as firms and third-parties.”
The Forum also reasoned that if the focus is on customer experience, “explainability, while still important, becomes part of a much broader requirement on firms to communicate decisions in meaningful and actionable ways”. In particular, the Forum suggested that: “while there may be many design principles and guidelines within firms’ governance procedures, these are not usually communicated to end-users”. It added: “Communicating internal principles more effectively could be a useful adjunct to model explainability.” The IAIS, in a consultation on the supervision of AI, went further and recommended that insurers provide a clear breakdown of the factors that have influenced premium calculations.
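To illustrate the kind of per-factor breakdown the IAIS consultation points towards, the sketch below assumes a simple additive rating model; the factor names and coefficient values are invented. Real pricing models are rarely this transparent, which is precisely why explainability techniques matter for more complex models.

```python
# Hypothetical additive rating model with a per-factor premium breakdown.
BASE_PREMIUM = 250.0
COEFFICIENTS = {  # pounds added per unit of each (invented) rating factor
    "driver_age_under_25": 180.0,
    "annual_mileage_000s": 12.5,
    "prior_claims": 95.0,
}

def premium_breakdown(policy: dict) -> dict:
    """Return the contribution of each rating factor plus the base premium and total."""
    breakdown = {"base_premium": BASE_PREMIUM}
    for factor, coeff in COEFFICIENTS.items():
        breakdown[factor] = coeff * policy.get(factor, 0.0)
    breakdown["total"] = sum(breakdown.values())
    return breakdown

print(premium_breakdown({"driver_age_under_25": 1, "annual_mileage_000s": 8, "prior_claims": 0}))
# {'base_premium': 250.0, 'driver_age_under_25': 180.0, 'annual_mileage_000s': 100.0,
#  'prior_claims': 0.0, 'total': 530.0}
```

A breakdown of this kind gives a customer an actionable explanation of what drove their premium, consistent with the Forum's point that communicating decisions meaningfully can matter as much as explaining the model's inner workings.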
The FCA has not indicated a desire to draft regulations on explainability. It has referred to the cross-cutting obligation under the consumer duty to act in good faith and the consumer duty outcome on consumer understanding. Given this uncertainty, insurers should tread carefully when exploring the use of AI for autonomous decision-making.
The Data (Use and Access) Bill will, as drafted, instead relax some existing restrictions applicable to automated decision-making. Automated decision-making in most circumstances would be permitted as long as the organisation using the relevant AI or other technology implements safeguards, allowing individuals affected by those decisions to make representations, obtain meaningful human intervention, and to challenge decisions made by solely automated means. More restrictive rules are provided for in the Bill around the use of personal data for making “significant” automated decisions where highly sensitive data is processed.
The July 2024 King's Speech also signalled that the government plans to “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
Until detailed rules are in place, firms may wish to consider using FCA resources such as the AI Lab to test innovative uses of AI. The AI Lab will consist of the AI Spotlight, a space for firms and innovators to share real-world examples of how they are leveraging AI and to showcase emerging AI solutions; the AI Sprint, which will bring together experts to inform policy decisions; the AI Input Zone, where the FCA will receive feedback from stakeholders; and additional capabilities added to the FCA's Digital Sandbox.
Firms may also wish to turn to the Digital Regulation Cooperation Forum (DRCF) AI and Digital Hub, which has been operational since April 2024 and is a free information service spanning the regulatory remits of four UK authorities: the Competition and Markets Authority (CMA), Ofcom, the Information Commissioner's Office (ICO) and the FCA.
In addition, the Bank of England has established the Artificial Intelligence Consortium to provide a platform for public-private engagement to gather input from stakeholders on the capabilities, development, deployment and use of AI in UK financial services. The Consortium is another useful forum through which insurers can remain close to the thinking of regulators.
The FCA has flagged the significant role of data protection law in the deployment of AI, in particular the principle of fairness, which requires all processing of personal data to be fair and not lead to unfair outcomes, and the safeguards on automated decision making under article 22 of the UK GDPR, which provides data subjects with the right not to be subject to decisions based solely on automated processing, including profiling, which produce legal or similarly significant effects.
Firms looking to use AI should have a holistic compliance strategy since meeting the data protection requirements will also go a long way in meeting the various aspects of the consumer duty.
In 2024, the ICO provided an update on its strategic approach to the regulation of AI and it has consulted on the allocation of responsibilities under data protection law within the gen-AI supply chain, on the lawful basis for web scraping to train gen-AI models, on the purpose limitation in the generative AI lifecycle and on the accuracy of training data and model outputs.
Firms should also be mindful where collaboration with AI model builders results in them sharing customer data with those third parties for the purposes of model training. In an October 2024 judgment, the Court of Justice of the EU (CJEU) suggested that the claimant in the underlying case ought to have informed its members that their data would be shared with third parties for marketing purposes and given those members an opportunity to opt out of receiving the marketing materials. The ruling may serve as a precedent in the context of potential data sharing with AI developers for the purposes of training AI models.
In an opinion, the European Data Protection Board (EDPB) said that data “absorbed” in the parameters of a model would often constitute “personal data”, in particular where information relating to identified or identifiable individuals whose personal data was used to train the model may be obtained from the AI model with means reasonably likely to be used. The EDPB also said that both the likelihood of data about a person used in the training of the model being extracted from the model, and the likelihood of obtaining that data by running queries through the model, would need to be “insignificant for any data subject”.
In their joint discussion paper, the FCA and PRA flagged other AI-related risks, including risks to competition.
The CMA has published its views on the use of algorithms and a joint statement on the regulation of AI together with the European Commission, the US Federal Trade Commission and the US Department of Justice. The three overarching principles for protecting competition in the AI ecosystem set out in the joint statement – fair dealing, interoperability, and choice – mirror the core principles that already underpin competition rules for digital markets in the EU and UK.
Although AI is not itself listed as a “core platform service” in the EU’s Digital Markets Act, the European Commission considers that the Act can be used to regulate AI because “AI is covered where it is embedded in designated core platform services such as search engines, operating systems and social networking services”.
In the UK, the CMA has said that it will consider “AI and its deployment by firms” when deciding which technology firms to designate under the Digital Markets, Competition and Consumers Act 2024 (DMCCA).