Out-Law Analysis
25 Nov 2024, 4:01 pm
Financial services firms must reflect on the barriers and risks involved in committing their finite resources to rolling out AI technologies, as their appetite for those tools increases and the tools themselves become more sophisticated and subject to EU regulation.
Early uses of AI in financial services were largely confined to back-office operations, but there is now a marked push towards developing applications to assist with customer-facing tasks.
Among the early successes that have delivered efficiency and productivity gains for adopters, machine learning algorithms have proved particularly effective for fraud detection: AI applications can analyse large volumes of data and detect patterns and anomalies that may indicate fraudulent activity far faster than humans can process such data.
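By way of illustration only, the sketch below shows the kind of unsupervised anomaly-detection technique such systems can rest on, using scikit-learn's IsolationForest on synthetic transaction data. The features, data and thresholds are entirely hypothetical and are not drawn from any real fraud-detection system.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# All data and features here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features: transaction amount (EUR) and hour of day.
normal = np.column_stack([
    rng.normal(50, 15, 1000),   # typical daytime amounts
    rng.integers(8, 20, 1000),  # business hours
])
suspect = np.array([[950.0, 3], [1200.0, 4]])  # large, out-of-hours transfers

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(np.vstack([normal, suspect]))

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(suspect))  # expected: [-1 -1]
```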
The automation of routine tasks such as data entry, transaction processing and document verification is another area of early adoption, freeing employees to spend more time on complex and strategic activities. To date, automation has not led to the replacement of workers. Financial institutions have also harnessed AI for risk management, using AI tools to analyse and predict market trends, assess credit risks and identify potential investment opportunities. Many investment funds now use their own in-house algorithmic models to assist their stock-selection process.
Adoption of customer-facing AI has been slower, as it is significantly more exposed to regulatory considerations. In its financial stability review released in May 2024, the European Central Bank (ECB) noted that AI will unlock multiple new applications in customer-facing activities, in areas such as communication, onboarding and complaints management – where automated chatbots may have some utility, for example – as well as in customer segmentation and targeting and in advisory functions, where digital assistants or robo-advisers could be deployed. However, while these use cases may improve economic efficiency for both the institution and the customer, they could also lead to customer discrimination if the AI systems themselves are not robust, correctly validated and free from bias.
As well as being subject to the financial services regulatory framework in Ireland, the use of AI by Irish firms is, in general, regulated by the EU AI Act (the AI Act), which entered into force on 1 August 2024. However, not all obligations arising under the AI Act are currently in effect, as they will be phased in over the next three years.
The objective of the AI Act is to ensure safe AI systems that respect fundamental human rights, while also fostering innovation. The AI Act is intended to complement the existing body of EU law in place for financial institutions that ensures healthy financial markets and helps foster transparency, market integrity, investor protection and financial stability.
Under the AI Act, AI systems will be categorised into four levels depending on what they are used for: unacceptable risk, high risk, limited risk and minimal risk. Obligations will vary according to the category of the AI system, with most of the regulation focused on addressing ‘high-risk’ AI. Pinsent Masons has developed a guide to help businesses understand which AI systems will, and which will not, be regulated as ‘high-risk’ AI systems under the EU AI Act. Two high-risk use cases are relevant to the financial sector: AI systems used to evaluate a person’s creditworthiness, and AI systems used for risk assessment and pricing in relation to a person’s life and health insurance.
A targeted European Commission stakeholder consultation for the financial services sector closed on 13 September 2024. The aim of this consultation was to give the Commission an overview of how, and for which purposes, AI applications are used in the financial sector. Its output will assist the Commission in shaping future policy for the use of AI in the financial services sector.
Because AI is a relatively new area that is often misunderstood by those unfamiliar with its application, the allocation of budgetary resources for research, innovation and the implementation of AI projects is viewed as a particular obstacle to the more widespread adoption of AI within the financial services sector.
Identifying the appropriate department within a financial services firm to implement a new AI project is another challenge. AI pilots can mistakenly be treated as IT rollouts, whereas placing responsibility within the relevant business unit may better embed their use. Any skills gap in AI may also slow adoption of AI technologies: without proficient AI skills and an adequate level of general data literacy, financial services firms may struggle to leverage AI technologies effectively, leading to inefficiencies and reduced competitiveness.
AI models are far more complex than traditional models for parsing data, and it is very difficult for humans to comprehend and reconstruct the predictions and decisions that AI models make. This gives rise to the so-called “black box” problem, a particular concern in the heavily regulated financial services sector, where entities are expected to be able to explain their decision-making processes to auditors and regulators.
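One mitigation sometimes discussed for the black-box problem is to fit a simple, human-readable surrogate model that approximates a complex model’s decisions, so that its behaviour can be summarised for reviewers. The sketch below is a minimal illustration of that global-surrogate idea using hypothetical data; it is not a substitute for a full model-validation framework.

```python
# Minimal "global surrogate" sketch: approximate an opaque model with a
# shallow decision tree whose rules can be printed and reviewed.
# The dataset and model choices are hypothetical and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Stand-in for a complex, hard-to-explain production model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train a shallow tree to mimic the black box's predictions, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's decision rules are readable and can be shared with reviewers.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```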
Not only may AI models contain certain biases, but the use of generative AI may also lead to “hallucinations”, whereby false or misleading information is presented as fact. Many AI models are only as good as the data used to train them: where a bias exists in the input data, the output of the system may be unreliable. This increases the operational and reputational risk exposure of those relying on poor models and of those lacking the data literacy skills to identify the poor outputs of such models.
The benefits and risks of using AI depend entirely on the specific problem or opportunity that has been identified, and the use of AI may not be appropriate in all cases. Firms should give thought to which business challenges may be addressed through the use of AI and why AI is considered an appropriate response to a particular challenge. It is important that firms understand the technology they are using.
With the AI Act now in force, it is important for those in the financial services sector to properly document the AI applications used by the business and the purposes behind their use. Appropriate policies for the use of AI applications should also be put in place. With the Digital Operational Resilience Act due to take effect on 17 January 2025, financial services firms must be mindful, now more than ever, of the risks involved in not having appropriate processes and procedures in place for their use of technological applications.
Pinsent Masons is hosting an event in Dublin on 27 November 2024 exploring the impact of AI on the financial services, technology and energy sectors in Ireland.