A financial service provider may find that the AI design involves different trade-offs, for example between explainability and statistical accuracy. While a regulator may accept that some AI systems, such as those based on deep learning, can make it hard to follow the logic of the system, they may take the view that the circumstances in which these competing interests cannot be reconciled are limited.
Providers will, in these circumstances, need to balance the extent to which an AI system is explainable against concerns around accuracy. Similar risk decisions will need to be made around other trade-offs, such as between accuracy and privacy, and between explainability and security.
An appropriate audit trail at the design stage, evidencing the decisions made in respect of trade-offs, will provide greater assurance of compliance with regulators' expectations. Where a bespoke solution is being developed, it may be appropriate to include contractual requirements that the financial service provider be regularly consulted on these design decisions and have the ability to input into them.
Auditability prior to deployment
As part of assessing whether the AI system is ready for deployment, financial service providers should satisfy themselves of the types of audit and assurance that the system has gone through. The World Economic Forum guidelines for AI procurement recommend using "a process log that gathers the data across the modelling, training, testing, verifying and implementation phases of the project life cycle". A supplier can therefore be required to demonstrate the auditability of a system before it is submitted for testing, and to provide copies of reports showing the outcomes of all testing and evaluation it has conducted.
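The kind of process log the WEF guidelines describe could be structured along the following lines. This is a minimal sketch only, assuming a simple append-only record keyed to the five life-cycle phases named in the guidelines; all class and field names are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Life-cycle phases named in the WEF AI procurement guidelines.
PHASES = ("modelling", "training", "testing", "verifying", "implementation")

@dataclass
class ProcessLogEntry:
    phase: str      # one of PHASES
    actor: str      # who performed the step
    summary: str    # what was done and why
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ProcessLog:
    """Append-only audit record across the project life cycle (illustrative)."""

    def __init__(self) -> None:
        self.entries: list[ProcessLogEntry] = []

    def record(self, phase: str, actor: str, summary: str) -> None:
        if phase not in PHASES:
            raise ValueError(f"unknown phase: {phase}")
        self.entries.append(ProcessLogEntry(phase, actor, summary))

    def phases_covered(self) -> set[str]:
        # Simple completeness check: which phases have at least one entry.
        return {e.phase for e in self.entries}

log = ProcessLog()
log.record("training", "data-science team", "trained model v1 on Q3 dataset")
log.record("testing", "QA team", "ran accuracy and bias test suite")
print(sorted(log.phases_covered()))  # phases still lacking entries can then be flagged
```

A provider reviewing a supplier's log could check `phases_covered()` against `PHASES` to confirm that every stage of the life cycle is evidenced before accepting the system for its own testing.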
Financial service providers should also consider conducting their own tests of the AI system to ensure it is suitable prior to deployment. As part of this testing, consideration should be given to whether the system will continue to be auditable over its life.
The financial service provider will also need to consider whether it has the skills required to properly assess any testing of the AI system. In some circumstances it may be appropriate to recruit specialist skills, upskill existing personnel, or engage an independent expert to help evaluate and test the system.
Training and knowledge transfer are critical for financial service providers, to ensure they can properly understand and use the AI system, and properly discharge their legal responsibility for it once deployed. This should be reflected in the contractual requirements.
Auditability during use
Financial service providers need the continued ability to audit the AI system once it has been deployed. Audit rights obtained at the contractual stage should enable effective scrutiny of the AI system itself, which may include reviewing its underlying algorithms. It may not always be possible to access the model code, particularly where it is commercially sensitive, but sufficient information to shed light on the relationship between the model, its input data and its outputs, showing how the model is working, will likely be required.
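Even without access to the model code, the relationship between inputs and outputs can be probed by treating the system as a black box: perturb one input at a time through the model's prediction interface and observe how the output moves. The sketch below illustrates this under stated assumptions; the `model` function is a hypothetical stand-in for the supplier's system, and the feature names are invented for illustration.

```python
# Black-box sensitivity probing: no access to model internals is assumed,
# only the ability to call the model on chosen inputs.

def model(features):
    # Hypothetical opaque scoring function (illustrative assumption only);
    # in practice this would be a call to the supplier's prediction API.
    income, debt, age = features
    return 0.7 * income - 0.5 * debt + 0.01 * age

def sensitivity(model, baseline, index, delta=1.0):
    """Increase one input by `delta` and return the change in model output."""
    perturbed = list(baseline)
    perturbed[index] += delta
    return model(perturbed) - model(baseline)

baseline = [50.0, 20.0, 40.0]
for i, name in enumerate(["income", "debt", "age"]):
    print(name, round(sensitivity(model, baseline, i), 3))
```

Run across a representative sample of inputs rather than a single baseline, this kind of probing can form part of the evidence that the model is working as described, without requiring disclosure of commercially sensitive code.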