
Financial services firms are to receive help from the UK's Centre for Data Ethics and Innovation (CDEI) on how to address bias stemming from the use of algorithms to inform decision-making about consumers.

The CDEI confirmed the plans in a new interim report into bias in algorithmic decision-making. The report highlighted that bias in decision-making is inevitable and that it is incumbent on organisations to recognise this and take steps to mitigate it.

According to the CDEI, data itself can often be the source of bias. It said there is a risk that data held by financial services institutions and fed into artificial intelligence (AI) and algorithmic systems may reflect "embedded historical biases". The challenge of identifying where bias originated grows as algorithms become more complex and independent, it said.
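
By way of illustration, the short sketch below shows how a model fitted to historically biased approval decisions can reproduce that bias even when no protected attribute is supplied as an input. The dataset, feature names and figures are entirely hypothetical, a minimal sketch of the mechanism rather than any institution's actual system.

```python
# Purely hypothetical sketch: a lending model fitted to historical approval
# labels that were themselves biased. The model reproduces the bias even
# though the protected attribute is never given to it as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                   # protected attribute
income_k = rng.normal(35 + 5 * group, 8, n)     # income, GBP thousands
postcode_score = group + rng.normal(0, 0.5, n)  # feature correlated with group

# Historical approvals: partly merit-based, partly biased against group 0.
approved = (income_k / 10 + 1.5 * group + rng.normal(0, 1, n)) > 5

X = np.column_stack([income_k, postcode_score])  # group itself is excluded
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval rate {approved[group == g].mean():.2f}, "
          f"model approval rate {pred[group == g].mean():.2f}")
```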

The CDEI said that "effective human accountability for the use and performance of algorithmic tools" is "critical" regardless of the context in which those tools are used. Its report also outlined specific steps it will take to mitigate the risk of bias in algorithmic decision-making in financial services.

"We are carrying out structured interviews with key stakeholders in financial services to identify the main barriers faced in identifying and mitigating bias," the CDEI said. "We then plan to conduct a survey of algorithmic bias identification tools currently available and assess the strengths and weaknesses of these approaches to begin to establish best practice standards. We will also consider how tools may need to develop to deal with the application of emerging data- driven technology, including the use of machine learning algorithms. Finally, we will work with stakeholders to identify potential governance arrangements to oversee the mitigation of bias across the financial services sector."

The CDEI recognised that the financial services sector is currently "exploring the disruptive change that will come from incorporating larger datasets, new sources of data (such as data from social media profiles), and more sophisticated machine learning into its decision-making processes".

Firms' use of more data and better algorithms could "yield better risk prediction and mean fewer people are denied loans because of inaccurate credit scoring", and could further "enable population groups who have historically found it difficult to access credit (because of a paucity of data about them from traditional sources) to gain better access in future", it said. The CDEI warned, though, that using more and more complex algorithms "increases the potential for the introduction of indirect bias via proxy as well as the ability to detect and mitigate it".
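
The "proxy" problem is straightforward to state in code. In the hypothetical sketch below, the protected attribute is never supplied to any model, yet a correlated feature, here an invented postcode-derived score, would allow it to be reconstructed; all names and figures are illustrative assumptions.

```python
# Purely hypothetical sketch of indirect bias via proxy: the protected
# attribute is withheld, but a correlated feature largely encodes it.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
group = rng.integers(0, 2, n)                   # protected attribute (withheld)
postcode_score = group + rng.normal(0, 0.5, n)  # proxy correlated with group

# How well does the proxy alone recover group membership?
recovered = postcode_score > 0.5
print(f"group recoverable from proxy alone: {(recovered == (group == 1)).mean():.0%}")
```

Dropping the protected attribute therefore does not remove the bias: the proxy carries it, which is part of what makes bias harder to detect as models become more complex.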

Looking beyond financial services, the CDEI's interim report said that "new approaches to identifying and mitigating bias are required". It also pointed out that there is limited guidance and a lack of consensus on how to decide on mitigating bias, or even on how to have constructive and open conversations about doing so.

The report further explored the concept of 'fairness' in algorithmic decision-making, and said developers should have the opportunity to test algorithms against standard datasets or to benchmark performance against industry standards.
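
What such testing might look like in practice can be sketched briefly. The two checks below, demographic parity and equal opportunity gaps, are common measures in the fairness literature; the function names and toy data are our own assumptions, and what counts as an acceptable gap is a policy choice rather than a settled standard.

```python
# Illustrative sketch of two common fairness checks a developer might run
# against a benchmark dataset. Function names and toy data are hypothetical.
import numpy as np

def demographic_parity_gap(pred, group):
    """Absolute difference in positive-decision rates between two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def equal_opportunity_gap(pred, label, group):
    """Absolute difference in true-positive rates between two groups."""
    tpr = [pred[(group == g) & (label == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

# Toy stand-in for a benchmark dataset of decisions and outcomes.
rng = np.random.default_rng(2)
group = rng.integers(0, 2, 1_000)
label = rng.integers(0, 2, 1_000)
pred = (rng.random(1_000) < 0.4 + 0.2 * group).astype(int)  # biased decisions

print(f"demographic parity gap: {demographic_parity_gap(pred, group):.2f}")
print(f"equal opportunity gap:  {equal_opportunity_gap(pred, label, group):.2f}")
```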

Approaches to evaluating algorithmic decision-making are, though, starting to be proposed within organisations and in the academic literature, with both commercial solutions and in-house tools being developed, the CDEI said.

Research undertaken by Pinsent Masons, the law firm behind Out-Law, in partnership with Innovate Finance found that consumers are ready to embrace the use of AI in financial services, but that businesses need to make sure that they have implemented ethical processes to give their customers comfort. 

Christopher Woolard, the Financial Conduct Authority's executive director of strategy and competition, recently confirmed that the regulator is to partner with the UK's Alan Turing Institute with the aim of providing financial services firms with greater clarity over the extent to which they need to explain to consumers how their AI tools work.

"Thought also needs to be given as to what is expected of financial institutions in terms of the level of oversight they are required to have in place when using third parties that develop and implement AI systems on their behalf," said financial services and technology law expert Luke Scanlon of Pinsent Masons. "Standardisation bodies have begun work on establishing common AI terminologies, however not much has been finalised around standards. Progressing work in this area is necessary so that both technology providers and financial institutions can get a clearer picture of what will be required of them when looking to address concerns around bias."

"It is important that this work is undertaken as the European Commission expert group and others are suggesting that auditing arrangements need to be put in place as part of ethical AI frameworks – but if there are no expectations as to how to conduct an audit of machine learning and other technologies, not much progress will be made in helping institutions put effective protections in place against unwanted biases," he said.
