Out-Law Analysis

How AI can deliver cheaper, more effective compliance as regulation proliferates


Artificial intelligence (AI) is helping companies get their arms around the fast-growing range of regulation they face and is becoming an essential way for them to make their compliance more consistent, more efficient and more effective.

Regulation governs more aspects of an organisation’s activity than ever before, relating to health and safety, data protection, cyber security, bribery, fraud and consumer protection.

It is essential that organisations reduce their risk in these areas, catalogue their efforts in case of a breach, and co-ordinate activity between many different functions and their compliance department. But that is becoming too big a job to do manually with any kind of efficiency.

Large language models (LLMs), the powerful machine learning systems at the heart of generative AI, can help with the process of gathering and collating the information that business units must provide to identify and quantify risk as a precursor to mitigating it.

In most organisations this is currently a highly manual process: lots of forms with free-text fields are seen and handled by several people, all of whom are required to read and amend that text. What the text says is entirely up to those people, so standardisation is rare and the reliability of the information variable.

The first job AI can help with is reviewing the tools used to gather information. It can help hone questions, surveys and forms so that they capture more standardised information and contain fewer free-text fields, making the output more consistent.
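
By way of illustration, the sketch below shows in Python what a standardised form record might look like once free text has been replaced by constrained choices; the category names and fields are invented for the example, not drawn from any particular product:

```python
from dataclasses import dataclass
from enum import Enum

# Invented, standardised categories replacing a free-text "describe the risk" field.
class HazardCategory(Enum):
    WORKING_AT_HEIGHT = "working at height"
    MANUAL_HANDLING = "manual handling"
    HAZARDOUS_SUBSTANCES = "hazardous substances"

@dataclass
class RiskReport:
    site_id: str
    category: HazardCategory  # a constrained choice, so reports are comparable
    severity: int             # e.g. 1 (low) to 5 (high)
    notes: str = ""           # free text survives, but is no longer the primary record

# Two reports of the same hazard now carry an identical, machine-readable category,
# where two free-text descriptions of the same hazard would rarely match.
r1 = RiskReport("site-A", HazardCategory.WORKING_AT_HEIGHT, severity=4)
r2 = RiskReport("site-B", HazardCategory.WORKING_AT_HEIGHT, severity=3)
assert r1.category == r2.category
```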

This in itself is a major bonus because that output, once standardised, is far easier for AI systems themselves to organise, cross-reference and interrogate. AI will be able to see patterns in the data that teams of humans reading long sentences couldn’t, and in time may be able to cross-reference risk in one area against risk in another.

This work also makes information gathering less labour-intensive, as what the system seeks from staff is their judgement rather than hours of their labour repeating written information.

What this looks like will differ by subject area, but it could include the categorisation of risk. So instead of asking someone to describe a health and safety risk, it might ask them to identify its broad category, then lead them down increasingly specific categorisations to a single, standard term that describes the risk. This term will itself be understood by the machine as being connected to many other risks, which will guide its information gathering further.

For example, instead of asking someone to describe a process in which a person is working at height on a construction site, it might lead them to a specific term related to falls from height. This might then connect to questions about insurance, the apportionment of liability through subcontractors, and even to weather information about dangerous winds or a training database where staff skills and certifications are recorded.
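
A minimal sketch of that drill-down, assuming an invented taxonomy and invented related-topic names purely for illustration, might look like this:

```python
# The taxonomy and related-topic names below are invented for illustration.
TAXONOMY = {
    "health and safety": {
        "working at height": {
            "fall from scaffold": None,
            "fall from ladder": None,
        },
        "manual handling": {},  # further sub-categories elided
    },
}

# Each standard term links onward to the topics the system should ask about next.
RELATED_TOPICS = {
    "fall from scaffold": [
        "insurance cover",
        "subcontractor liability",
        "wind speed limits",
        "staff certifications",
    ],
}

def drill_down(path):
    """Walk the taxonomy from a broad category down to a standard term."""
    node = TAXONOMY
    for step in path:
        node = node[step]  # raises KeyError if the path strays off the taxonomy
    return path[-1]

term = drill_down(["health and safety", "working at height", "fall from scaffold"])
print(term, "->", RELATED_TOPICS[term])
```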

Up to this point, the AI functions almost like a seconded member of the compliance team, working with colleagues who handle data, financial information or IT systems to ensure risks are identified, recorded, understood and mitigated.

But making use of that standardised data once collected is equally important, and here AI can operate as a kind of overseeing intelligence across the whole organisation’s risks.

If it has a codified, standardised record of all compliance activity, it will be able to work backwards whenever a risk turns into a liability, characterising the compliance activity that preceded that negative event and comparing it with the compliance activity in cases where risks were not realised.
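
A simple sketch of that retrospective comparison, assuming an invented record format, could look like this:

```python
from collections import Counter

# Invented record format: each entry lists the standardised compliance
# activities logged before a risk either was or was not realised.
records = [
    {"activities": ["training logged", "inspection done"], "incident": False},
    {"activities": ["inspection done"], "incident": True},
    {"activities": ["training logged", "inspection done"], "incident": False},
    {"activities": [], "incident": True},
]

def activity_profile(incident: bool) -> Counter:
    """Count which activities preceded outcomes of the given kind."""
    profile = Counter()
    for record in records:
        if record["incident"] == incident:
            profile.update(record["activities"])
    return profile

# Comparing the two profiles hints at which activities are associated with
# risks that were never realised.
print("before incidents:    ", activity_profile(True))
print("before non-incidents:", activity_profile(False))
```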

Organisations can then constantly amend and improve these AI-enabled processes to increase the quality of their compliance programmes.

AI will also ultimately be able to cross-reference kinds of risk, perhaps understanding that some data protection risks are closely tied to cyber security ones, or that bribery and corporate fraud risks have links at a systemic level that would have been invisible to humans processing forms.
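
One simple way such links might surface is by counting how often standardised risk categories occur together in the same incident; the sketch below uses invented tags purely for illustration:

```python
from collections import Counter
from itertools import combinations

# Invented example: incidents tagged with the standardised risk categories involved.
incidents = [
    {"data protection", "cyber security"},
    {"cyber security"},
    {"data protection", "cyber security"},
    {"bribery", "corporate fraud"},
]

# Count how often each pair of risk categories appears in the same incident.
pair_counts = Counter()
for tags in incidents:
    pair_counts.update(combinations(sorted(tags), 2))

# The most frequent pairs point to possible systemic links between risk areas.
print(pair_counts.most_common(2))
```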

No AI system can run all of this without human intervention. The benefit is that the AI can perform many of the lower-level tasks and identify the areas that do need the expertise of a human compliance professional.

The best way to implement a system like this is to start small. Pick a discrete area of operation where you think the quality of the underlying data is good, invest the resource in making the compliance system more robust, ensure its inputs and outputs are standardised, and seek the help of the system in analysing outcomes.

Data protection is one good area in which to start: it is relatively independent of other functions, and the information your organisation already holds is likely to be accurate.

Data quality may be one of the biggest barriers to the effective creation of these systems. An AI system trained on poor-quality data, whether inconsistent, factually wrong or incomplete, will compound errors faster than humans can fix them.

But investing resource now in a high-quality system to both gather and process compliance data will pay off in increased efficiency and effectiveness long into the future.
