The Information Commissioner’s Office (ICO) has published its findings from a series of audits examining the use of AI in recruitment, looking at how personal data is handled by AI tools used for sourcing, screening, and selection. While the ICO acknowledges that AI has the potential to streamline recruitment and even reduce unconscious bias in screening, it has identified what it describes as “considerable areas for improvement” in current practices. We’ll take a closer look and speak to a data protection expert about implementing AI tools in the recruitment process.
The ICO’s 50-page report identifies what it regards as the key issues and provides examples of good practice. It makes nearly 300 recommendations to the audited organisations, all of which have been accepted, reflecting both the seriousness of the concerns raised by the regulator and the willingness of organisations to address them.
Data protection expert Steph Paton comments on the findings in her article for Out-Law, pointing out how the sheer number of recommendations made by the ICO demonstrates the complexity of legal compliance for employers using AI tools in recruitment. She says: “Employers should understand that an AI solution is not a quick efficiency fix. However, if compliance work is put in at the outset with a trusted AI provider, with good ongoing monitoring, the efficiencies and other benefits of AI can be reaped in a properly regulated environment. This is what the ICO wants to encourage.”
There was strong agreement among participants in the project that the ICO’s audit and recommendation process improved their understanding of data protection requirements. On that point, Steph says: “It seems unlikely that project participants are alone in conceding that their understanding of AI and data protection needed to be improved. Employers looking at this report may also want to be reflective on their current understanding. Carrying out a similar internal audit process may be helpful to assess not only legal risks, but whether tools are working in the best way for the business.”
The report is 50 pages long, but we’ve been through it and identified the three most important action points, which align with the ICO’s recommendations. They are:
1. Conduct data protection impact assessments (DPIAs);
2. Establish a clear human oversight mechanism; and
3. Communicate your use of AI clearly.
So, let's hear more about each one. Earlier I spoke to Steph Paton, who joined me by phone from Edinburgh:
Stephanie Paton: “DPIAs are essentially a tool for identifying risks—like data privacy breaches or the risk of bias, which is often hard to spot. It’s then a case of putting the right measures in place to address them. What we say to clients is HR teams should treat DPIAs as a continuous process, not a one-time task. That means conducting them before implementing any AI tool and revisiting them regularly, particularly after updates to the AI system. So, we’re saying to clients: start by mapping out how your AI tool uses personal data at every stage – whether it’s collecting, processing, or storing candidate information – then assess where things could go wrong, such as over-collection of data or unintended bias. Importantly, we’re telling clients to document every step of the DPIA process. This isn’t just about legal compliance; it’s about showing candidates and stakeholders that you’re taking privacy and fairness seriously.”
Joe Glavina: “The ICO stresses time and again how important it is to have human oversight when using AI. Why is that?”
Stephanie Paton: “Human oversight is absolutely essential when using AI in recruitment. AI can be incredibly efficient, but it’s not perfect—it can make mistakes or replicate biases that are baked into its training data. That’s why you need robust processes in place to review AI outputs before any decisions are finalised.”
Joe Glavina: “Can you give me an example, Steph?”
Stephanie Paton: “So, take recruitment, which is where AI is having the biggest impact at the moment. Imagine you’ve got an AI tool that generates a shortlist of candidates; you’re going to need someone to review that list to ensure it’s fair and accurate. This isn’t just about spotting errors, it’s also about bringing human judgment into decisions that require context, like understanding a candidate’s unique experience or potential. That’s not only legally safer, it’s also the best way to demonstrate to candidates that they can trust your hiring process. You don’t want them feeling they’ve been rejected by a machine without a human ever looking at their application. Quite rightly, they’d think that was unfair, and it wouldn’t reflect well on the client, who has a reputation to protect.”
Joe Glavina: “The ICO places great emphasis on transparency when it comes to using AI in recruitment. How can HR help with that?”
Stephanie Paton: “Transparency is key to this. It’s all part and parcel of building trust in your recruitment processes, and whenever you introduce AI to help manage the process you’re introducing something that’s inherently difficult to understand, hence the need to be absolutely clear about what’s going on and why. Candidates deserve to know if their application is being assessed by an AI system, how it works, and what data it’s using. Without that clarity, there’s a risk of alienating candidates or even facing legal challenges under data protection laws. So what we’re saying to HR teams is: be proactive and openly explain AI’s role at every stage of the recruitment process. For example, after a candidate submits their CV you could respond by sending them an email which acknowledges receipt but goes on to say that you will be using AI to help identify candidates whose experience and skills most closely match the requirements of the role. Crucially, you explain in clear terms that someone – an HR manager or trained line manager – will review all shortlisted applications before any decisions are made. It’s also a good idea to add that if the candidate has any concerns about the process they are welcome to contact you. That’s what I mean by being transparent and open.”
Joe Glavina: “What if the candidate wants to opt out completely?”
Stephanie Paton: “It’s a good point. If someone isn’t comfortable with AI, they should be able to opt out and have their application reviewed manually, and that’s something else I’d make clear to them from the outset. So not only are you giving them the option to challenge an AI decision, you’re going further by letting them opt out completely and request a human review of their application. That’s important because not only does it help minimise the risk of a challenge to your decision, it also demonstrates that you value a fair and open recruitment process, which is the sort of message you want to be sending out.”
If you would like to read Steph’s article commenting in detail on the ICO’s report and recommendations, you can. It’s called ‘ICO makes data protection recommendations on AI recruitment tools’ and is available from the Out-Law website. We’ve put a link to it in the transcript of this programme for you, along with a link to the report itself.
- Link to ICO report: ‘AI tools in recruitment: Audit Outcomes Report’
- Link to Out-Law article: ‘ICO makes data protection recommendations on AI recruitment tools’