Using AI in the recruitment process
Actions to take, measures and safeguards to have in place
AI tools offer dynamic, efficient solutions for streamlining recruitment processes. AI tools can speedily source potential candidates, summarise their applications and CVs and score their suitability for the role. They can also be used to assess candidates’ skills and suitability through behavioural games or psychometric assessments. But these processes must be fair and lawful, and comply with data protection law.
Key risks
There’s the potential for bias, such as reinforcing social inequalities. Inaccurate outputs can produce unfair outcomes, with people wrongly excluded from potential jobs. So-called ‘invisible processing’ may be taking place if candidates are unaware of the use of AI and how decisions are made. And the potential negative impacts don’t stop there.
Regulator calls for action
The Information Commissioner’s Office (ICO) recognises the benefits AI can bring, but is urging those using AI in the recruitment process to make sure the right protections are in place.
Over the past year the ICO has run public focus groups to understand participants’ concerns and engaged with thirty employers to delve into how they’re using AI for recruitment. The regulator found employers are likely to be relying on automated decisions as part of the recruitment process with no meaningful human involvement, and that the decisions these systems take can have legal or similarly significant effects on people. Such decisions fall within the scope of solely automated decision-making under UK GDPR.
Data protection legislation has recently been updated to somewhat relax the rules on using automation to make decisions without human involvement. However, the ICO is stressing proper safeguards must be in place to take advantage of these changes.
Following its engagement with employers, the ICO has published a report in which it sets out the regulatory expectations for any organisation using automated-decision making (ADM) for hiring decisions. Any text in italics below represents the ICO’s words.
The ICO says its key findings suggest candidates are not being given enough transparency about the use of ADM, that meaningful human involvement may not be applied consistently, and that more effort is required to monitor systems to ensure fairness and prevent bias.
How the law has changed
Previously, UK GDPR (Article 22) prohibited the use of ADM and set out three exceptions under which this prohibition didn’t apply:
■ When the decision was necessary for a contract.
■ When the decision was required or authorised by law.
■ When the decision was based on the person’s explicit consent.
Organisations were required to take suitable measures to safeguard people’s rights and freedoms, and at least give people the opportunity to exercise their right to obtain human intervention and to contest an automated decision. Additional restrictions on using special category data also applied.
The Data (Use and Access) Act 2025 has now introduced amendments to Article 22. These reshape the law, removing the prohibition with limited exceptions and introducing instead a requirement to have safeguards in place. The ICO says: This means organisations can make solely automated decisions with legal or similarly significant effects on almost any of the lawful bases in article 6. But for each decision, they must apply the ADM safeguards in article 22C.
It’s important to note there are still stricter rules when using special category data to carry out ADM.
These amendments make things more straightforward for organisations, which can now potentially rely on legitimate interests to carry out ADM – subject to meeting the requirements of the legitimate interests three-part test. But the ICO stresses: If firms wish to take advantage of the new laws, proper safeguards must be in place to protect jobseekers’ data protection rights and ensure any automated decisions are transparent, fair and easy to understand.
Action organisations need to take
At a top level, the ICO expects organisations wishing to use ADM in the recruitment process to address the following:
1. Fairness, bias and discrimination
Proactively monitor for bias. This means regularly testing outputs, monitoring them on an ongoing basis and taking steps to mitigate any bias found. When procuring AI tools, this extends to asking developers about their own testing.
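As a minimal sketch of what ongoing output monitoring could look like in practice, the snippet below compares selection rates across candidate groups and flags large disparities. The grouping labels, the data shape and the use of a selection-rate ratio are illustrative assumptions, not ICO-prescribed methodology – the appropriate fairness metric and thresholds should come from your own DPIA and legal advice.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio well below 1.0 can flag a disparity worth investigating."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical outcomes from one screening round: (group, was_selected)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(outcomes)   # {'A': 0.75, 'B': 0.25}
ratios = adverse_impact_ratios(rates)
print(ratios)                       # group B's rate is a third of group A's
```

Run over each batch of AI-scored decisions, a check like this gives a simple, auditable signal to trigger human review of the tool’s behaviour.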
2. Transparency and safeguards
It must be clear to candidates that ADM is being used and how it works. Candidates must be told how they can challenge a decision and request human review if they think it is inaccurate.
3. Risk assessment
A Data Protection Impact Assessment (DPIA) is mandatory under UK GDPR where processing includes ADM. The ICO found employers were either not completing a DPIA before implementing AI tools, or that ‘completed’ DPIAs did not cover the minimum requirements.
4. Lawful basis
Each purpose for processing personal data requires a lawful basis under UK GDPR. The ICO’s findings showed these lawful bases were often unclear in employers’ privacy notices, DPIAs and Records of Processing Activities (RoPA) for automated recruitment, and/or it was not clear at which stage of the process a certain lawful basis was applied. It found the lawful bases of consent and performance of a contract were being used inappropriately.
5. Meaningful human involvement
The ICO says its findings suggest an inconsistent approach is being taken to human involvement. For example, some candidates have their score, profile and responses weighed up by a hiring manager. But others may be rejected at the click of a button, based solely on a low AI-generated score. The ICO is clear that under the new amendments to the law, human involvement in a decision must be active and not just a token gesture or ‘rubber stamp’.
Other key considerations
Along with the above, organisations should take account of other key data protection principles and obligations, for example:
■ Data minimisation
One of the core data protection principles is data minimisation. We should only collect and use personal information which is necessary for our purposes. A previous ICO audit of AI providers found some AI tools collected far more personal information than necessary and retained it indefinitely to build large databases of potential candidates without their knowledge. What might make perfect sense to the programmers creating such technology might not be compliant with data protection law!
■ Data Retention
The ‘storage limitation’ principle means not keeping personal data longer than necessary for the purpose(s) it was collected for. It’s therefore important to assess how long the data inputted into and generated from AI tools will be kept for. Information about retention periods should be provided in relevant privacy information provided to job applicants (e.g. in an Applicant Privacy Notice provided on your AI platform). When using AI providers, retention periods should be covered in contracts with action the provider must take at the end of the retention period.
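To make the storage limitation point concrete, here is a minimal sketch of a periodic retention check that flags candidate records held beyond a defined period. The 180-day figure, the `collected_at` field name and the record structure are all hypothetical – your retention period must come from your own policy and the purposes documented in your privacy information.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention period; set this from your documented retention policy.
RETENTION = timedelta(days=180)

def records_past_retention(records, now=None):
    """Return records whose 'collected_at' timestamp exceeds the retention period."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] > RETENTION]

# Hypothetical candidate records with collection timestamps
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "collected_at": datetime(2024, 9, 1, tzinfo=timezone.utc)},  # ~273 days old
    {"id": 2, "collected_at": datetime(2025, 4, 1, tzinfo=timezone.utc)},  # ~61 days old
]
overdue = records_past_retention(records, now=now)
print([r["id"] for r in overdue])  # only record 1 is past retention
```

The point of a check like this is that deletion (or the action agreed with your AI provider) becomes a scheduled, evidenced process rather than something left to chance.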
■ Processor, controller or joint controller?
When using a third-party AI provider, both parties have responsibility for data protection compliance. It should be clear who is the controller or processor of the personal information involved. If the provider is acting as a processor and your organisation is the controller, you should be able to determine the purpose and means of the processing (including how AI is used) and tailor it to your requirements. If not, the AI provider is likely to be acting as a controller or joint controller.
■ Information security and integrity
Alongside internal security controls and standards, if organisations are using third-party AI-enabled tools within their recruitment process, the procurement process should involve meaningful due diligence. This means asking the provider for evidence that appropriate technical and organisational controls are in place. These controls should also be documented in the contract, and regular compliance checks should be undertaken while the contract is in place to make sure they remain effective.
In summary, the ICO appreciates the benefits of AI and doesn’t want to stand in the way of those seeking to use AI-driven solutions. It does, however, ask organisations to consider the technology’s compatibility with data protection law.
AI is a complex area for many and it’s easy to see how unintended misuse of personal data, or unfairness or bias in candidate selection could ‘slip through the cracks’. Organisations can avoid problems later down the line by addressing these in the planning stage when considering the use of AI within their recruitment process.
Fairness and respect for candidate privacy are central principles of HR best practice, as well as necessary for data protection compliance. Including your data protection team in the planning stage can help to mitigate, and possibly eliminate, some risks – a win-win that would leave organisations more confident in reaping the benefits AI offers.