Using AI tools for recruitment

November 2024

How to comply with GDPR

AI tools offer dynamic, efficient solutions for streamlining recruitment processes. AI is capable of speedily identifying and sourcing potential candidates, summarising their CVs and scoring their suitability for the role.

What’s not to like?

Nonetheless, these processes must be fair and lawful. Is there a potential for bias and/or inaccurate outputs? How else will AI providers use jobseekers’ personal details? What data protection compliance considerations are baked into the AI’s architecture?

The Information Commissioner’s Office (ICO) is calling on AI providers and recruiters to do more to make sure AI tools don’t adversely affect applicants. People could be unfairly excluded from potential jobs and/or have their privacy compromised. Why undo the good work HR professionals do to satisfy legal requirements and best practice by using questionable technology?

The ICO recently ran a consensual audit of several developers and providers of AI recruitment tools. Some of the findings included:

Excessive personal data being collected
Data being used for incompatible purposes
A lack of transparency for jobseekers about how AI uses their details

The AI Tools in Recruitment Audit Report provides several hundred recommendations. The unambiguous message is that using AI in the recruitment process shouldn’t be taken lightly. Of course, this doesn’t mean recruiters shouldn’t embrace new technologies, but it does mean sensible checks and balances are required. Here’s a summary of key ICO recommendations, with some additional information and thoughts.

10 key steps for recruiters looking to engage AI providers

1. Data Protection Impact Assessment (DPIA)

DPIAs are mandatory under GDPR where a type of processing is likely to result in high risk. The ICO says ‘processing involving the use of innovative technologies, or the novel application of existing technologies (including AI)’ is an example of processing they would consider likely to result in a high risk.

Using AI tools for recruitment purposes squarely meets these criteria. A DPIA will help you to better understand, address and mitigate any potential privacy risks or harms to people. It should help you to ask the right questions of the AI provider. It’s likely your DPIA will need to be agile: revisited and updated as the processing and its potential impacts evolve.

ICO DPIA recommendations for recruiters:

Complete a DPIA before commencing processing that is likely to result in a high risk to people’s rights and freedoms, such as procuring an AI recruitment tool or other innovative technology.
Ensure DPIAs are comprehensive and detailed, including:
– the scope and purpose of the processing;
– a clear explanation of relationships and data flows between each party;
– how processing will comply with UK GDPR principles; and
– consideration of alternative approaches.
Assess the risks to people’s rights and freedoms clearly in a DPIA, and identify and implement measures to mitigate each risk (a simple way of recording this pairing is sketched after this list).
Follow a clear DPIA process that follows the recommendations above.
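
To make the risk-to-mitigation pairing concrete, here is a minimal, hypothetical sketch in Python of how identified risks might be recorded. The field names and entries are illustrative only, not an ICO-prescribed format; the ICO publishes its own DPIA template.

```python
# Hypothetical structure for recording DPIA risks alongside mitigations.
# Field names and values are illustrative; the ICO has its own DPIA template.
dpia_risks = [
    {
        "risk": "AI tool infers special category data from candidate profiles",
        "likelihood": "possible",
        "severity": "high",
        "mitigation": "Contractually prohibit inference; verify during audits",
        "residual_risk": "low",
    },
    {
        "risk": "Candidates unaware AI is used to score their applications",
        "likelihood": "likely",
        "severity": "medium",
        "mitigation": "Publish an applicant privacy notice on the AI platform",
        "residual_risk": "low",
    },
]

# Every identified risk must be paired with a mitigation measure.
for entry in dpia_risks:
    assert entry["mitigation"], f"unmitigated risk: {entry['risk']}"
```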

2. Lawful basis for processing

When recruiting, organisations need to identify a lawful basis for this processing activity. You need to choose the most appropriate of the six lawful bases, such as consent or legitimate interests.

To rely on legitimate interests you will need to:
1. Identify a legitimate interest
2. Assess the necessity
3. Balance your organisation’s interests with the interests, rights and freedoms of individuals.

This is known as the ‘3-stage test’. We’d highly recommend you conduct and document a Legitimate Interests Assessment (LIA). Our recently updated Legitimate Interests Guidance includes an LIA template (in Excel). Your DPIA can be referenced in this assessment.

3. Special category data condition

If you will be processing special category data, such as health information or Diversity, Equity and Inclusion (DE&I) data, then alongside a lawful basis you’ll need to meet a specific special category condition (i.e. an Article 9 condition under UK GDPR).

It’s worth noting that some AI providers may infer people’s characteristics from candidate profiles rather than collecting them directly. This can include predicting gender and ethnicity. This type of information, even if inferred, is still special category data. It also raises questions about ‘invisible’ processing (i.e. processing the individual is not aware of) and a lack of transparency. The ICO recommends not using inferred information in this way.

4. Controller, processor or joint controller

Both recruiters and AI providers have a responsibility for data protection compliance. It should be clear who is the controller or processor of the personal information. Is the AI provider a controller, joint controller or processor? The ICO recommends this relationship is carefully scrutinised and clearly recorded in a contract with the AI provider.

If the provider is acting as a processor, the ICO says ‘explicit and comprehensive instructions must be provided for them to follow’. The regulator says this should include establishing how you’ll make sure the provider is complying with these instructions. As a controller, your organisation should be able to determine the purposes and means of the processing and tailor it to your requirements. If not, the AI provider is likely to be a controller or joint controller.

5. Data minimisation

One of the core data protection principles is data minimisation. We should only collect and use personal information which is necessary for our purpose(s). The ICO’s audit found some AI tools collected far more personal information than necessary and retained it indefinitely to build large databases of potential candidates without their knowledge. What might make perfect sense to AI or the programmers creating such technology might not be compliant with data protection law!

Recruiters need to make sure any AI tools they use collect only the minimum personal information required to achieve their purpose(s), which should be clearly defined in your DPIA and, where relevant, your LIA.

There is also an obligation to make sure the personal details candidates provide are not used for other, incompatible purposes. Remember, if the AI provider retains data and uses it for its own purposes, it will not be a processor.
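
As a rough illustration of what field-level minimisation can look like in practice, the Python sketch below passes only an allowlisted set of fields on for AI processing. The field names are hypothetical; the actual ‘necessary’ set is whatever your DPIA and LIA justify.

```python
# Data minimisation sketch: only fields justified in the DPIA/LIA are
# passed on for AI processing. Field names here are hypothetical.
ALLOWED_FIELDS = {"name", "cv_text", "qualifications", "work_history"}

def minimise(candidate: dict) -> dict:
    """Drop any field not on the documented allowlist."""
    return {k: v for k, v in candidate.items() if k in ALLOWED_FIELDS}

candidate = {
    "name": "A. Applicant",
    "cv_text": "…",
    "qualifications": "BSc Computer Science",
    "work_history": "5 years in software QA",
    "date_of_birth": "1990-01-01",   # not needed for shortlisting; dropped
    "social_media_handle": "@aa",    # excessive collection; dropped
}

print(minimise(candidate))  # only the four allowlisted fields remain
```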

6. Information security and integrity

As part of the procurement process, recruiters need to undertake meaningful due diligence. This means asking the AI provider for evidence that appropriate technical and organisational controls are in place. These controls should also be documented in the contract. The ICO recommends regular compliance checks while the contract is running, to confirm the controls remain effective.

7. Fairness and mitigating bias risks

Recruiters need to be confident the outputs from AI tools are accurate, fair and unbiased. The ICO’s audit of AI recruitment providers found evidence that tools were not processing personal information fairly. For example, in some cases they allowed recruiters to filter out candidates with protected characteristics. (Protected characteristics include: age, disability, race, ethnic or national origin, religion or belief, sex and sexual orientation.) This should be a red flag.

You should seek clear assurances from the AI provider that they have mitigated bias, asking to see any relevant documentation. The ICO has published guidance on this: How do we ensure fairness in AI?
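
Neither the ICO report nor UK GDPR prescribes a particular statistical test, but as one example of the kind of monitoring a recruiter might run over AI outputs, the sketch below applies the widely used ‘four-fifths’ rule: it flags any group whose selection rate falls below 80% of the highest group’s rate. It’s a coarse screen, not a substitute for the provider’s own bias testing and documentation.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected being a bool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        selected[group] += ok
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

# Illustrative data only: group B's rate is half of group A's, so it fails.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(outcomes))  # {'A': True, 'B': False}
```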

8. Transparency

Are candidates aware an AI tool will be used to process their personal details? Clear privacy information needs to be provided to jobseekers which explains how and why the AI tool is being used. The ICO says this should extend to explaining the ‘logic involved in making predictions or producing outputs which may affect people’. Candidates should also be told how they can challenge any automated decisions made by the tool.

The regulator recommends producing a privacy notice specifically for candidates on your AI platform which covers relevant UK GDPR requirements.

9. Human involvement in decision-making

There are strict rules under GDPR for automated decision-making (including profiling). Automated decision-making is the process of making a decision by automated means without any human involvement. A recruitment process wouldn’t be considered solely automated if someone (i.e. a human in the recruitment team) weighs up and interprets the result of an automated decision before applying it to the individual.

There needs to be meaningful human involvement in the process to prevent solely automated decisions being made about candidates (a minimal illustration follows the list below). The ICO recommendations for recruiters include:

Ensure that recruiting managers do not use AI outputs (particularly ‘fit’ or suitability scores) to make automated recruitment decisions, where AI tools are not designed for this purpose.
Offer a simple way for candidates to object to or challenge automated decisions, where AI tools make automated decisions.
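
As the minimal illustration promised above, the Python sketch below refuses to finalise any outcome until a named reviewer has recorded a decision and a rationale, keeping the AI score advisory. The class, field names and workflow are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Application:
    candidate_id: str
    ai_score: float                        # advisory output from the AI tool
    human_decision: Optional[str] = None   # e.g. "progress" or "reject"
    human_reviewer: Optional[str] = None
    rationale: Optional[str] = None

def finalise(app: Application) -> str:
    """Refuse to act on the AI score alone: a recorded human decision,
    named reviewer and rationale are required before any outcome applies."""
    if not (app.human_decision and app.human_reviewer and app.rationale):
        raise ValueError("solely automated decision blocked: human review required")
    return app.human_decision

app = Application(candidate_id="c-101", ai_score=0.42)
app.human_decision = "progress"
app.human_reviewer = "j.smith"
app.rationale = "Low score, but relevant sector experience noted in the CV"
print(finalise(app))  # "progress"
```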

10. Data Retention

Another core data protection principle is ‘storage limitation’. This means not keeping personal data for longer than necessary for the purpose(s) it was collected for. It’s important to assess how long the data inputted into, and generated by, AI tools will be kept. Information about retention periods should be included in the privacy information provided to job applicants (e.g. in an Applicant Privacy Notice on your AI platform).

The ICO says data retention periods should be detailed in contracts, including how long each category of personal information is kept and why, as well as what action the AI provider must take at the end of the retention period.
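
To illustrate what per-category retention can look like operationally, here is a small Python sketch that checks whether a record has outlived its retention period. The categories and periods are hypothetical; real values belong in your documented retention schedule and the provider contract.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical per-category retention periods; real periods should come
# from your retention schedule and the contract with the AI provider.
RETENTION = {
    "cv_text": timedelta(days=180),
    "ai_scores": timedelta(days=180),
    "interview_notes": timedelta(days=365),
}

def expired(category: str, collected_on: date,
            today: Optional[date] = None) -> bool:
    """True once a record has passed its category's retention period."""
    today = today or date.today()
    return today > collected_on + RETENTION[category]

# A CV collected in January 2024 is past its 180-day period by November.
print(expired("cv_text", date(2024, 1, 2), today=date(2024, 11, 1)))  # True
```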

Summary

The ICO acknowledges the benefits of AI and doesn’t want to stand in the way of those seeking to use AI-driven solutions. It does, however, ask recruiters to consider the technology’s compatibility with data protection law.

AI is a complex area for many, and it’s easy to see how unintended misuse of personal data, or unfairness and bias in candidate selection, could ‘slip through the cracks’ in the digital pavement. HR professionals and recruiters can avoid problems later down the line by addressing these as Day One issues when considering AI.

Fairness and respect for candidate privacy are central principles of HR best practice and necessary for data protection compliance. Applying these to new technological opportunities shouldn’t come as a surprise. Including your data protection team in the planning stage can help to mitigate and possibly eliminate some risks. A win-win which would leave organisations more confident in reaping the benefits AI offers.