Data protection and employment records

February 2025

How to manage personal data relating to employees

Data protection compliance efforts are often focused on the commercial or public-facing aspects of an organisation’s activities: making sure core data protection principles and requirements are met when collecting and handling the data of customers, members, supporters, students, patients, and so on. However, the personal data held about employees and job applicants doesn’t always get the same level of attention.

Handling employees’ personal information is an essential part of running a business, and organisations need to be aware and mindful of their obligations under the UK GDPR and the Data Protection Act 2018, as well as, of course, their obligations under employment law, health and safety law and any other relevant legislation or sector-specific standards.

A personal data breach could affect employee records. Employees can raise complaints about an organisation’s employment activities, and employees (or former employees) can raise Data Subject Access Requests, which can sometimes be complex to respond to. All of these can expose gaps in compliance with data protection laws. In some organisations, employee records may represent the highest privacy risk.

Employee records are likely to include special category data and more sensitive information such as:

DE&I information (such as information relating to race, ethnicity, religion, gender, age, sexual orientation, etc)
disabilities and/or medical conditions
health and safety records
absence and sickness records
performance reviews and development plans
disciplinary and grievance records
occupational health referrals
financial information required for payroll

Alongside the core HR records, employees may be present on other records, such as CCTV footage and any tracking of computer or internet use, all of which need careful consideration from a data protection standpoint. Also see monitoring employees.

In my experience, while the security of employee records may often be taken into consideration, other core data protection principles might sometimes be overlooked, such as:

Lawfulness

It’s necessary to have a lawful basis for each processing activity. Many activities may be necessary to meet a legal obligation or to perform the contract of employment with the individual. However, the contract may not cover every activity for which an organisation uses employee data. You should clearly determine where legal obligation or contract is the appropriate basis for a given activity, and identify any activities where you may instead need to rely on other lawful bases, such as legitimate interests or consent.

Special category data

To handle medical information, trade union membership, diversity, equity and inclusion (DE&I) activities, or any other uses of special category data, it’s necessary to determine a lawful basis plus a separate condition for processing under Article 9. Also see Handling special category data.

Data minimisation

The principle of data minimisation requires employers to limit the personal information they hold about their employees to what is necessary for their activities, and not to hold additional personal information ‘just in case’ they might need it.

Data retention

Employees’ data should not be kept longer than necessary. There are statutory retention requirements for employment records in the UK (and many other jurisdictions), which set out how long certain records must be kept, but these laws may not cover every use you make of employment data. Once you set your retention periods, they need to be implemented in practice, i.e. regularly reviewing the data you hold for specific purposes and securely destroying records you no longer need. These may be electronic records on IT systems or perhaps physical HR records languishing in boxes in a storeroom! You may wish to refer to our Data Retention Guidance.
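
To illustrate what a regular retention review could look like in practice, here is a minimal sketch in Python. The record types, retention periods and field names are hypothetical; your own retention schedule and any statutory requirements should drive the real values.

```python
from datetime import date, timedelta

# Hypothetical retention periods (in years) per record type; the actual
# periods must come from your retention schedule and any statutory rules.
RETENTION_YEARS = {
    "payroll": 6,
    "sickness_absence": 3,
    "recruitment_unsuccessful": 1,
}

def records_due_for_destruction(records, today=None):
    """Return records held longer than the retention period for their type."""
    today = today or date.today()
    overdue = []
    for record in records:
        years = RETENTION_YEARS.get(record["record_type"])
        if years is None:
            continue  # unknown type: flag separately in a real review
        expiry = record["created"] + timedelta(days=365 * years)  # approximate years
        if today > expiry:
            overdue.append(record)
    return overdue

# Example usage with made-up records
employee_records = [
    {"id": 1, "record_type": "payroll", "created": date(2017, 4, 6)},
    {"id": 2, "record_type": "sickness_absence", "created": date(2024, 1, 15)},
]
print(records_due_for_destruction(employee_records))
```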

Transparency

Employees are entitled to know the ways in which their employer uses their personal data, the lawful bases, the retention periods and so on. The requirements for privacy notices apply to employees just as they do to external audiences. This necessary privacy information may be provided in an Employee Privacy Notice or via an Employee Handbook.

Risk assessments

Data Protection Impact Assessments are mandatory in certain circumstances. In other cases they might be helpful to conduct. Organisations mustn’t overlook DPIA requirements in relation to employee activities. For example, any monitoring of employees which might be considered intrusive or the use of biometric data for identification purposes.

Record keeping

Appropriate measures need to be in place to make sure employee records are being handled lawfully, fairly and transparently, and in line with the other core data protection principles. It’s difficult to do this without mapping employee data and maintaining clear records of the purposes you are using it for, the lawful bases, special category conditions and so on, i.e. your Record of Processing Activities (RoPA). The absence of adequate records will make creating a comprehensive privacy notice rather challenging.
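
As a rough illustration, a RoPA entry can be held as structured data so it stays consistent and easy to review alongside your privacy notices. The Python sketch below is minimal and the field names and values are hypothetical; a real RoPA must cover the Article 30 UK GDPR requirements and reflect your own processing activities.

```python
from dataclasses import dataclass
from typing import List, Optional

# A minimal, illustrative structure for one RoPA entry (hypothetical fields).
@dataclass
class RopaEntry:
    processing_activity: str
    purpose: str
    categories_of_data: List[str]
    categories_of_data_subjects: List[str]
    lawful_basis: str
    special_category_condition: Optional[str]
    recipients: List[str]
    retention_period: str
    security_measures: str

payroll = RopaEntry(
    processing_activity="Payroll",
    purpose="Pay employees and meet tax obligations",
    categories_of_data=["name", "bank details", "salary", "NI number"],
    categories_of_data_subjects=["employees"],
    lawful_basis="Legal obligation / contract",
    special_category_condition=None,
    recipients=["payroll provider", "HMRC"],
    retention_period="6 years after employment ends",
    security_measures="Access restricted to payroll team; encrypted at rest",
)
print(payroll.purpose)
```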

Training

Whilst we’re on the topic of employees, let’s also give a mention to training. All employees handling personal data should receive appropriate information security and data protection training. It’s likely those in HR / People teams handling employee data on a daily basis will benefit from specialist training beyond the generic online training modules aimed at all staff.

To help you navigate data protection obligations, the ICO has published new guidance on handling employee records, which provides more detail on what the law requires and on regulatory expectations.

Finally, don’t forget data protection compliance efforts need to extend beyond employees to job applicants, contractors, volunteers and others who perform work-related duties for the organisation.

Using AI tools for recruitment

November 2024

How to comply with GDPR

AI tools offer dynamic, efficient solutions for streamlining recruitment processes. AI is capable of speedily identifying and sourcing potential candidates, summarising their CVs and scoring their suitability for the role.

What’s not to like?

Nonetheless, these processes must be fair and lawful. Is there a potential for bias and/or inaccurate outputs? How else will AI providers use jobseekers’ personal details? What data protection compliance considerations are baked into the AI’s architecture?

The Information Commissioner’s Office (ICO) is calling on AI providers and recruiters to do more to make sure AI tools don’t adversely impact on applicants. People could be unfairly excluded from potential jobs and/or have their privacy compromised. Why undo the good work HR professionals undertake to satisfy legal requirements and best practice by using questionable technology?

The ICO recently ran a consensual audit of several developers and providers of AI recruitment tools. Some of the findings included:

Excessive personal data being collected
Data being used for incompatible purposes
A lack of transparency for jobseekers about how AI uses their details

The AI Tools in Recruitment Audit Report provides several hundred recommendations. The unambiguous message is that using AI in recruitment processes shouldn’t be taken lightly. Of course, this doesn’t mean recruiters shouldn’t embrace new technologies, but it does mean sensible checks and balances are required. Here’s a summary of key ICO recommendations, with some additional information and thoughts.

10 key steps for recruiters looking to engage AI providers

1. Data Protection Impact Assessment (DPIA)

DPIAs are mandatory under GDPR where a type of processing is likely to result in high risk. The ICO says ‘processing involving the use of innovative technologies, or the novel application of existing technologies (including AI)’ is an example of processing they would consider likely to result in a high risk.

Using AI tools for recruitment purposes squarely meets these criteria. A DPIA will help you to better understand, address and mitigate any potential privacy risks or harms to people. It should help you to ask the right questions of the AI provider. It’s likely your DPIA will need to be agile; revisited and updated as the processing and its potential impacts evolve.

ICO DPIA recommendations for recruiters:

Complete a DPIA before commencing processing that is likely to result in a high risk to people’s rights and freedoms, such as procuring an AI recruitment tool or other innovative technology.
Ensure DPIAs are comprehensive and detailed, including:
– the scope and purpose of the processing;
– a clear explanation of relationships and data flows between each party;
– how processing will comply with UK GDPR principles; and
– consideration of alternative approaches.
Assess the risks to people’s rights and freedoms clearly in the DPIA, and identify and implement measures to mitigate each risk.
Follow a clear DPIA process that reflects the recommendations above.

2. Lawful basis for processing

When recruiting, organisations need to identify a lawful basis for this processing activity. You need to choose the most appropriate of the six lawful bases, such as consent or legitimate interests.

To rely on legitimate interests you will need to:
1. Identify a legitimate interest
2. Assess the necessity
3. Balance your organisation’s interests with the interests, rights and freedoms of individuals.

This is known as the ‘3-stage test’. We’d highly recommend you conduct and document a Legitimate Interests Assessment. Our recently updated Legitimate Interests Guidance includes an LIA template (in Excel). Your DPIA can be referenced in this assessment.

3. Special category data condition

If you will be processing special category data, such as health information or Diversity, Equity and Inclusion data (DE&I), alongside a lawful basis you’ll need to meet a specific special category condition (i.e. an Article 9 condition under UK GDPR).

It’s worth noting that some AI providers may infer people’s characteristics from candidate profiles rather than collecting them directly. This can include predicting gender and ethnicity. This type of information, even if inferred, will be special category data. It also raises questions about ‘invisible’ processing (i.e. processing the individual is not aware of) and a lack of transparency. The ICO recommends not using inferred information in this way.

4. Controller, processor or joint controller

Both recruiters and AI providers have a responsibility for data protection compliance. It should be clear who is the controller or processor of the personal information. Is the AI provider a controller, joint-controller or processor? The ICO recommends this relationship is carefully scrutinised and clearly recorded in a contract with the AI provider.

If the provider is acting as a processor, the ICO says ‘explicit and comprehensive instructions must be provided for them to follow’. The regulator says this should include establishing how you’ll make sure the provider is complying with these instructions. As a controller your organisation should be able to direct the means and purpose of the processing and tailor it to your requirements. If not, the AI provider is likely to be a controller or joint-controller.

5. Data minimisation

One of the core data protection principles is data minimisation. We should only collect and use personal information which is necessary for our purpose(s). The ICO’s audit found some AI tools collected far more personal information than necessary and retained it indefinitely to build large databases of potential candidates without their knowledge. What might make perfect sense to AI or the programmers creating such technology might not be compliant with data protection law!

Recruiters need to make sure the AI tools they use only collect the minimum amount of personal information required to achieve their defined purpose(s). (Purposes which should be clearly defined in your DPIA and, where relevant, your LIA.)
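
As a simple illustration of the principle, a defined allow-list of fields can be applied before candidate details ever reach an external tool. The Python sketch below uses hypothetical field names; the allow-list itself should come from the purposes defined in your DPIA.

```python
# Illustrative only: the fields actually needed for the defined screening
# purpose, applied before candidate details are passed to an external tool.
FIELDS_NEEDED_FOR_SCREENING = {"candidate_id", "skills", "years_experience", "qualifications"}

def minimise(candidate_record):
    """Return only the fields required for the defined screening purpose."""
    return {k: v for k, v in candidate_record.items() if k in FIELDS_NEEDED_FOR_SCREENING}

full_record = {
    "candidate_id": "C-1042",
    "name": "Jane Doe",
    "date_of_birth": "1990-05-01",
    "home_address": "1 Example Street",
    "skills": ["Python", "SQL"],
    "years_experience": 7,
    "qualifications": ["BSc Computer Science"],
}
print(minimise(full_record))  # name, date of birth and address are not sent
```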

There is also an obligation to make sure the personal details candidates are providing are not used for other incompatible purposes. Remember, if the AI provider is retaining data and using this information for its own purposes, it will not be a processor.

6. Information security and integrity

As part of the procurement process, recruiters need to undertake meaningful due diligence. This means asking the AI provider for evidence that appropriate technical and organisational controls are in place. These technical and organisational controls should also be documented in the contract. The ICO recommends regular compliance checks are undertaken while the contract is in place, to make sure effective controls remain in place.

7. Fairness and mitigating bias risks

Recruiters need to be confident the outputs from AI tools are accurate, fair and unbiased. The ICO’s audit of AI recruitment providers found evidence that tools were not processing personal information fairly. For example, in some cases they allowed recruiters to filter out candidates with protected characteristics. (Protected characteristics include: age, disability, race, ethnic or national origin, religion or belief, sex and sexual orientation.) This should be a red flag.

You should seek clear assurances from the AI provider that they have mitigated bias, asking to see any relevant documentation. The ICO has published guidance on this: How do we ensure fairness in AI?
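
One practical way to look for unfair outcomes is to compare the rates at which candidates from different groups progress past the AI screening stage. The Python sketch below uses made-up groups and outcomes, and the commonly cited ‘four-fifths’ threshold is an assumption for illustration; a low ratio doesn’t prove discrimination, but it is a clear signal to investigate further.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, progressed) tuples from the screening stage."""
    totals, progressed = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        progressed[group] += int(passed)
    return {g: progressed[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; below ~0.8 warrants investigation."""
    return min(rates.values()) / max(rates.values())

# Example with made-up screening outcomes
outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60
    + [("group_b", True)] * 25 + [("group_b", False)] * 75
)
rates = selection_rates(outcomes)
print(rates, disparate_impact_ratio(rates))
```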

8. Transparency

Are candidates aware an AI tool will be used to process their personal details? Clear privacy information needs to be provided to jobseekers which explains how and why the AI tool is being used. The ICO says this should extend to explaining the ‘logic involved in making predictions or producing outputs which may affect people’. Candidates should also be told how they can challenge any automated decisions made by the tool.

The regulator recommends producing a privacy notice specifically for candidates on your AI platform which covers relevant UK GDPR requirements.

9. Human involvement in decision-making

There are strict rules under GDPR for automated decision-making (including profiling). Automated decision-making is the process of making a decision by automated means without any human involvement. A recruitment process wouldn’t be considered solely automated if someone (i.e. a human in the recruitment team) weighs up and interprets the result of an automated decision before applying it to the individual.

There needs to be meaningful human involvement in the process to prevent solely automated decisions being made about candidates. The ICO recommendations for recruiters include:

Ensure that recruiting managers do not use AI outputs (particularly ‘fit’ or suitability scores) to make automated recruitment decisions, where AI tools are not designed for this purpose.
Offer a simple way for candidates to object to or challenge automated decisions, where AI tools make automated decisions.
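
As a very rough sketch of what meaningful human involvement can look like in the systems recruiters use (the function and field names here are hypothetical), the AI output is captured as advisory information only and the recorded decision always comes from a named reviewer with a written rationale.

```python
# Hypothetical example: no screening decision is recorded without a human
# reviewer and a rationale; the AI score is an input, not the decision.
def record_screening_decision(candidate_id, ai_score, reviewer, reviewer_decision, rationale):
    if not reviewer or not rationale.strip():
        raise ValueError("A human reviewer and written rationale are required")
    return {
        "candidate_id": candidate_id,
        "ai_score": ai_score,          # advisory input only
        "decision": reviewer_decision, # made by the reviewer
        "reviewed_by": reviewer,
        "rationale": rationale,
    }

decision = record_screening_decision(
    candidate_id="C-1042",
    ai_score=0.73,
    reviewer="hiring.manager@example.com",
    reviewer_decision="invite_to_interview",
    rationale="Relevant project experience; score alone not decisive",
)
print(decision)
```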

10. Data Retention

Another core data protection principle is ‘storage limitation’. This means not keeping personal data for longer than necessary for the purpose(s) it was collected for. It’s important to assess how long the data inputted into and generated by AI tools will be kept. Information about retention periods should be provided in the relevant privacy information given to job applicants (e.g. in an Applicant Privacy Notice provided on your AI platform).

The ICO says data retention periods should be detailed in contracts, including how long each category of personal information is kept and why, plus what action the AI provider must take at the end of the retention period.

Summary

The ICO acknowledges the benefits of AI and doesn’t want to stand in the way of those seeking to use AI driven solutions. It does, however, ask recruiters to consider the technology’s compatibility with data protection law.

AI is a complex area for many and it’s easy to see how unintended misuse of personal data, or unfairness and bias in candidate selection, could ‘slip through the cracks’ in the digital pavement. HR professionals and recruiters can avoid problems later down the line by addressing these as Day One issues when considering AI.

Fairness and respect for candidate privacy are central principles of HR best practice and necessary for data protection compliance. Applying these to new technological opportunities shouldn’t come as a surprise. Including your data protection team in the planning stage can help to mitigate and possibly eliminate some risks. A win-win which would leave organisations more confident in reaping the benefits AI offers.

Monitoring employees and data protection

Is it transparent, reasonable and proportionate?

There are plenty of reasons why employers might want to monitor staff: to check they’re working, to detect and prevent criminal activity, to make sure people are complying with internal policies, to check their performance, for safety and security reasons, and so on.

With significant advances in technology, there are multiple options available for employers seeking to monitor their workforce, such as:

  • Camera surveillance, including CCTV and body worn cameras
  • Webcams and screenshots
  • Monitoring timekeeping or access control using biometric data
  • Keystroke monitoring
  • Internet tracking for misuse
  • Covert audio recording

Add the growing number of AI-powered solutions into the mix, and the opportunities are seemingly endless. I’ve even seen demos of AI tools which sentiment-check emails, scanning the language employees use to detect content which might be discriminatory, bullying or aggressive.

Just because a range of monitoring technologies exists doesn’t mean we should use them.

A survey commissioned by the UK’s Information Commissioner’s Office in 2023 revealed almost one in five people believe they’ve been monitored by their employer, and would be reluctant to take a job if they knew they were going to be monitored. This research showed 70% of the public believe it’s intrusive to be monitored in the workplace.

However, there is a broad understanding that employers might carry out checks on the quality and quantity of people’s work, and an appreciation that it may be necessary to do this proportionately to meet health and safety or other regulatory requirements. Emily Keaney, the ICO’s Deputy Commissioner of Regulatory Policy, says: “While data protection law does not prevent monitoring, it must be necessary, proportionate and respect the rights and freedoms of workers. We will take action if we believe people’s privacy is being threatened.”

Earlier this year, the ICO did just that, and ordered a leisure company to stop using biometric data to monitor its staff. You can read more about the case here: using biometrics to monitor staff.

To avoid monitoring employees in an overly intrusive or disproportionate way, it’s crucial to carefully consider any planned monitoring activity and make sure it’s a reasonable thing to be doing.

Workplace monitoring checklist

Here are some of the key considerations to take into account:

1. Is it lawful, fair and transparent?

To be lawful you need to identify a lawful basis under UK GDPR and meet the relevant conditions. Remember, consent would only work where employees have a genuine and fair choice. The imbalance of power between employer and employee often means consent is not appropriate in an employment context; employees may feel duty-bound to agree, so their consent is unlikely to be freely given.

You may be tempted to rely on your employment contract with individuals (i.e. the ‘contractual necessity’ lawful basis), but this would need to be genuinely necessary. Many employers may choose to rely on legitimate interests, but this requires a balancing test, and we’d highly recommend conducting and keeping a record of your Legitimate Interests Assessment (LIA).

To be fair you should only monitor workers in ways they would reasonably expect, and in ways which wouldn’t have unjustified adverse effects on them. The ICO says you should conduct a Data Protection Impact Assessment to make sure any monitoring is fair and proportionate.

To be transparent you must be open and upfront about what you’re doing. Monitoring should not routinely be done in secret. Monitoring conducted without transparency is fundamentally unfair. There may however be exceptional circumstances where covert monitoring is justified.

2. Will monitoring gather special category data?

If monitoring involves special category data, you’ll need to identify a special category condition, as well as a lawful basis. Special category data includes data revealing racial or ethnic origin, religious, political or philosophical beliefs, trade union membership, genetic and biometric data, data concerning health or data about a person’s sex life or sexual orientation.

You may not automatically think this is relevant, but be mindful that even monitoring emails, for example, could, without appropriate controls in place, lead to the processing of special category data.

3. Have you clearly set out your purpose(s) for employee monitoring?

You need to be clear about your purpose(s) and not monitor workers ‘just in case’ it might be useful. Personal details captured should not subsequently be used for a different purpose, unless this is assessed to be compatible with the original specified purpose(s).

4. Are you minimising the personal details gathered?

Organisations are required not to collect more personal information than they need to achieve their defined purpose(s). This should be approached with care, as many monitoring technologies and methods have the capability to gather more information than necessary. You should take steps to limit the amount of data collected and how long it’s retained.

5. Is the information gathered accurate?

You need to take all reasonable steps to make sure the personal information gathered through monitoring workers is accurate, not misleading and not taken out of context. People should also have the ability to challenge the results of any monitoring.

6. Have you decided how long information will be kept?

Personal information gathered must not be kept for any longer than is necessary. It shouldn’t be kept just in case it might be useful in future. Organisations must have a data retention schedule and delete any information in line with this. The UK GDPR doesn’t tell us precisely how long this should be, but other laws might. Organisations need to be able to justify any retention periods they set.

7. Is the information kept securely?

You must have ‘appropriate technical and organisational measures’ in place to protect personal information. Technical measures include things like firewalls, encryption, multi-factor authentication, and so on. Data security risks should be assessed, access should be restricted, and those handling the information should receive appropriate training.

If monitoring is outsourced to a third-party processor, you’ll be responsible for compliance with data protection law.

8. Are you able to demonstrate your compliance with data protection law?

Organisations need to be able to demonstrate their compliance with UK GDPR. This means making sure appropriate policies, procedures and measures are put in place for workplace monitoring activities. And let’s also consider any monitoring of workers who work from home, or other ‘offsite’ locations. As with everything this must be proportionate to the risks. The ICO says organisations should make sure ‘overall responsibility for monitoring workers rest at the higher senior management level’.

Monitoring people is by its very nature intrusive. It must be proportionate and justified, and in most circumstances people should be told it’s happening.

The ICO has published detailed guidance on this: Employment practices and data protection: monitoring workers. The regulator’s overriding message is that organisations should carry out a DPIA if they’re considering monitoring their staff.

Workplace use of facial recognition and fingerprint scanning

February 2024

Just because you can use biometric data, doesn’t mean you should

The use of biometric data is escalating, and recent enforcement action by the UK Information Commissioner’s Office (ICO) concerning its use for workplace monitoring is worth taking note of. We share 12 key considerations if you’re considering using facial recognition, fingerprint scanning or other biometric systems.

In a personal context, many use fingerprint or iris scans to open their smartphones or laptops. In the world of banking, facial recognition, voice recognition, fingerprint scans or retina recognition have become commonplace for authentication and security purposes. The UK Border Force is set to trial passport-free travel using facial recognition technology. And increasingly, organisations are using biometrics for security or employee monitoring purposes.

Any decision to use biometric systems shouldn’t be taken lightly. If biometric data is being used to identify people, it falls under the definition of Special Category Data under UK GDPR. This means there are specific considerations and requirements which need to be met.

What is biometric data?

Biometric data is special category data whenever you process it for the purpose of uniquely identifying an individual. To quote the ICO:

Personal information is biometric data if it:

  • relates to someone’s physical, physiological or behavioural characteristics (e.g. the way someone types, a person’s voice, fingerprints, or face);
  • has been processed using specific technologies (e.g. an audio recording of someone talking is analysed with specific software to detect qualities like tone, pitch, accents and inflections); and
  • can uniquely identify (recognise) the person it relates to.

Not all biometric data is classified as ‘special category’ data, but it is when you use it, or intend to use it, to uniquely identify someone. It will also be special category data if, for example, you use it to infer other special category data, such as someone’s racial/ethnic origin or information about their health.

Special category data requirements

There are key legal requirements under data protection law when processing special category data. In summary, these comprise:

  • Conduct a Data Protection Impact Assessment
  • Identify a lawful basis under Article 6 of GDPR.
  • Identify a separate condition for processing under Article 9. There are ten different conditions to choose from.
  • Your lawful basis and special category condition do not need to be linked.
  • Five of the special category conditions require additional safeguards under the UK’s Data Protection Act 2018 (DPA 2018).
  • In many cases you’ll also need an Appropriate Policy Document in place.

Also see the ICO Special Category Data Guidance.

ICO enforcement action on biometric data use in the workplace

The Regulator has ordered Serco Leisure and a number of associated community leisure trusts to stop using Facial Recognition Technology (FRT) and fingerprint scanning to monitor workers’ attendance. They’ve also ordered the destruction of all biometric data which is not legally required to be retained.

The ICO’s investigation found the biometric data of more than 2,000 employees at 38 leisure centres was being unlawfully processed for the purpose of attendance checks and subsequent payment.

Serco Leisure was unable to demonstrate why it was necessary or proportionate to use FRT and fingerprint scanning for this purpose. The ICO noted there are less intrusive means available, such as ID cards and fobs. Serco Leisure said these methods were open to abuse by employees, but no evidence was produced to support this claim.

Crucially, employees were not proactively offered an alternative to having their faces and fingers scanned. It was presented to employees as a requirement in order to get paid.

Serco Leisure conducted a Data Protection Impact Assessment and a Legitimate Interests Assessment, but these fell short when subject to ICO scrutiny.

Lawful basis

Serco Leisure identified their lawful bases as contractual necessity and legitimate interests. However, the Regulator found the following:

1) While recording attendance times may be necessary to fulfil obligations under employment contracts, it doesn’t follow that the processing of biometric data is necessary to achieve this.

2) Legitimate interests will not apply if a controller can reasonably achieve the same results in another less intrusive way.

Special category condition

Serco Leisure had not initially identified a condition before implementing its biometric systems. It then chose the relevant condition as being for employment, social security and social protection, citing Section 9 of the Working Time Regulations 1998 and the Employment Rights Act 1996.

The ICO found the special category condition chosen did not cover processing to purely meet contractual employment rights or obligations. Serco Leisure also failed to produce a required Appropriate Policy Document.

Read more about this ICO enforcement action.

12 key steps when considering using biometric data

If you’re considering using biometric systems to uniquely identify individuals for any purpose, we’d highly recommend taking the following steps:

1. DPIA: Carry out a Data Protection Impact Assessment.

2. Due diligence: Conduct robust due diligence of any provider of biometric systems.

3. Lawful basis: Identify a lawful basis for processing and make sure you meet the requirements of this lawful basis.

4. Special category condition: Identify an appropriate Article 9 condition for processing special category biometric data. The ICO says explicit consent is likely to be the most appropriate, but other conditions may apply depending on your circumstances.

5. APD: Produce an Appropriate Policy Document where required under DPA 2018.

6. Accuracy: Make sure biometric systems are sufficiently accurate for your purpose. Test and mitigate for biases. For example, bias and inequality may be caused by a lack of diverse data, bugs and inconsistencies in biometric systems.

7. Safeguards: Consider what safeguards will be necessary to mitigate the risk of discrimination and to manage false acceptance and false rejection rates (see the sketch after this list).

8. Transparency: Consider how you will be open and upfront about your use of biometric systems. How will you explain this in a clear, concise and easy-to-access way? If you are relying on consent, you’ll need to clearly tell people what they’re consenting to, and consent will need to be freely given. Also see Consent: Getting it Right.

9. Privacy rights: Assess how people’s rights will apply, and have processes in place to recognise and respond to individual privacy rights requests.

10. Security: Assess what security measures will be needed by your own organisation and by any biometric system provider.

11. Data retention: Assess how long you will need to keep the biometric data. Have robust procedures in place for deleting it when no longer required.

12. Documentation: Keep evidence of everything!
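
To make the point in step 7 concrete, false acceptance and false rejection rates can be measured from test attempts before rollout and monitored afterwards, ideally broken down by demographic group to spot uneven error rates. The Python sketch below uses made-up test results purely for illustration.

```python
# Illustrative calculation of false acceptance / false rejection rates from
# test results. The attempt data is invented for the example only.
def far_frr(attempts):
    """attempts: list of dicts with 'genuine' (bool) and 'accepted' (bool)."""
    impostor = [a for a in attempts if not a["genuine"]]
    genuine = [a for a in attempts if a["genuine"]]
    far = sum(a["accepted"] for a in impostor) / len(impostor)    # false acceptance rate
    frr = sum(not a["accepted"] for a in genuine) / len(genuine)  # false rejection rate
    return far, frr

test_attempts = (
    [{"genuine": True, "accepted": True}] * 95
    + [{"genuine": True, "accepted": False}] * 5    # genuine users wrongly rejected
    + [{"genuine": False, "accepted": True}] * 2    # impostors wrongly accepted
    + [{"genuine": False, "accepted": False}] * 98
)
print(far_frr(test_attempts))  # (0.02, 0.05)
```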

More detail can be found in the ICO Biometric Data Guidance.

Is bias and discrimination in AI a problem?

September 2022

Artificial Intelligence - good governance will need to catch up with the technology

The AI landscape

We hear about the deployment and use of AI in many settings. The types and frequency of use are only going to increase. Major uses include:

  • Cybersecurity analysis to identify anomalies in IT structures
  • Automating repetitive maintenance tasks and guiding technical support teams
  • Ad tech to profile and segment audiences for advertising targeting and optimise advertising buying and placement
  • Reviewing job applications to identify the best-qualified candidates in HR
  • Research scientists looking for patterns in health to identify new cures for cancer
  • Predicting equipment failure in manufacturing
  • Detecting fraud in banking by analysing irregular patterns in transactions.
  • TV and movie recommendations for Netflix users
  • Inventory optimisation and demand forecasting in retail & transportation
  • Programming cars to self-drive

Overall, the different forms of AI will serve to improve our lives, but from a privacy point of view there is a danger that the governance around AI projects is lagging behind the evolving technology solutions.

In that context, tucked away in its three-year plan, published in July, the ICO highlighted that AI-driven discrimination might become more of a concern. In particular, the ICO is planning to investigate concerns about the use of algorithms to sift recruitment applications.

Why recruitment applications?

AI is used widely in the recruitment industry. A Gartner report suggested that all recruitment agencies used it for some of their candidate sifting. The CEO of the ZipRecruiter website in the US is quoted as saying that three-quarters of submitted CVs are read by algorithms. There is plenty of scope for data misuse, hence the ICO’s interest.

The Amazon recruitment tool – an example of bias/discrimination

The ICO are justified in their concerns around recruitment AI. Famously, Amazon developed its own tool to sift through applications for developer roles. The model was trained on 10 years of recruitment data for an employee pool that was largely male. As a result, the model discriminated against women and reinforced the gender imbalance by downgrading applications that indicated the candidate was a woman.

What is AI?

AI can be defined as: 

“using a non-human system to learn from experience and imitate human intelligent behaviour”

The reality is that most “AI” applications are machine learning. That is, models are trained to calculate outcomes using historical data. Pure AI is technology designed to simulate human behaviour. For simplicity, let’s call machine learning AI.

Decisions made using AI are either fully automated or made with a “human in the loop”. The latter can safeguard individuals against biased outcomes by providing a sense check of the results.

In the context of data protection, it is becoming increasingly important that those impacted by AI decisions should be able to hold someone to account.

You might hear that all the information is in a “black box” and that how the algorithm works cannot be explained. This excuse isn’t good enough – it should be possible to explain how a model has been trained and risk assess that activity. 
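
For simpler models at least, explanation is achievable. The sketch below (Python with scikit-learn, synthetic data and hypothetical feature names) trains a basic logistic regression and prints its coefficients, giving a first-pass view of which inputs push predictions up or down. More complex models need dedicated explainability techniques, but the principle of being able to account for the training still applies.

```python
# Rough illustration only: synthetic data, hypothetical features. The model's
# coefficients show which inputs drive its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["years_experience", "test_score", "gap_in_cv"]
X = rng.normal(size=(500, 3))
# Synthetic "hired" labels driven mainly by experience and test score
y = (0.9 * X[:, 0] + 0.6 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```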

How is AI used? 

AI can be used to make decisions:

1.     A prediction – e.g. you will be good at a job

2.     A recommendation – e.g. you will like this news article

3.     A classification – e.g. this email is spam. 

The benefits of AI

AI is generally a force for good:

1.     It can automate a process and save time

2.     It can optimise the efficiency of a process or function (often seen in factory or processing plants)

3.     It can enhance the ability of individuals – often by speeding processes

Where do data protection and AI intersect?

An explanation of AI-assisted decisions is required: 

1.     If there is a process without any human involvement

2.     If it produces legal or similarly significant effects on an individual – e.g. not getting a job.

Individuals should expect an explanation from those accountable for an AI system. Anyone developing AI models using personal data should ensure that appropriate technical and organisational measures are in place to integrate safeguards into processing. 

What data is in scope?

  • Personal data used to train a model
  • Personal data used to test a model
  • On deployment, personal data used or created to make decisions about individuals

If no personal data is included in a model, AI is not in scope for data protection. 

How to approach an AI project?

 Any new AI processing with personal data would normally require a Data Protection Impact Assessment (DPIA). The DPIA is useful because it provides a vehicle for documenting the processing, identifying the privacy risks as well as identifying the measures or controls required to protect individuals. It is also an excellent means of socialising the understanding of AI processing across an organisation. 

Introducing a clear governance framework around any AI projects will increase project visibility and reduce the risks of bias and discrimination. 

Where does bias/discrimination creep in?

The Equality Act 2010 prohibits behaviour that discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:

  • Age
  • Disability
  • Gender reassignment
  • Marriage and civil partnership
  • Pregnancy and maternity
  • Race
  • Religion and belief
  • Sex
  • Sexual orientation. 

When using an AI system, you need to ensure, and be able to show, that your decision-making process does not result in discrimination.

Our Top 10 Tips

  1. Ask how the algorithm has been trained – the “black box” excuse isn’t good enough
  2. Review the training inputs to identify possible bias with the use of historic data
  3. Test the outcomes of the model – this seems obvious, but it is not done regularly enough
  4. Consider the extent to which the past will predict the future when training a model – recruitment models will have an inherent bias if only based on past successes
  5. Consider how to compensate for bias built into the training – a possible form of positive discrimination
  6. Have a person review the outcomes of the model if it is challenged and give that person authority to challenge
  7. Incorporate your AI projects into your data protection governance structure
  8. Ensure that you’ve done a full DPIA identifying risks and mitigations
  9. Ensure that you’ve documented the processes and decisions to incorporate into your overall accountability framework
  10. Consider how you will address individual rights – can you easily identify where personal data has been used or has it been fully anonymised? 

In summary

AI is complex and fast-changing. Arguably, the governance around the use of personal data is having to catch up with the technology. Even though people may believe these models are mysterious and difficult to understand, a lack of explanation for how they work is not acceptable.

In the future, clearer processes around good governance will have to develop so that organisations understand the risks and consider ways of mitigating them, ensuring that data subjects are not disadvantaged.