Are bias and discrimination in AI a problem?
Artificial Intelligence - good governance will need to catch up with the technology
The AI landscape
We hear about the deployment and use of AI in many settings. The types and frequency of use are only going to increase. Major uses include:
- Cybersecurity analysis to identify anomalies in IT infrastructure
- Automating repetitive maintenance tasks and guiding technical support teams
- Ad tech that profiles and segments audiences for advertising targeting and optimises advertising buying and placement
- Reviewing job applications in HR to identify the best-qualified candidates
- Helping research scientists find patterns in health data to identify new cures for cancer
- Predicting equipment failure in manufacturing
- Detecting fraud in banking by analysing irregular patterns in transactions
- Recommending TV shows and films to Netflix users
- Optimising inventory and forecasting demand in retail and transportation
- Powering self-driving cars
Overall, the different forms of AI will serve to improve our lives, but from a privacy point of view there is a danger that the governance around AI projects is lagging behind the evolving technology.
In that context, tucked away in its three-year plan, published in July, the ICO highlighted that AI-driven discrimination might become more of a concern. In particular, the ICO is planning to investigate concerns about the use of algorithms to sift recruitment applications.
Why recruitment applications?
AI is used widely in the recruitment industry. A Gartner report suggested that all recruitment agencies use it for at least some of their candidate sifting, and the CEO of the US jobs site ZipRecruiter is quoted as saying that three-quarters of submitted CVs are read by algorithms. There is plenty of scope for data misuse, hence the ICO’s interest.
The Amazon recruitment tool – an example of bias/discrimination
The ICO is justified in its concerns around recruitment AI. Famously, Amazon developed its own tool to sift through applications for developer roles. The model was trained on 10 years of recruitment data for an employee pool that was largely male. As a result, the model learned to discriminate against women, penalising applications that indicated the candidate was a woman and reinforcing the gender imbalance.
What is AI?
AI can be defined as:
“using a non-human system to learn from experience and imitate human intelligent behaviour”
The reality is that most “AI” applications are machine learning. That is, models are trained to calculate outcomes using data collected in the past. Pure AI is technology designed to simulate human behaviour. For simplicity, let’s refer to machine learning as AI.
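To make that concrete, here is a minimal sketch of what “training on past data” looks like in practice. The data, feature names and library choice (scikit-learn) are illustrative assumptions, not a description of any particular system.

```python
# A minimal sketch of "machine learning" in the sense used above: a model is
# fitted to historic examples and then used to score new cases.
# All data and column names here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historic recruitment data: features describing past applicants plus the
# outcome the business recorded (1 = hired, 0 = rejected).
history = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 7, 4],
    "test_score":       [55, 70, 80, 60, 90, 75],
    "hired":            [0, 0, 1, 0, 1, 1],
})

model = LogisticRegression()
model.fit(history[["years_experience", "test_score"]], history["hired"])

# "Predict" an outcome for a new applicant - the model simply reproduces the
# patterns it found in the past data, for better or worse.
new_applicant = pd.DataFrame({"years_experience": [4], "test_score": [72]})
print(model.predict_proba(new_applicant)[0, 1])  # estimated probability of being hired
```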
Decisions made using AI are either fully automated or made with a “human in the loop”. The latter can help safeguard individuals against biased outcomes by providing a sense check.
In the context of data protection, it is becoming increasingly important that those impacted by AI decisions should be able to hold someone to account.
You might hear that all the information is in a “black box” and that how the algorithm works cannot be explained. This excuse isn’t good enough – it should be possible to explain how a model has been trained and risk assess that activity.
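As an illustration, one common way to interrogate a trained model is to measure how much each input feature influences its decisions. The sketch below uses permutation importance from scikit-learn; the data, feature names and the model itself are hypothetical, and this is only one of several explanation techniques.

```python
# A sketch of one way to open the "black box": after a model is trained, ask
# which inputs actually drive its decisions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical training data for a recruitment-style model.
X = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 7, 4, 6, 2],
    "test_score":       [55, 70, 80, 60, 90, 75, 85, 65],
    "postcode_area":    [1, 2, 3, 1, 4, 2, 3, 1],  # a proxy feature worth scrutinising
})
y = [0, 0, 1, 0, 1, 1, 1, 0]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's accuracy drops:
# the larger the drop, the more the model depends on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(X.columns, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```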
How is AI used?
AI can be used to make decisions:
1. A prediction – e.g. you will be good at a job
2. A recommendation – e.g. you will like this news article
3. A classification – e.g. this email is spam.
The benefits of AI
AI is generally a force for good:
1. It can automate a process and save time
2. It can optimise the efficiency of a process or function (often seen in factory or processing plants)
3. It can enhance the ability of individuals – often by speeding up processes
Where do data protection and AI intersect?
An explanation of AI-assisted decisions is required where:
1. The decision is made without any meaningful human involvement
2. It produces legal or similarly significant effects on an individual – e.g. not getting a job.
Individuals should expect an explanation from those accountable for an AI system. Anyone developing AI models using personal data should ensure that appropriate technical and organisational measures are in place to integrate safeguards into processing.
What data is in scope?
- Personal data used to train a model
- Personal data used to test a model
- On deployment, personal data used or created to make decisions about individuals
If no personal data is included in a model, AI is not in scope for data protection.
How should you approach an AI project?
Any new AI processing with personal data would normally require a Data Protection Impact Assessment (DPIA). The DPIA is useful because it provides a vehicle for documenting the processing, identifying the privacy risks as well as identifying the measures or controls required to protect individuals. It is also an excellent means of socialising the understanding of AI processing across an organisation.
Introducing a clear governance framework around any AI projects will increase project visibility and reduce the risks of bias and discrimination.
Where does bias/discrimination creep in?
The Equality Act 2010 prohibits behaviour that discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:
- Age
- Disability
- Gender reassignment
- Marriage and civil partnership
- Pregnancy and maternity
- Race
- Religion or belief
- Sex
- Sexual orientation.
When using an AI system, you need to ensure, and be able to show, that your decision-making process does not result in discrimination.
Our Top 10 Tips
- Ask how the algorithm has been trained – the “black box” excuse isn’t good enough
- Review the training inputs to identify possible bias arising from the use of historic data
- Test the outcomes of the model – this seems obvious but is not done regularly enough (a short sketch of such a check follows this list)
- Consider the extent to which the past will predict the future when training a model – recruitment models will have an inherent bias if only based on past successes
- Consider how to compensate for bias built into the training – a possible form of positive discrimination
- Have a person review the outcomes of the model when a decision is challenged, and give that person genuine authority to challenge those outcomes
- Incorporate your AI projects into your data protection governance structure
- Ensure that you’ve done a full DPIA identifying risks and mitigations
- Ensure that you’ve documented the processes and decisions to incorporate into your overall accountability framework
- Consider how you will address individual rights – can you easily identify where personal data has been used or has it been fully anonymised?
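As a sketch of the “test the outcomes” tip above, the following compares the model’s selection rates across a protected characteristic and flags a large gap. The data and the 80% threshold (the so-called four-fifths rule, used here purely as a rule of thumb) are illustrative assumptions, not a legal test.

```python
# A minimal outcome test: compare how often the model selects candidates from
# different groups. All data below is hypothetical.
import pandas as pd

# Hypothetical model outputs: one row per applicant, with the protected
# characteristic recorded solely for the purpose of this fairness check.
results = pd.DataFrame({
    "sex":      ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "selected": [0,   1,   0,   0,   1,   1,   0,   1,   1,   0],
})

selection_rates = results.groupby("sex")["selected"].mean()
print(selection_rates)

# Disparate-impact ratio: lowest group selection rate divided by the highest.
ratio = selection_rates.min() / selection_rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: outcomes differ markedly between groups - investigate before deployment.")
```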
In summary
AI is complex and fast-changing. Arguably, the governance around the use of personal data is having to catch up with the technology. Even when these models seem mysterious and difficult to understand, a lack of explanation for how they work is not acceptable.
In the future, clearer good-governance processes will have to develop so that the risks are understood and mitigated, ensuring that data subjects are not disadvantaged.