New guidance on Artificial Intelligence and data protection
The use of artificial intelligence (AI) is growing fast across the digital economy, from consumer products to surveillance applications. Take the coronavirus pandemic, for instance: it’s driving the development of innovative new data applications, many of which are underpinned by AI.
AI has the potential to deliver significant benefits for individuals and for society as a whole. Equally, it has the potential to raise privacy issues. How can we be confident our personal data is being properly handled and protected when it’s used within AI-driven applications?
The ICO has released its (rather timely) ‘Guidance on artificial intelligence and data protection’. It marks the culmination of two years of research and consultation between Professor Reuben Binns (University of Oxford) and the ICO AI team. It follows the Regulator’s May 2020 guidance, ‘Explaining decisions made with AI’.
This latest guidance aims to help businesses identify and mitigate the data protection risks that can arise from the use of AI. It explains how data protection principles should be applied to AI projects, without losing sight of the benefits those projects can deliver.
What is meant by artificial intelligence?
A prominent area of AI is machine learning (ML), which uses computerised statistical models. These models often use very large quantities of data and may fall within the scope of data protection laws if certain data can be linked back to specific individuals, i.e. if individuals are identified or identifiable.
ML models can be used to make classifications or predictions for new data points, which themselves may or may not be linkable to individuals. It’s therefore vital to assess if and how ‘personal’ data is used within ML.
While not all AI involves ML and personal data, much recent interest in AI has been sparked by the growth of ML applications, for example facial recognition, speech-to-text apps and credit risk scoring.
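To make the classification point concrete, here is a minimal sketch (purely illustrative, not drawn from the ICO guidance) of a toy ‘credit risk’ classifier. The training records are keyed by hypothetical pseudonymous IDs; whether data protection law applies would depend on whether those IDs can be linked back to identifiable individuals.

```python
# Illustrative sketch: a toy 1-nearest-neighbour "credit risk" classifier.
# All identifiers and figures below are hypothetical.

import math

# Training data: (pseudonymous_id, features, label)
# features = (income_thousands, existing_debt_thousands)
training = [
    ("ref-001", (55.0, 5.0), "low risk"),
    ("ref-002", (22.0, 18.0), "high risk"),
    ("ref-003", (48.0, 3.5), "low risk"),
    ("ref-004", (19.0, 22.0), "high risk"),
]

def predict(features):
    """Classify a new data point by its nearest training example."""
    nearest = min(training, key=lambda record: math.dist(record[1], features))
    return nearest[2]

print(predict((50.0, 4.0)))   # resembles the "low risk" examples
print(predict((20.0, 20.0)))  # resembles the "high risk" examples
```

Even this trivial model shows the pattern the guidance is concerned with: personal data shapes the model during development, and new data points about individuals are processed again at deployment.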
Who is the ICO guidance aimed at?
The guidance is focused on two key audiences – those in compliance (such as DPOs, general counsel, etc) and technology specialists (such as data scientists & AI / ML experts, software developers and engineers, cybersecurity and IT risk managers).
So how does the ICO want us to approach the use of AI?
The guidance is structured into logical sections to help you work through compliance best practices step by step.
1. Address accountability and governance – this includes the need for data protection impact assessments (DPIAs). Remember, in most cases organisations are required to complete a DPIA before they start to use AI systems that process personal data.
2. Make sure your AI activities are fair, lawful and transparent – for example, you should:
- choose appropriate lawful bases for each processing activity, including during both the development and deployment phases
- assess and improve AI system performance
- mitigate any potential discriminatory effects. Any bias in a data set, if unchecked, could potentially lead to the AI making biased inferences or decisions.
- tell people what you are doing!
3. Address data minimisation and information security – be sure to look at:
- Minimisation: limiting the use of personal data in your model to only use what is necessary to operate it effectively.
- Security: appropriate security measures for your use of AI will depend on the level and type of risks that may arise from your processing activities.
4. Fulfil individual privacy rights – specifically, care should be taken surrounding rights in relation to automated decision-making.
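The minimisation point in step 3 can be sketched in code: strip each record down to only the fields the model genuinely needs before it enters the AI pipeline. The field names here are hypothetical, not taken from the ICO guidance.

```python
# Illustrative sketch of data minimisation: keep only the features the model
# actually requires, discarding other personal data before processing.

MODEL_FEATURES = {"income", "existing_debt"}  # what the model genuinely needs

def minimise(record, required=MODEL_FEATURES):
    """Return a copy of the record limited to the required features."""
    return {key: value for key, value in record.items() if key in required}

applicant = {
    "name": "A. Example",          # not needed for scoring
    "home_address": "1 Any Road",  # not needed for scoring
    "income": 48000,
    "existing_debt": 3500,
}

print(minimise(applicant))  # only the necessary fields remain
```

In practice the ‘required’ set should be justified and documented (a DPIA is a natural place), so that any extra field added later has to earn its place.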
Simon McDougall, Deputy Commissioner for Regulatory Innovation and Technology at the ICO, commented:
“The guidance contains recommendations on best practice and technical measures that organisations can use to mitigate those risks caused or exacerbated by the use of this technology.
It is my hope this guidance will answer some of the questions I know organisations have about the relationship between AI and data protection, and will act as a roadmap to compliance for those individuals designing, building and implementing AI systems.”
AI has great potential to deliver huge benefits for consumers, business and society at large. While data protection shouldn’t get in the way of innovation, you do need to think about privacy measures when moving forward with AI projects.
If you’re conducting a project involving the use of AI and you’d like help with data protection, why not Contact Us for an informal chat?
Simon Blanchard, August 2020
The information provided and the opinions expressed in this document represent the views of the Data Protection Network. They do not constitute legal advice and cannot be construed as offering comprehensive guidance on the EU General Data Protection Regulation (GDPR) or other statutory measures referred to.