Artificial Intelligence – helping businesses address the privacy risks

August 2021

The use of artificial intelligence (AI) is increasing at great pace, driving valuable new benefits across all areas of business and society. We see its applications expanding across many areas of our daily lives, from social media through to self-driving and self-parking cars, and medical applications.

However, as with any new technology, there can be challenges too. How can we be sure we are protecting people from risk and potential harm when processing their personal data within AI systems?

As with any other use of personal data, businesses need to ensure they comply with the core data protection principles when designing, developing or productionising AI systems which use personal data.

You may recall that in April 2021 the European Commission published its proposal for a new regulation harmonising the rules governing artificial intelligence.

The regulation of AI is a tricky balancing act. On the one hand, there’s the desire not to hinder research, development and the adoption of new technologies that bring growing societal benefits – but those exciting opportunities must be balanced against the need to protect individuals from the inherent risks.

So how can we strike the right balance?

AI privacy ‘toolkit’

The ICO have published an improved ‘beta’ version of their AI toolkit, which aims to help organisations using AI to better understand and assess data protection risks.

It’s targeted at two main audiences: those with a compliance focus, such as DPOs, general counsel, risk managers and senior management; and technology specialists, such as AI/ML developers, data scientists, software developers and engineers, and cybersecurity and IT risk managers.

So what is the toolkit?

It’s an Excel spreadsheet which maps key stages of the AI lifecycle against the data protection principles, highlighting relevant risks and giving practical steps you can take to assess, manage and mitigate risks.

It also provides suggestions on technical and organisational measures which could be adopted to tackle any risks. The toolkit focuses on four key stages of the AI lifecycle (a brief illustrative sketch of this mapping follows the list):

  • Business requirements and design
  • Data acquisition and preparation
  • Training and testing
  • Deployment and monitoring
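
The toolkit itself is a spreadsheet rather than software, but for the technology specialists in its audience, the mapping it performs can be pictured as a simple data structure. Below is a minimal, illustrative sketch in Python – our own, not part of the ICO toolkit – in which the stage names mirror the list above and the example principles, risks and mitigations are assumptions chosen for illustration.

    from dataclasses import dataclass, field

    # Illustrative only – not the ICO toolkit. The stage names mirror the four
    # lifecycle stages above; the example risk and mitigation text is assumed.
    @dataclass
    class RiskEntry:
        stage: str              # AI lifecycle stage
        principle: str          # data protection principle it maps to
        risk: str               # what could go wrong
        mitigations: list[str] = field(default_factory=list)
        status: str = "open"    # open / mitigated / accepted

    risk_register = [
        RiskEntry(
            stage="Business requirements and design",
            principle="Lawfulness, fairness and transparency",
            risk="No appropriate lawful basis identified for each processing purpose",
            mitigations=["Record the lawful basis for training and deployment in the DPIA"],
        ),
        RiskEntry(
            stage="Data acquisition and preparation",
            principle="Data minimisation",
            risk="Excessive collection of personal data for training",
            mitigations=["Document why each feature is needed",
                         "Drop or pseudonymise fields that are not"],
        ),
    ]

    # Print a simple status view of the register
    for entry in risk_register:
        print(f"[{entry.status}] {entry.stage}: {entry.risk}")

In practice the spreadsheet plays this role; the point is simply that each lifecycle stage carries its own risks and controls, and they need recording somewhere auditable.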

The ICO have quite rightly recognised that the development of AI systems is not always a linear journey from A to B to C. One stage does not necessarily flow straight into another.

It will therefore often be best to take a holistic approach and recognise that you won’t have all the information available for assessment at the ‘product definition’ stage. A DPO (or other privacy champion) will need to stay engaged across all stages of the AI lifecycle.

What kinds of risk are highlighted?

Quite a few actually, including:

  • Failure to adequately handle the rights of individuals
  • Failure to choose an appropriate lawful basis for the different stages of development
  • Issues with training data which could lead to negative impacts on individuals – such as discrimination, financial loss or other significant economic or social disadvantages (a small illustrative check follows this list)
  • Lack of transparency regarding the processes, services and decisions made using AI
  • Unauthorised / unlawful processing, accidental loss, destruction or damage to personal data
  • Excessive collection or use of personal data
  • Lack of accountability or governance over the use of AI and the outcomes it gives
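
Taking the training-data risk as an example, one small, practical first step a data scientist might take is to look at how groups are represented in the training data before any model is trained. The sketch below is a hypothetical, minimal example – the column names (‘sex’ and ‘approved’) are our own assumptions, and it is nowhere near a full fairness assessment – but it is the kind of early check that feeds into the broader assessment at the data acquisition and preparation stage.

    import pandas as pd

    # Hypothetical check, for illustration only: how is each group represented
    # in the training data, and how do positive outcomes break down by group?
    # Column names ("sex", "approved") are assumptions, not from the toolkit.
    def representation_report(df: pd.DataFrame, group_col: str,
                              outcome_col: str) -> pd.DataFrame:
        """Share of records and positive-outcome rate per group."""
        return df.groupby(group_col).agg(
            share=(outcome_col, lambda s: len(s) / len(df)),
            positive_rate=(outcome_col, "mean"),
        )

    if __name__ == "__main__":
        training = pd.DataFrame({
            "sex": ["F", "F", "M", "M", "M", "M"],
            "approved": [1, 0, 1, 1, 1, 0],
        })
        print(representation_report(training, "sex", "approved"))

A heavy skew in either figure doesn’t prove discrimination on its own, but it is exactly the sort of finding worth recording and discussing before moving on to training and testing.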

AI has become a real focus area for the ICO of late. The toolkit follows on the heels of their Guidance on AI and Data Protection and their co-badged guidance with The Alan Turing Institute, Explaining Decisions Made With AI. This is all connected with their commitment to enabling good data protection practice in AI.

Want to join the consultation?

The ICO are currently looking for organisations using their guidance, so they can better understand how it works in practice, keep pace with emerging developments and make sure their guidance and toolkits are genuinely useful to businesses. You can give your feedback to the ICO here.

In summary

The use of AI is exciting and presents many opportunities and potential benefits, but it’s clearly not without its risks. More and more guidance is emerging to help organisations begin to adopt, or continue to expand, their use of AI. The clear message from the Regulator is that this activity must be handled carefully and that data protection must be considered from the outset.

The ICO is keen to work with businesses to make sure its guidance is useful for organisations, so it can continue to support the increasing use of AI.

 

Is this all going in the right direction? We’d be delighted to hear your thoughts. Alternatively, if you’d like data protection advice when designing and developing with AI, we can help. CONTACT US.