Is bias and discrimination in AI a problem?

September 2022

Artificial Intelligence - good governance will need to catch up with the technology

The AI landscape

We hear about the deployment and use of AI in many settings. The types and frequency of use are only going to increase. Major uses include:

  • Cybersecurity analysis to identify anomalies in IT structures
  • Automating repetitive maintenance tasks and guiding technical support teams
  • Ad tech to profile and segment audiences for advertising targeting and optimise advertising buying and placement
  • Reviewing job applications to identify the best-qualified candidates in HR
  • Helping research scientists find patterns in health data to identify new treatments for cancer
  • Predicting equipment failure in manufacturing
  • Detecting fraud in banking by analysing irregular patterns in transactions.
  • TV and movie recommendations for Netflix users
  • Inventory optimisation and demand forecasting in retail & transportation
  • Programming cars to self-drive

Overall, the different forms of AI should serve to improve our lives, but from a privacy point of view there is a danger that the governance around AI projects is lagging behind the evolving technology.

In that context, tucked away in its three-year plan, published in July, the ICO highlighted that AI-driven discrimination might become more of a concern. In particular, the ICO is planning to investigate concerns about the use of algorithms to sift recruitment applications.

Why recruitment applications?

AI is used widely in the recruitment industry. A Gartner report suggested that all recruitment agencies use it for at least some of their candidate sifting. The CEO of the US job site ZipRecruiter is quoted as saying that three-quarters of submitted CVs are read by algorithms. There is plenty of scope for data misuse, hence the ICO’s interest.

The Amazon recruitment tool – an example of bias/discrimination

The ICO is justified in its concerns around recruitment AI. Famously, Amazon developed its own tool to sift through applications for developer roles. The model was trained on 10 years of recruitment data from an employee pool that was largely male. As a result, the model discriminated against women and reinforced the gender imbalance by downgrading applications that indicated the candidate was a woman.

What is AI?

AI can be defined as: 

“using a non-human system to learn from experience and imitate human intelligent behaviour”

The reality is that most “AI” applications are machine learning. That is, models are trained to predict outcomes using data collected in the past. Pure AI is technology designed to simulate human behaviour. For simplicity, let’s call machine learning AI.
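
To make the idea of “training on past data” concrete, here is a minimal, illustrative sketch. It assumes Python with scikit-learn and uses made-up numbers; it is not any particular vendor’s recruitment model.

```python
# Illustrative only: a model learns a rule from past outcomes,
# then applies that rule to new cases.
from sklearn.linear_model import LogisticRegression

# Past data: [years_of_experience, has_relevant_qualification] for previous applicants
X_train = [[1, 0], [2, 1], [5, 1], [7, 0], [3, 1], [0, 0]]
y_train = [0, 1, 1, 1, 1, 0]  # 1 = was hired, 0 = was not

model = LogisticRegression().fit(X_train, y_train)

# The trained model now predicts an outcome for a new applicant,
# based entirely on the patterns in the historic data it was given.
new_applicant = [[4, 1]]
print(model.predict(new_applicant))        # predicted decision, e.g. [1]
print(model.predict_proba(new_applicant))  # the probabilities behind that prediction
```

If the historic data reflects a biased process, the model will simply learn and repeat that bias, which is exactly what happened in the Amazon example above.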

Decisions made using AI are either fully automated or with a “human in the loop”. The latter can safeguard individuals against biased outcomes by providing a sense check of outcomes. 

In the context of data protection, it is becoming increasingly important that those impacted by AI decisions should be able to hold someone to account.

You might hear that all the information is in a “black box” and that how the algorithm works cannot be explained. This excuse isn’t good enough – it should be possible to explain how a model has been trained and risk assess that activity. 

How is AI used? 

AI can be used to make decisions:

1.     A prediction – e.g. you will be good at a job

2.     A recommendation – e.g. you will like this news article

3.     A classification – e.g. this email is spam. 

The benefits of AI

AI is generally a force for good:

1.     It can automate a process and save time

2.     It can optimise the efficiency of a process or function (often seen in factory or processing plants)

3.     It can enhance the ability of individuals – often by speeding up processes

Where do data protection and AI intersect?

An explanation of AI-assisted decisions is required: 

1.     The decision is made by a process without any meaningful human involvement, and

2.     It produces legal or similarly significant effects on an individual – e.g. not getting a job.

Individuals should expect an explanation from those accountable for an AI system. Anyone developing AI models using personal data should ensure that appropriate technical and organisational measures are in place to integrate safeguards into processing. 

What data is in scope?

  • Personal data used to train a model
  • Personal data used to test a model
  • On deployment, personal data used or created to make decisions about individuals

If no personal data is included in a model, AI is not in scope for data protection. 

How to approach an AI project?

 Any new AI processing with personal data would normally require a Data Protection Impact Assessment (DPIA). The DPIA is useful because it provides a vehicle for documenting the processing, identifying the privacy risks as well as identifying the measures or controls required to protect individuals. It is also an excellent means of socialising the understanding of AI processing across an organisation. 

Introducing a clear governance framework around any AI projects will increase project visibility and reduce the risks of bias and discrimination. 

Where does bias/discrimination creep in?

The Equality Act 2010 prohibits behaviour that discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:

  • Age
  • Disability
  • Gender reassignment
  • Marriage and civil partnership
  • Pregnancy and maternity
  • Race
  • Religion or belief
  • Sex
  • Sexual orientation. 

When using an AI system, you need to ensure, and be able to show, that your decision-making process does not result in discrimination.

Our Top 10 Tips

  1. Ask how the algorithm has been trained – the “black box” excuse isn’t good enough
  2. Review the training inputs to identify possible bias with the use of historic data
  3. Test the outcomes of the model – this seems obvious, but it is not done regularly enough (see the sketch after this list)
  4. Consider the extent to which the past will predict the future when training a model – recruitment models will have an inherent bias if only based on past successes
  5. Consider how to compensate for bias built into the training – a possible form of positive discrimination
  6. Have a person review the model’s outcomes when they are challenged, and give that person the authority to overturn them
  7. Incorporate your AI projects into your data protection governance structure
  8. Ensure that you’ve done a full DPIA identifying risks and mitigations
  9. Ensure that you’ve documented the processes and decisions to incorporate into your overall accountability framework
  10. Consider how you will address individual rights – can you easily identify where personal data has been used or has it been fully anonymised? 
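
As a minimal illustration of tip 3, the sketch below (plain Python, with hypothetical numbers) compares a model’s selection rates across two groups; a large gap is a warning sign that the model may be reproducing historic bias.

```python
# Illustrative fairness check: compare how often the model shortlists
# applicants from different groups (hypothetical outcomes).
decisions = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True),   ("male", True),    ("male", False),   ("male", True),
]

def selection_rate(group):
    outcomes = [shortlisted for g, shortlisted in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_f = selection_rate("female")  # 0.25
rate_m = selection_rate("male")    # 0.75

# A ratio well below 1 suggests one group is being shortlisted far less often,
# and the training data and model need further investigation.
print(f"female: {rate_f:.0%}, male: {rate_m:.0%}, ratio: {rate_f / rate_m:.2f}")
```

A check like this does not prove or disprove discrimination on its own, but it gives a simple, repeatable way to spot outcomes that warrant a closer look.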

In summary

AI is complex and fast-changing. Arguably, the governance around the use of personal data is having to catch up with the technology. Even where people believe these models are mysterious and difficult to understand, a lack of explanation for how they work is not acceptable.

In future, clearer processes for good governance will have to develop so that the risks are understood and mitigated, and data subjects are not disadvantaged.

Data Retention Guide

Data retention tools, tips and templates

This comprehensive guide takes you through the key steps and considerations when approaching data retention. Whether you’re starting out or reviewing your retention policy and schedules, we hope this guide will support your work.

The guide, first published in June 2020, was developed and written by data protection specialists from a broad range of organisations and sectors. A huge thank you to all those who made it possible.

Suppliers – why your contracts and security are important

Processors and controllers are both accountable

Do you provide a service to clients and handle your client’s personal data? If you’re acting as a processor, a recent GDPR fine serves as a helpful reminder to be sure to have all your ducks in a row.

There’s a clear warning: you shouldn’t just assume the contracts your clients ask you to sign are okay, nor can you simply say you have robust security measures in place; you actually have to have them!

In this recent case a software publisher, acting as a processor for their clients, was fined 1.5 million Euros by the French regulator (CNIL) following a data breach involving sensitive health data.

It was found data was exfiltrated by unauthorised parties from a poorly protected server. In a nutshell the key findings were:

  • Significant gaps in the processor’s security processes
  • Contractual documentation which failed to include mandatory obligations required under Article 28 of GDPR.

It’s worth noting the fine was based on both these counts. The ruling makes it clear processors should be wary of relying on their clients to make sure contractual terms are up to scratch. It’s the responsibility of both parties.

Here’s a quick recap on how suppliers can minimise their risks.

Getting the relationship clear

The most important first step is to establish the relationship between your company and the other party.

  • Are you handling a client’s data on their behalf, under their instruction, to provide a service to them?
  • Are you acting as controller, clearly determining how the personal data will be used for your own purpose(s)?
  • Are you both? i.e. acting as a controller in certain circumstances, but a processor for specific services you provide to clients.

What are the contractual requirements?

Once you’re clear you are a processor, acting under your client’s instructions, the law states your arrangements with clients must be covered by a binding agreement. EU and UK GDPR set out specific provisions which must be written into such contracts. In brief these are as follows:

1. Types of personal data & categories of data subject

The contract needs to specify what types of personal data you’ll be handling. It should also include details of whether this data relates to your client’s employees, patients, customers, and so forth.

2. Nature, purpose, duration of processing

The contract should describe the nature of the service(s) you provide, the purpose(s) this serves and the term of the contract. The agreement should cover your client’s instructions on what you are permitted to do with their data.

3. The rights and duties of each party

The obligations of both parties should be clearly defined. For example, the client’s obligation to have a lawful basis for processing, its responsibility to fulfil individual privacy rights and your commitment as a supplier to not use your client’s data for any other purpose.

4. Technical and organisational measures

As a supplier you need to provide sufficient guarantees to implement proportionate technical and organisational measures to meet requirements of UK/EU GDPR.

5. Sub-processors

If you engage other companies (‘sub processors’) to support you in delivering your services, you’ll need specific or general written authorisation from your client(s). If you make any changes to which sub-processors you use (including software providers), you’ll need to tell your client and give them the opportunity to object. Contractual terms should stipulate that you are accountable for your sub-processors.

6. International transfers

If relevant, the agreement should include details and provisions for any transfers of personal data to a third country. For example if you are based in the UK, a transfer to any other country. This would include details of any sub-processors based outside the UK. A transfer is often associated with the act of sending or transmitting personal data from one country to another. It should be noted the definition also covers cases where personal data is made ‘available’, in other words can be accessed in a third country.

7. Duty of confidentiality

There must be a confidentiality clause, which commits you to ensuring any of your staff authorised to access the client’s data are committed to a duty of confidentiality or are under a statutory obligation of confidentiality.

8. Assisting your clients

The contract should cover your commitment to assisting your clients, where necessary, with handling individual privacy rights, handling data breaches and conducting data protection impact assessments.

9. Return or destruction of data

It should be clear what happens to the client’s data when the contract ends. Does the client want you to return the data or destroy it?

10. Audits and inspections

As a processor you must agree to make available all information necessary to demonstrate your compliance and agree to audits, including inspections by your client or their authorised auditor.

Processors have obligations

This recent CNIL fine shows you can’t just sign a contract, sit back and relax.

As a processor you’re responsible for your sub-processors, data transfers, staff training and confidentiality, assisting your clients when necessary and so forth. You have to be sure to implement the technical and organisational measures you said you would to protect your client’s data.

While some clients will ask you to jump through multiple hoops as part of their due diligence process, making you clearly demonstrate your security measures are robust, others may not be so picky. But that doesn’t release you from your responsibilities.

The law and this recent fine make it clear processors can be held liable. In the event of a breach, your contractual arrangements and internal practices could come under rigorous scrutiny.

Data Subject Access Request Guide

Being prepared and handling DSARs

Handling Data Subject Access Requests can be complex, costly and time-consuming. How do you make sure you’re on the front foot, with adequate resources, understanding and the technical capability to respond within a tight legal timeframe?

This guide aims to take you through the key steps to consider, such as…

  • Being prepared
  • Retrieving the personal data
  • Balancing complex requests
  • Applying redactions & exemptions
  • How technology can help

Is your Privacy Notice complete?

April 2022

A GDPR fine reveals gaps in necessary privacy information

A core GDPR theme is transparency; being upfront and open about how people’s personal information is collected and used. People have a fundamental right to be informed and one of the key ways organisations can do this is with easily accessible privacy notices.

Four years ago, in the run up to GDPR enforcement, many businesses rushed to make sure their privacy notices met the enhanced and specific requirements.

  • When did you last review yours?
  • Have your business activities changed in recent years?
  • Are you sure you’ve got everything covered?

It can be easy to think nobody actually reads our privacy notices, but some do, and a regulator most definitely would. A recent 725,000 Euro fine for a lack of transparency shows how it can come back to haunt you if you’ve missed vital aspects out, or not been as clear as you could have been.

This is an area in which some major charities were found wanting before GDPR was even enforced. Back in 2017 the Information Commissioner’s Office (ICO) issued a series of fines, and a key finding was that the charities had failed to tell people about activities such as wealth-screening and appending telephone numbers.

The GDPR fine

Fast forward to 2022 and a recent fine against Klarna Bank AB, by the Swedish Data Protection Authority (IMY), reveals a failure to give customers necessary privacy information.

What necessary information did the bank not provide?

  • Purposes and lawful basis for processing
    It was found for one of the bank’s services Klarna did not provide information on the purpose(s) for which it was processing personal data and the lawful basis/bases it was relying on.
  • Recipients data is shared with
    It was found that incomplete and misleading information was provided about the other companies with which personal data was shared.
  • International transfers
    Information was not given on which countries outside the EU/EEA personal data was transferred to. There was also no information about the safeguards which might apply to such transfers.
  • Individual rights
    Incomplete information was provided about people’s privacy rights, such as the right to erasure, data portability and the right to object.

In conclusion it was found Klarna had failed to fulfil the basic principle of transparency and people’s right to information.

Privacy notice checklist

As a reminder for us all, here are the key points which should be covered in privacy notices. This checklist is based on Article 13 of UK/EU GDPR and ICO guidance.

The 7 essential elements

  1. Name and contact details of your organisation
  2. Purposes of processing – explain each different purpose you use people’s personal information for.
  3. Lawful basis for processing – explain the lawful basis you rely on to collect and use people’s personal data.
  4. Data retention – tell people how long you envisage keeping personal data for, or at least the criteria used to decide retention periods.
  5. Privacy rights – tell people what their privacy rights are and how they can exercise them. The right of access, erasure, objection, rectification, data portability, restriction.
  6. Right to withdraw consent – tell people they can withdraw their consent at any time, where this is the lawful basis you are relying on. It should be as easy to withdraw consent as it is to give it and you should tell people how they can withdraw their consent.
  7. Right to lodge a complaint – tell people they have the right to complain to a supervisory authority, for example the Information Commissioner’s Office in the UK.

7 more points to include, if relevant for your business

Where applicable you’re also required to provide the following details:

  1. DPO – Provide contact details of your Data Protection Officer (if you have appointed one)
  2. Data Protection Representative – If you are based outside the EU/UK, but you offer services to, or monitor the behaviour of, people based in the EU/UK, you should have a Data Protection Representative and provide contact details for them.
  3. Legitimate Interests – Explain which purposes you rely on legitimate interests for.
  4. Recipients, or categories of recipients – Provide details of who you’ll share people’s personal data with. This includes suppliers acting as processors, handling data on your behalf. ICO guidance states you can provide specific names, or at least the categories of organisation they fall within.
  5. International Transfers – Inform people if you transfer their personal data to any countries outside the UK (or if based in the EU, outside the EU). Explain whether transfers are based on an adequacy decision. If not provide a description of other safeguards in place, such as Standard Contractual Clauses.
  6. Automated decision-making, including profiling – Tell people if you make solely automated decisions, including profiling that may have a legal or similar significant effect on individuals. Meaningful information should be provided about the logic involved, the significance and envisaged consequences.
  7. Statutory/contractual obligations – Let people know if you are required to collect their data by law or under contract, and the consequences should they not provide necessary information.

In addition to the above there are some other best practice points, such as indicating when the privacy notice was last updated and offering further assurances surrounding how personal data is protected.

Furthermore, if you collect details about people from another source, in other words not directly from them, you should make sure you tell them you are handling their personal data and provide the relevant privacy information.

This case serves as a reminder that we need to regularly review our privacy notices. Put very simply, the law says there should be no surprises about how we’re using people’s personal data.

Our privacy notice may be the least clicked link on our websites, but it’s not just regulators and people like me who read them. It’s not unusual for businesses, as part of their data protection due diligence when considering working with other companies, to take a peek at privacy notices to check they look relatively in order.

Dark patterns: is your website tricking people?

April 2022

Why should we be concerned about dark patterns?

Do you ever feel like a website or app has been designed to manipulate you into doing things you really don’t want to do? I bet we all have. Welcome to the murky world of ‘dark patterns’.

This term was originally coined in 2010 by user experience specialist Harry Brignull, who defines dark patterns as “features of interface design crafted to trick people into doing things they might not want to do and which benefit the business”.

Whenever we use the internet, businesses are fighting for our attention and it’s often hard for them to cut through. And we often don’t have the time or inclination to read the small print; we just want to achieve what we set out to do.

Businesses can take advantage of this. Sometimes they make it difficult to do things which should, on the face of it, be really simple, like cancelling or saying no. They may try to lead you down a different path that suits their business better and leads to higher profits.

These practices are in the spotlight, and businesses could face more scrutiny in future. Sometimes dark patterns are deliberate, sometimes they may be accidental.

What are dark patterns?

There are many interpretations of dark patterns and many different examples. Here’s just a handful to give you a flavour – it’s by no means an exhaustive list.

  • ‘Roach Motel’ – this is where the user journey makes it easy to get into a situation, but hard to get out. (Perhaps ‘Hotel California’ might have been a better name?). For example, when it’s easy to sign up to a service, but very difficult to cancel it because it’s buried somewhere you wouldn’t think to look. And when you eventually find it, you still have to wade through several messages urging you not to cancel.
  • FOMO (Fear Of Missing Out) – this for example is when you’re hurried into making a purchase by ‘urgent’ messages showing a countdown clock or alert messages saying the offer will end imminently.
  • Overloading – this is when we’re confronted with a large number of requests, information, options or possibilities to prompt us to share more data. Or it could be used to prompt us to unintentionally allow our data to be handled in a way we’d never expect.
  • Skipping – this is where the interface or user experience is designed in such a way that we forget, or don’t think about, the data protection implications. Some cookie notices are designed this way.
  • Stirring – this affects the choices we make by appealing to our emotions or using visual cues. For example, using a certain colour for buttons you’d naturally click for routine actions, getting you into the habit of clicking on that colour, then suddenly using that colour for the paid-for option and making the free option you were after hard to spot.
  • Subliminal advertising – this is the use of images or sounds to influence our responses without us being consciously aware of it. This is banned in many countries as deceptive and unethical.

Social engineering?

Some argue the ‘big players’ in search and social media have been the worst culprits in the evolution and proliferation of dark patterns. For instance, the Netflix documentary ‘The Social Dilemma’ argued that Google and Facebook have teams of engineers mining behavioural data for insights on user psychology to help them evolve their interface and user experience.

The mountain of data harvested when we search, browse, like, comment, post and so on can be used against us, to drive us to behave the way they want us to, without us even realising. The rapid growth of AI could push this all to a whole new level if left unchecked.

The privacy challenge

Unsurprisingly there’s a massive cross-over between dark patterns and negative impacts on a user’s privacy. The way user interfaces are designed can play a vital role in good or bad privacy.

In the EU, discussions about dark patterns (under the EU GDPR) tend to concentrate on how dark patterns can be used to manipulate buyers into giving their consent – and point out that consent would be invalid if it’s achieved deceptively or not given freely.

Here are some specific privacy related examples.

  • Tricking you into installing an application you didn’t want, i.e. consent is not unambiguous or freely given.
  • When the default privacy settings are biased to push you in a certain direction. For example, on cookie notices where it’s much simpler to accept than object, and can take more clicks to object. Confusing language may also be used to manipulate behaviour.
  • ‘Privacy Zuckering’ is a term used for when you’re tricked into publicly sharing more information about yourself than you really intended to. It’s named after Facebook’s co-founder and CEO, but the practice isn’t unique to them; LinkedIn, for example, has been fined for it.
  • When an email unsubscribe link is hidden within other text.
  • Where more screen space is given to the options the business wants you to take, and less space to options the customer might prefer. For example, prominently offering a recurring subscription rather than a one-off purchase.

Should businesses avoid using dark patterns?

Many will argue YES! Data ethics is right at the heart of the debate. Businesses should ask themselves if what they are doing is fair and reasonable to try to encourage sales and if their practices could be seen as deceptive. Are they doing enough to protect their customers?

Here are just a few reasons to avoid using dark patterns:

  • They annoy your customers and damage their experience of your brand. A survey by Hubspot found 80% of respondents said they had stopped doing business with a company because of a poor customer experience. If your customers are dissatisfied, they can and will switch to another provider.
  • They could lead to higher website abandon rates.
  • Consent gathered by manipulating consumer behaviour is unlikely to meet the GDPR consent requirements, i.e. freely given, specific, informed and unambiguous. So your processing could turn out to be unlawful.

Can these effects happen by mistake?

Dark patterns aren’t always deliberate. They can arise due to loss of focus, short-sightedness, poorly trained AI models, or a number of other factors. However they are more likely to occur when designers are put under pressure to deliver on time for a launch date, particularly when commercial objectives are prioritised above all else.

Cliff Kuang, author of “User Friendly”, says there’s a tendency to make it easy for users to perform the tasks that suit the company’s preferred outcomes. The controls for limiting functionality or privacy controls can sometimes be an afterthought.

What can businesses do to prevent this?

In practice it’s not easy to strike the right balance. We want to provide helpful information to help our website / app users make decisions. It’s likely we want to ‘nudge’ them in the ‘right’ direction. But we should be careful we don’t do this in ways which confuse, mislead or hurry users into doing things they don’t really want to do.

It’s not like the web is unique in this aim (it’s just that we have a ton of data to help us). In supermarkets, you used to always see sweets displayed beside the checkout. A captive queuing audience, and if it didn’t work on you, a clear ‘nudge’ to your kids! But a practice now largely frowned upon.

So how can we do ‘good sales’ online without using manipulation or coercion? It’s all about finding a healthy balance.

Here’s a few suggestions which might help your teams:

  • Train your product developers, designers & UX experts – not only in data protection but also in dark patterns and design ethics. In particular, help them recognise dark patterns and understand the negative impacts they can cause. Explain the principles of privacy by design and the conditions for valid consent.
  • Don’t allow business pressure and priorities to dictate over good ethics and privacy by design.
  • Remember data must always be collected and processed fairly and lawfully.

Can dark patterns be regulated?

The proliferation of dark patterns over recent years has largely been unrestricted by regulation.

In the UK & Europe, where UK & EU GDPR are in force, discussions about dark patterns have mostly gravitated around matters relating to consent – where that consent may have been gathered by manipulation and may not meet the required conditions.

In France, the CNIL (France’s data protection authority) has stressed the design of user interfaces is critical to help protect privacy. In 2019 CNIL took the view that consent gathered using dark patterns does not qualify as valid freely given consent.

Fast forward to 2022 and the European Data Protection Board (EDPB) has released guidelines: ‘Dark patterns in social media platform interfaces: How to recognise and avoid them’.

These guidelines offer examples, best practices and practical recommendations to designers and users of social media platforms, on how to assess and avoid dark patterns in social media interfaces which contravene GDPR requirements. The guidance also contains useful lessons for all websites and applications.

They remind us we should take into account the principles of fairness, transparency, data minimisation, accountability and purpose limitation, as well as the requirements of data protection by design and by default.

We anticipate EU regulation of dark patterns may soon be coming our way. The International Association of Privacy Professionals (IAPP) recently said, “Privacy and data protection regulators and lawmakers are increasingly focusing their attention on the impacts of so-called ‘dark patterns’ in technical design on user choice, privacy, data protection and commerce.”

Moves to tackle dark patterns in the US

The US Federal Trade Commission has indicated it’s giving serious attention to business use of dark patterns and has issued a complaint against Age of Learning for the use of dark patterns in its ABC Mouse service.

Looking at state-led regulations, in California modifications to the CCPA have been proposed to tackle dark patterns. The Colorado Privacy Act also looks to address this topic.

What next?

It’s clear businesses should be mindful of dark patterns and consider taking an ethical stance to protect their customers and website users. Could your website development teams be intentionally or accidentally going too far? It’s good practice to train website and development teams so they can prevent dark patterns occurring, intentionally or by mistake.

Making your RoPA work for your business

April 2022

Records of Processing Activities

Creating and maintaining Records of Processing Activities is a core data protection obligation for many businesses, but it’s clear it’s an area many struggle with.

Our Privacy Pulse Report 2022 revealed this to be the top challenge facing DPOs and privacy teams.

It’s an area which was raised in the UK Government’s consultation on UK data law reform. Proposals included introducing a more flexible and proportionate approach to record keeping.

Currently, the level of detail required under UK GDPR makes records time-consuming to create. Maintaining these records over time as your business processing evolves requires resources and ongoing engagement from across the organisation.

However, even if the data reform proposals go through, it’s clear businesses won’t be able to rip up and disregard record keeping activities.

Maintaining a central record of what personal data you hold, what it’s used for, where it’s stored, how it’s protected and who it’s shared with is a sensible and valuable asset for any organisation.
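
To illustrate, here is a minimal sketch (Python, with hypothetical field names and values) of the kind of information a single RoPA entry might capture for one processing activity; a real record would need to reflect the full requirements of Article 30.

```python
# Illustrative sketch of one RoPA entry for a single processing activity.
from dataclasses import dataclass

@dataclass
class RopaEntry:
    activity: str            # the processing activity
    purpose: str             # what the data is used for
    lawful_basis: str        # e.g. consent, contract, legitimate interests
    data_categories: list    # what personal data is held
    storage_location: str    # where it's stored
    security_measures: list  # how it's protected
    recipients: list         # who it's shared with, including processors
    retention_period: str    # how long it's kept

payroll = RopaEntry(
    activity="Payroll",
    purpose="Paying employees",
    lawful_basis="Contract",
    data_categories=["name", "bank details", "salary"],
    storage_location="HR system (UK data centre)",
    security_measures=["role-based access controls", "encryption at rest"],
    recipients=["payroll bureau (processor)"],
    retention_period="6 years after employment ends",
)
print(payroll.activity, "-", payroll.lawful_basis)
```

Whatever format you use, the value comes from keeping one entry per activity and keeping it up to date, as the reasons below illustrate.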

6 reasons why your RoPA should be a valuable asset

1. Risk awareness

Identifying and recording your business activities means you can fully understand the breadth and sensitivity of your data processing. This can help you to clearly identify where data protection risks lie, so you can establish priorities and fully get to grips with mitigating these risks.

2. Lawful processing

Confirming and recording which lawful bases you’re relying on for each processing task means you can check you’re meeting the relevant conditions for each basis, be it consent, contract, legitimate interests and so forth.

3. Personal data breaches

Your RoPA should be the ‘go to’ place if you suffer a breach. It can help you to identify what personal data may have been exposed and how sensitive that data is, who might be affected, which processors might be involved and so on, helping you to make a rapid risk assessment (within the 72-hour reporting window) and to take good decisions to mitigate risks from the breach.

4. Individual privacy rights

If you receive a Data Subject Access Request, your records can help to locate and access the specific data required to fulfil the request. If you receive an erasure request, you can quickly check your lawful basis for processing and see if the right applies.

5. Transparency

With good records in place, you can be confident you’ve identified all the types of activities which need to be covered in your privacy notice.

6. Suppliers (processors)

Logging all your processors can support you in keeping on top of supplier management including due diligence, contractual requirements and international data transfers.

While many may not find documentation and record keeping much fun, try to sell the benefits, get key stakeholders on board and bake it into your routine business activities.

What does the IKEA CCTV story tell us?

April 2022

Only set up video surveillance if underpinned by data protection by design and default

What happened?

Following an internal investigation, IKEA was forced to apologise for placing CCTV cameras in the ceiling voids above the staff bathroom facilities in its Peterborough depot. The cameras were discovered and removed in September 2021, but the investigation only concluded in late March 2022.

An IKEA spokesman said:

 “Whilst the intention at the time was to ensure the health and safety of co-workers, we understand the fact that colleagues were filmed unknowingly in these circumstances will have caused real concern, and for this we are sincerely sorry.”

The cameras were installed following “serious concerns about the use of drugs onsite, which, owing to the nature of work carried out at the site, could have very serious consequences for the safety of our co-workers”.

They had been sanctioned following “multiple attempts to address serious concerns about drug use, and the use of false urine samples as a way of disguising it”.

“The cameras placed within the voids were positioned only to record irregular activity in the ceiling voids,” he said.

“They were not intended to, and did not, record footage in the toilet cubicles themselves. However, as a result of ceiling tiles becoming dislodged, two cameras inadvertently recorded footage of the communal areas of two bathrooms for a period of time in 2017. The footage was not viewed at the time and was only recovered as part of these investigations.”

Apology and new ICO guidance

The key question raised by this incident is where to draw the line. When is it inappropriate to set up CCTV? In this instance, the company had concerns about drug misuse – but was that a good enough reason? I think a lot of us intuitively felt the answer was no. 

This apology conveniently coincides with the recent publication of new guidance on video surveillance from the ICO, covering UK GDPR and the Data Protection Act 2018.

This guidance is not based on any changes in the legislation – more an update to provide greater clarity about what you should be considering.

Video surveillance definition

The ICO guidance includes all the following in a commercial setting:

  • Traditional CCTV
  • ANPR (automatic number plate recognition)
  • Body Worn Video (BWV)
  • Facial Recognition Technology (FRT)
  • Drones
  • Commercially available technologies such as smart doorbells and dashcams (not domestic settings)

Guidance for domestic use is slightly different.

Before setting up your video surveillance activity 

As part of the system setup, it’s important to create a record of the activities taking place. This should be included in the company RoPA (Record of Processing Activities).

As part of this exercise, you need to identify:

  • the purpose of the surveillance
  • the appropriate lawful basis for processing
  • the necessary and proportionate justification for any processing
  • any data-sharing agreements
  • the retention periods for any personal data

 As with any activity relating to the processing of personal data, the organisation should take a data protection by design and default approach when setting up the surveillance system.

Before installing anything, you should also carry out a DPIA (Data Protection Impact Assessment) for any processing that’s likely to result in a high risk for individuals. This includes:

  • Processing special category data
  • Monitoring publicly accessible places on a large scale
  • Monitoring individuals at a workplace

A DPIA means you can identify any key risks as well as potential mitigation for managing these. You should assess whether the surveillance is appropriate in the circumstances.

In an employee context it’s important to consult with the workforce, consider their reasonable expectations and the potential impact on their rights and freedoms. One could speculate that IKEA may not have gone through that exercise.

Introducing video surveillance

Once the risk assessment and RoPA are completed, other areas of consideration include:

  • Surveillance material should be stored securely to prevent unauthorised access
  • Any data transmitted wirelessly or over the internet requires encryption to prevent interception
  • How easily data can be exported to fulfil DSARs
  • Ensuring adequate signage is in place to define the scope of what’s captured and used.

Additional considerations for Body Worn Video  

  • It’s more intrusive than CCTV so the privacy concerns are greater
  • Whether the data is stored centrally or on individual devices
  • What user access controls are required
  • Establishing device usage logs
  • Whether you want continuous or intermittent recording
  • Whether audio and video should be treated as two separate feeds

In any instance where video surveillance is in use, it’s paramount individuals are aware of the activity and understand how that data is being used.