Data Governance Quick Guide

January 2023

Taking control of our data

In essence, data governance is a framework of management practices which ensures data is used properly, in line with our organisational aims, the law and best practice.

Think of it as embedding Data Protection by Design and by Default across the organisation. It means business objectives can be met without taking unnecessary risks with data. Data governance helps us to:

  • protect the business and those whose data we process: customers, employees, etc.
  • reduce our organisational risk profile
  • educate our people, by providing policy and guidance on how to use data in safe and appropriate ways
  • build in an ethical approach
  • build our reputation, customer trust and enhance the value of our data assets
  • support our teams’ innovation with use of data.

The 6 data governance steps

Building a robust data governance framework, from the data protection consultancy DPN.

1. Data discovery

It’s vital to identify data assets held across the business, understanding how personal data is being gathered, stored, used and shared. It can be helpful to map where the data is located on systems, and document it.

Most medium to large businesses will need to do this anyway to create and maintain an Information Asset Register (IAR) and Records of Processing Activity (RoPA).
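Even a lightweight, structured record helps here. The sketch below shows one possible shape for a RoPA entry, loosely following the Article 30 headings; the field names and example values are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

# A minimal sketch of a Record of Processing Activity (RoPA) entry.
# Fields loosely follow the Article 30 headings; names are illustrative.
@dataclass
class RopaEntry:
    processing_activity: str            # e.g. "Payroll"
    purpose: str                        # why the data is processed
    data_categories: list[str]          # e.g. ["name", "bank details"]
    data_subjects: list[str]            # e.g. ["employees"]
    recipients: list[str]               # who the data is shared with
    retention_period: str               # e.g. "6 years after employment ends"
    security_measures: list[str] = field(default_factory=list)

# An example entry for a common processing activity
payroll = RopaEntry(
    processing_activity="Payroll",
    purpose="Paying staff salaries",
    data_categories=["name", "bank details", "salary"],
    data_subjects=["employees"],
    recipients=["payroll bureau"],
    retention_period="6 years after employment ends",
    security_measures=["encryption at rest", "role-based access"],
)
```

A register built from records like this doubles as the starting point for data mapping: each entry tells you what is held, why, and where it flows.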

2. Policies & standards

If our people don’t know how we expect them to behave when handling other people’s data, we can’t expect them to do a great job of it. Are your policies and procedures all up to scratch? Having a straightforward, easy-to-understand and practical Data Protection Policy is a good place to start (alongside relevant training). The importance of well-crafted, easy-to-use policies shouldn’t be underestimated.

3. Stakeholder accountability

We need to identify key stakeholders within the business. These are likely to be heads of key functions, such as HR, Operations, Sales & Marketing, and so on.

It’s good to establish data roles and responsibilities, so people are clear what aspects they and others are responsible for. Who has the authority to make decisions about certain data?

4. Risk assessment process

Businesses should have risk assessment procedures to discover, assess, prioritise and take action to mitigate data risks. A governance programme helps teams to identify both existing and emerging risks, so they can be efficiently assessed and mitigated.

Think of data like a balance sheet: it has great potential to create value, but also carries risks and liabilities.

The aim of a data governance programme is to protect both the business and those whose data we process from harms which may arise from, for example, inaccurate data, unlawful or unfair processing, or using people’s data in ways they would not expect or want.
For certain projects it will be necessary to conduct a Data Protection Impact Assessment (DPIA).

5. Technical and organisational measures (TOMs)

Once privacy risks have been identified, we need to consider what measures could be put in place to tackle them. You may choose to mitigate them internally with new procedures or security measures, or perhaps work with a third party to adopt technical or operational measures, such as Privacy Enhancing Technologies.

Organisational measures include making sure there’s good awareness about data protection across the business, and employees receive appropriate training.

6. Executive oversight

Risks should be reported up the line to make sure the senior leadership team has proper oversight and the opportunity to take appropriate action. If your organisation has a Data Protection Officer (DPO), this reporting will be part of the formal accountabilities of their role. But remember, not all businesses need to appoint a DPO.

Overcoming cultural challenges

Data protection and privacy professionals face a cultural challenge to win hearts and minds. I have sometimes heard legal or privacy teams described as ‘the department of no’. That’s not how we want to be seen!

Smart businesses are realising the value of taking privacy seriously. We should help our business colleagues to balance the needs of commercial and operational functions with legal & ethical requirements.

We shouldn’t just explain what the law requires. We must go further and help our colleagues to find practical solutions. Collaboration and mutual understanding are essential ingredients for successful data governance.

Controller or processor? What are we?

November 2022

Are you a service provider acting as a processor? Or a controller engaging a service provider? Is the relationship clear?

There are a few regulatory cases which remind us why it’s important to establish whether we’re acting as a controller or a processor, and to clearly define the relationship in contractual terms.

On paper the definitions may seem straight-forward, but deciding whether you’re acting as a controller, joint-controller or processor can be a contentious area.

Two regulator rulings to note

  • The ICO has taken action against a company providing email data, cleansing and marketing services. In the enforcement notice, it’s made clear the marketing company had classified itself as a processor. The ICO disagreed.
  • The Spanish data protection authority (AEPD) has ruled a global courier service was acting as a controller for the deliveries it was making. Why? Largely due to insufficient contractual arrangements setting out the relationship and the nature of the processing.

Many a debate has been had between DPOs, lawyers and other privacy professionals when trying to classify the relationship between different parties.

It’s not unusual for it to be automatically assumed all suppliers providing a service are acting as processors, but this isn’t always the case. Sometimes joint controllership, or separate distinct controllers, is more appropriate.

More often than not, organisations act as both, being a controller for some processing tasks and a processor for others. Few companies will be solely a processor; for example, most will be a controller for at least their own employment data, and often for their own marketing activities too.

What the law says about controllers and processors

The GDPR tells us a controller means ‘the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data’.

A processor means ‘a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller’.

How to decide which we are

There are some key questions to ask which will help organisations reach a conclusion.

  • Do we decide how and what personal data is collected?
  • Are we responsible for deciding the purposes for which the personal data is processed?
  • Do we use personal data received from a third party for our own business purposes?
  • Do we decide the lawful basis for the processing tasks we are carrying out?
  • Are we responsible for making sure people are informed about the processing? (Is it our privacy notice people should see?)
  • Are we responsible for handling individual privacy rights, such as data subject access requests?
  • Is it us who’ll notify the regulator and/or affected individuals in the event of a significant data breach?

If you’re answering ‘yes’, to some or all of these questions, it’s highly likely you’re a controller.
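The checklist above can be turned into a simple screening aid. This sketch is a minimal illustration only: the question wording is paraphrased, and the “majority of yes answers” threshold is an assumption standing in for the article’s “some or all” – it is not legal advice.

```python
# Paraphrased versions of the screening questions above (illustrative only)
CONTROLLER_QUESTIONS = [
    "Do we decide how and what personal data is collected?",
    "Do we decide the purposes of the processing?",
    "Do we use third-party data for our own business purposes?",
    "Do we decide the lawful basis for the processing?",
    "Is it our privacy notice people should see?",
    "Do we handle individual privacy rights requests?",
    "Would we notify the regulator of a significant breach?",
]

def likely_controller(answers: list[bool]) -> bool:
    """Flag likely controller status when a majority of answers are 'yes'.

    The majority threshold is an assumption; in practice the assessment
    is a judgement call, often made with legal advice.
    """
    return sum(answers) > len(answers) // 2

# A service provider answering 'yes' across the board is almost
# certainly a controller, whatever its contract says.
print(likely_controller([True] * len(CONTROLLER_QUESTIONS)))  # True
```

A tool like this is only a prompt for discussion; as the ICO notes, the reality of who determines purposes and means decides the matter, not a score.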

And the ICO makes it clear it doesn’t matter if a contract describes you as a processor; “organisations that determine the purposes and means of processing will be controllers regardless of how they are described in any contract about processing services”.

Controller or processor? Why it’s important to confirm your status

Controllers have a higher level of accountability to comply with all data protection principles, and are also responsible for the compliance of their processors.

If you are a processor, you must only handle the controller’s data under their instructions.

This means if you’re doing anything else with this data, for your own purposes, you can’t be a processor for those purposes. You will be acting as a controller when the processing is for your own purposes.

Let’s be clear though, this doesn’t mean a processor can’t make some technical decisions about how personal data is processed.
Data protection law does not prevent processors providing added value services for their clients. But as a processor you must always process data in accordance with the controller’s instructions.

Processors also have a number of direct obligations under UK GDPR – such as the technical and organisational measures they use to protect personal data. A processor is responsible for ensuring the compliance of any sub-processors it uses to fulfil its services to a controller.

Controller-Processor data processing agreements

If the relationship is controller to processor, you must make sure you have a suitable agreement in place. The specific requirements for what must be included in contractual terms between a controller and processor are set out in Article 28 of EU / UK GDPR.

Often overlooked is the need to have clear documented instructions from the controller. These instructions are often provided as an annex to the main contract (or master services agreement), so they can be updated if the processing changes.

There will be times where you’re looking to engage the services of a household name, a well-known and well-used processor. There may be limited or no flexibility to negotiate contractual terms. In such cases, it pays to check the terms and, if necessary, take a risk-based view on whether you wish to proceed.

What’s clear from the Spanish courier case is how important it is to have contracts in place defining the relationship. The ICO ruling demonstrates that even if your contract says you’re a processor, if you are in fact in control of the processing, that designation can be overturned, and you’d be expected to meet your obligations as a controller.

Are we conducting too many DPIAs – or not enough?

October 2022

How to decide when to conduct Data Protection Impact Assessments

Make no mistake, Data Protection Impact Assessments (DPIAs) are a really useful risk management tool. They help organisations to identify likely data protection risks before they materialise, so corrective action can be taken – protecting your customers, staff and the interests of the business.

DPIAs are a key element of the GDPR’s focus on accountability and Data Protection by Design.

It’s not easy working out when a DPIA is necessary, or when it might be useful, even if not strictly required by law. Businesses need to be in control of their exposure to risk, but don’t want to burden their teams with unnecessary work. So it falls to privacy professionals to use their judgement in what can be a delicate balancing act.

Lack of clarity around when DPIAs are genuinely needed could lead businesses to carry out far more DPIAs than needed – whilst others may carry out too few.

When are DPIAs required?

We should check if a DPIA is required during the planning stage of new projects, or when changes are being planned to existing activity. Where needed, DPIAs must be conducted BEFORE the new processing begins.

DPIAs are considered legally necessary when the processing of personal data is likely to involve a ‘high risk’ to the rights and freedoms of individuals.

What does ‘high risk’ look like?

Which types of activity might fall into ‘high risk’ isn’t always clear. Fortunately, the ICO has given examples of processing likely to result in high risk to help you make this call. Regulated sectors, such as financial services and telecoms, have specific regulatory risks to consider too.

Give consideration to the scope, types of data used and the manner of processing. It’s wise to also take account of any protective measures already in place. In situations where the nature, scope, context and purposes of processing are very similar to another activity, where a DPIA has already been carried out, you may not need to conduct another.

Three key steps for a robust DPIA screening process

1. Engage your key teams

In larger organisations, building good relationships with key teams such as Procurement, IT, Project Management, Legal and Information Security can really help. They might hear about projects involving personal data before you do. Make sure they’re aware when a DPIA may be required. This means they’ll be more likely to ‘raise a hand’ and let you know when a project which might require a DPIA comes across their desk.

In smaller businesses there may still be others who can help ‘raise a hand’ and let you know about relevant projects. Work out who those people are.

2. Confirm the business’s appetite for risk

Is your organisation the sort which only wants DPIAs to be carried out when strictly required by law? Or perhaps you want a greater level of oversight – choosing to carry out DPIAs as your standard risk assessment methodology for any significant project involving personal data, even those which might appear to involve lower levels of risk to individuals?

Logic says you’ll never be 100% sure unless you carry out an assessment and DPIAs are a tried and tested way to give you oversight and confidence. But this approach requires more time, resources and commitment from the business. You need to strike the right balance for your organisation.

3. Adopt a DPIA screening process

If you don’t currently use a screening process, you really should consider adopting one. It’s a quick and methodical way to identify if a project does or does not require a DPIA.

You can use a short set of standard questions, which can be provided to stakeholders to complete and return, or discussed in a call, so the question ‘Is a DPIA needed or not?’ can be answered rapidly and with confidence.

Personally I prefer to arrange a short call with the stakeholders, using my screening questionnaire as a prompt to guide the discussion.

Don’t forget to keep a record of your decisions! Including when you decide a DPIA isn’t necessary.
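A screening process like this is easy to keep as a small, documented record. The sketch below is one possible shape, assuming a handful of illustrative ICO-style high-risk indicators; any ‘yes’ triggers a full DPIA, and the record survives even when the decision is “no DPIA needed”.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative high-risk indicators in the style of ICO guidance –
# a real questionnaire would be tailored to the organisation.
SCREENING_QUESTIONS = [
    "Does the project use new or innovative technology?",
    "Does it involve large-scale processing of special category data?",
    "Does it involve systematic monitoring of individuals?",
    "Could it deny people access to a service or benefit?",
]

@dataclass
class ScreeningRecord:
    project: str
    screened_on: date
    answers: list[bool]  # one answer per screening question

    @property
    def dpia_required(self) -> bool:
        # Any 'yes' to a high-risk indicator triggers a full DPIA
        return any(self.answers)

# Record the decision even when the answer is "no DPIA needed"
record = ScreeningRecord("CRM migration", date(2022, 10, 1),
                         [False, False, False, False])
print(record.dpia_required)  # False
```

Keeping these records demonstrates accountability: you can show the regulator you considered the question, not just that you answered it.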

Try not to burden colleagues with unnecessary assessments for every project, if there really is minimal risk. This is unlikely to be a well-received approach. Raise awareness and have a built-in DPIA screening process to make sure you catch the projects which really do warrant a deeper dive.


Is bias and discrimination in AI a problem?

September 2022

Artificial Intelligence - good governance will need to catch up with the technology

The AI landscape

We hear about the deployment and use of AI in many settings. The types and frequency of use are only going to increase. Major uses include:

  • Cybersecurity analysis to identify anomalies in IT structures
  • Automating repetitive maintenance tasks and guiding technical support teams
  • Ad tech to profile and segment audiences for advertising targeting and optimise advertising buying and placement
  • Reviewing job applications to identify the best-qualified candidates in HR
  • Research scientists looking for patterns in health to identify new cures for cancer
  • Predicting equipment failure in manufacturing
  • Detecting fraud in banking by analysing irregular patterns in transactions.
  • TV and movie recommendations for Netflix users
  • Inventory optimisation and demand forecasting in retail & transportation
  • Programming cars to self-drive

Overall, the different forms of AI will serve to improve our lives, but from a privacy point of view there is a danger that the governance around AI projects is lagging behind the evolving technology.

In that context, tucked away in its three-year plan, published in July, the ICO highlighted that AI-driven discrimination might become more of a concern. In particular, the ICO is planning to investigate concerns about the use of algorithms to sift recruitment applications.

Why recruitment applications?

AI is used widely in the recruitment industry. A Gartner report suggested that all recruitment agencies used it for some of their candidate sifting. The CEO of the ZipRecruiter website in the US is quoted as saying that three-quarters of submitted CVs are read by algorithms. There is plenty of scope for data misuse, hence the ICO’s interest.

The Amazon recruitment tool – an example of bias/discrimination

The ICO is justified in its concerns around recruitment AI. Famously, Amazon developed its own tool to sift through applications for developer roles. The model was based on 10 years of recruitment data for an employee pool that was largely male. As a result, the model discriminated against women and reinforced the gender imbalance by filtering out applications from women.

What is AI?

AI can be defined as: 

“using a non-human system to learn from experience and imitate human intelligent behaviour”

The reality is that most “AI” applications are machine learning: models trained to predict outcomes using patterns in past data. Pure AI is technology designed to simulate human behaviour. For simplicity, let’s call machine learning AI.

Decisions made using AI are either fully automated or with a “human in the loop”. The latter can safeguard individuals against biased outcomes by providing a sense check of outcomes. 

In the context of data protection, it is becoming increasingly important that those impacted by AI decisions should be able to hold someone to account.

You might hear that all the information is in a “black box” and that how the algorithm works cannot be explained. This excuse isn’t good enough – it should be possible to explain how a model has been trained and risk assess that activity. 

How is AI used? 

AI can be used to make decisions:

1. A prediction – e.g. you will be good at a job
2. A recommendation – e.g. you will like this news article
3. A classification – e.g. this email is spam.

The benefits of AI

AI is generally a force for good:

1. It can automate a process and save time
2. It can optimise the efficiency of a process or function (often seen in factories or processing plants)
3. It can enhance the abilities of individuals – often by speeding up processes.

Where do data protection and AI intersect?

An explanation of AI-assisted decisions is required when:

1. the decision is made without any human involvement, and
2. it produces legal or similarly significant effects on an individual – e.g. not getting a job.

Individuals should expect an explanation from those accountable for an AI system. Anyone developing AI models using personal data should ensure that appropriate technical and organisational measures are in place to integrate safeguards into processing. 

What data is in scope?

  • Personal data used to train a model
  • Personal data used to test a model
  • On deployment, personal data used or created to make decisions about individuals

If no personal data is included in a model, AI is not in scope for data protection. 

How to approach an AI project?

 Any new AI processing with personal data would normally require a Data Protection Impact Assessment (DPIA). The DPIA is useful because it provides a vehicle for documenting the processing, identifying the privacy risks as well as identifying the measures or controls required to protect individuals. It is also an excellent means of socialising the understanding of AI processing across an organisation. 

Introducing a clear governance framework around any AI projects will increase project visibility and reduce the risks of bias and discrimination. 

Where does bias/discrimination creep in?

The Equality Act 2010 prohibits behaviour which discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:

  • Age
  • Disability
  • Gender reassignment
  • Marriage and civil partnership
  • Pregnancy and maternity
  • Race
  • Religion and belief
  • Sex
  • Sexual orientation. 

When using an AI system, you need to ensure, and be able to show, that your decision-making process does not result in discrimination.

Our Top 10 Tips

  1. Ask how the algorithm has been trained – the “black box” excuse isn’t good enough
  2. Review the training inputs to identify possible bias with the use of historic data
  3. Test the outcomes of the model – this seems obvious, but it’s not done regularly enough
  4. Consider the extent to which the past will predict the future when training a model – recruitment models will have an inherent bias if only based on past successes
  5. Consider how to compensate for bias built into the training – a possible form of positive discrimination
  6. Have a person review the outcomes of the model when they are challenged, and give that person the authority to overturn them
  7. Incorporate your AI projects into your data protection governance structure
  8. Ensure that you’ve done a full DPIA identifying risks and mitigations
  9. Ensure that you’ve documented the processes and decisions to incorporate into your overall accountability framework
  10. Consider how you will address individual rights – can you easily identify where personal data has been used or has it been fully anonymised? 
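Tip 3 (testing model outcomes) can be made concrete. The sketch below compares selection rates between two groups using the “four-fifths rule” often cited in recruitment analytics – an assumed benchmark here, not one the article prescribes; the data is invented for illustration.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates the model selected (1 = selected)."""
    return sum(outcomes) / len(outcomes)

def passes_four_fifths(group_a: list[int], group_b: list[int]) -> bool:
    """Flag potential adverse impact when one group's selection rate
    is below 80% of the other's (the 'four-fifths rule' – an assumed
    benchmark for this sketch)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    if max(rate_a, rate_b) == 0:
        return True  # nobody selected in either group: no disparity to measure
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= 0.8

# Invented sifting outcomes, split by a protected characteristic
men = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]    # 80% selected
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected
print(passes_four_fifths(men, women))   # False – investigate the model
```

A failed check doesn’t prove discrimination on its own, but it is exactly the kind of evidence a DPIA and a human reviewer should be looking at.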

In summary

AI is complex and fast-changing. Arguably, the governance around the use of personal data is having to catch up with the technology. Even when people believe these models are mysterious and difficult to understand, a lack of explanation for how they work is not acceptable.

In future, clearer processes around good governance will have to develop to understand the risks and consider ways of mitigating them, ensuring data subjects are not disadvantaged.

Body worn cameras and privacy implications

Is your use of body worn cameras reasonable and proportionate?

Wearable technologies such as body worn cameras (‘bodycams’) and other recording devices are often used for public safety and security purposes – an obvious example is use by the police and other public services. They can also be used for recreational pursuits.

The use of these devices has increased significantly over recent years as the technology advances in leaps and bounds, and prices fall.

Bodycams can pose a real challenge from a data protection perspective due to their portability. When a bodycam is in use, it effectively becomes a mobile surveillance system which is highly likely to capture images or audio of individuals. Such footage should be regarded as personal data, and businesses using these devices need to be mindful of relevant data protection requirements.

We need to bear in mind that when bodycams are combined with facial recognition technology to uniquely identify individuals (e.g. for safety and security purposes), the data protection concerns increase. If you are actively processing special category data, such as biometric data used within facial recognition systems, you need to identify a suitable condition under Article 9 UK GDPR.

Eight key privacy principles and obligations for bodycams

  1. Fair and lawful processing – you must be able to demonstrate your use of bodycams is both fair and legal. The necessity must be carefully considered. The potential for bodycams to be intrusive means meeting a relatively high threshold to demonstrate your use is genuinely necessary.
  2. Limited purposes & data minimisation – cameras should only record the minimum amount of personal information necessary for specified purposes.
  3. Transparency – did you clearly tell people before you began recording? Are they aware of their rights?
  4. Information security – Any recordings must be stored securely. If video footage is stored on the device itself, or on a memory stick (even temporarily) there’s an additional risk of loss or theft of personal data.
  5. Restricted access – clearly defined rules should be in place covering who can access recordings and for what purposes.
  6. Sharing – the disclosure of images and information should only take place when it’s necessary for specified purposes, or for law enforcement. And checks should be in place before disclosing to law enforcement or other agencies.
  7. Storage limitation – data on individuals (including video & audio) must be retained only for the minimum amount of time required, and then deleted.
  8. Individual rights – you must be able to respond appropriately to any privacy rights requests from individuals (such as the right of access or right to erasure).
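Principle 7 (storage limitation) lends itself to automation. A minimal sketch, assuming an illustrative 31-day retention policy for footage (the window is a policy choice for each organisation, not a figure from this article):

```python
from datetime import date, timedelta

# Illustrative retention window – each organisation must justify its own
RETENTION = timedelta(days=31)

def due_for_deletion(recorded_on: date, today: date) -> bool:
    """True when a clip has outlived the retention window and should be deleted."""
    return today - recorded_on > RETENTION

print(due_for_deletion(date(2022, 6, 1), date(2022, 8, 1)))  # True
```

Scheduling a check like this against the footage store turns the retention policy from a document into an enforced control.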

Carrying out an impact assessment

In most situations it would be wise to conduct a Data Protection Impact Assessment (DPIA) to formally assess and document your approach and how you will meet the data protection obligations.

The DPIA will need to consider each of the relevant principles and evaluate if measures and controls in place are adequate to protect individuals whose personal data may be captured.

Other considerations

The ICO published ‘Guidance on video surveillance’ earlier in 2022. This covers the processing of personal data by video surveillance systems by both public and private sector organisations. Their scope for surveillance systems includes CCTV, ANPR, bodycams, drones (UAVs), facial recognition technology (FRT), and also dashcams and smart doorbell cameras. They pick up on many of the points I’ve summarised above.

The Biometrics and Surveillance Camera Commissioner published a draft update to its ‘Surveillance camera code of practice’ in August 2021, which is still out for consultation.

The draft code includes twelve guiding principles for surveillance system operators. However, as you may anticipate, many of these overlap with the privacy principles I’ve picked out above. I’ve highlighted some other areas covered in this draft code:

  • There must be clear responsibility and accountability for all surveillance camera system activities including images and information collected, held and used.
  • Clear rules, policies and procedures must be in place before a surveillance camera system is used, and these must be communicated to all who need to comply with them.
  • Surveillance camera system operators should consider any approved operational, technical and competency standards relevant to a system and its purpose and work to meet and maintain those standards.
  • There should be effective review and audit mechanisms to ensure legal requirements, policies and standards are complied with in practice, and regular reports should be published.
  • When the use of a surveillance camera system is in pursuit of a legitimate aim, and there’s a pressing need for its use, it should then be used in the most effective way to support public safety and law enforcement with the aim of processing images and information of evidential value.
  • Any information used to support a surveillance camera system which compares against a reference database for matching purposes (for example, ANPR or facial recognition) should be accurate and kept up to date.

I hope you found this article useful. If you’d like to know more please just drop us a line and arrange a chat.

Three Steps to Transparency Heaven

June 2022

A strategic approach to transparency

Transparency is enshrined in one of the key data protection principles – Principle (a): lawfulness, fairness and transparency.

You must be clear, open and honest with people from the start about how you will use their personal data. 

There’s also a requirement to consider a data protection by design and default approach. To legitimately take this approach requires some planning and clear communication between teams about which data is used for what. 

Most companies can pull together a privacy notice. However, as with many things to do with GDPR, creating engaging communications which deliver the correct information in a digestible format is easier said than done.

Recent fines related to lack of transparency

In May we saw a €4.2m fine for Uber from the Italian Data Protection Authority (the Garante) for data protection violations. Among other things, the privacy notice was incorrect and incomplete, there was insufficient detail on the purposes of processing, and data subjects’ rights had not been spelled out.

Earlier this year, Klarna Bank AB was fined by the Swedish Data Protection Authority (IMY) for lack of transparency. 

Be warned, the regulators are taking a look at these documents.

Step 1: Creating your Privacy Notice

Privacy notices have become rather formulaic since 2018 and my colleague Phil wrote a handy checklist of what must and should be included. Take note and have a look to see if you have ticked all the boxes. 

Step 2: Housekeeping your Privacy Notice

The privacy notice is a dynamic document. Keeping it up to date is important. 

  • New data processing activities: Make sure you’re made aware of new technology, new teams, new business processes which may all generate new data processing activities that need to be notified. 
  • Record of Processing Activities: Create a routine to keep your RoPA up to date and ensure any changes are clearly flagged to the DP team.
  • Regulatory changes: Review any change in regulatory guidance. International data transfers are a perfect case in point where the guidance has changed. Changes may necessitate an update to your privacy notice.
  • Supplier due diligence: Review your supplier arrangements – are they carrying out new data processing activities which need to be captured in the notice? Are new suppliers in place, and have they been audited/reviewed?
  • Marketing innovations: Ask your marketing team about their plans as digital marketing developments move at breakneck speed. The use of AI for targeting and segmentation, innovations in digital advertising as well as the evolution of social media platforms all present privacy challenges. In addition you may need to inform consumers of material changes. 

Step 3: Breathing life into your Privacy Notice

It’s a marketing challenge to get people to pay attention to the privacy notice.

  • Use different communication methods – not everyone likes reading long screeds of text. Look at creative communication methods such as infographics, videos and cartoons to get the message across. Channel 4 are an exemplar, as are The Guardian.
  • Use plain English – whenever you write it down, make sure it’s couched in terms your target audience will understand. Various reports place the average reading age at 8, 9 or 11. Plain English, short sentences and easy-to-understand words should be deployed to get your message across.
  • Include information tailored to different target audiences: Companies will sometimes carry out data processing for clients, for consumers and for employees. Trying to cram all this information into one document makes it nigh on impossible for anyone to understand what’s going on. Separate it out and clearly signal what’s relevant to each group.
  • Use layers of communication – the ICO advocates a layered approach to communicating complicated messages. If you create a thread through your messages from clear top-level headlines with clear links to additional information, there’s a higher chance of achieving better levels of comprehension.
  • Keep it short and sweet – having read some of the documents produced by corporates, I am struck by how repetitive they can be. Not only do you lose the will to live, but comprehension levels are low and confusion levels are high. All of which is rather unhelpful.
  • Be upfront and transparent – do not obfuscate and confuse your audience. Although it can feel scary to tell individuals what is happening with their personal data, audiences appreciate the openness when processing is explained clearly. They need to know what’s in it for them. 

Overall, this is a major marketing challenge. Explaining how you use personal data is an important branding project which allows a company to reflect their values and their respect for their customers.

The marketing teams need to get close to their privacy colleagues and use their formidable communication skills to make these important data messages resonate and make sense.

Four years on from GDPR, now is a good time to take a look at your privacy notice to see if it needs a refresh.


Dark patterns: is your website tricking people?

April 2022

Why should we be concerned about dark patterns?

Do you ever feel like a website or app has been designed to manipulate you into doing things you really don’t want to do? I bet we all have. Welcome to the murky world of ‘dark patterns’.

This term was coined in 2010 by user experience specialist Harry Brignull, who defines dark patterns as “features of interface design crafted to trick people into doing things they might not want to do and which benefit the business”.

Whenever we use the internet, businesses are fighting for our attention and it’s often hard for them to cut through. And we often don’t have the time or inclination to read the small print; we just want to achieve what we set out to do.

Businesses can take advantage of this. Sometimes they make it difficult to do things which should, on the face of it, be really simple – like cancelling or saying no. They may try to lead you down a different path that suits their business better and leads to higher profits.

These practices are in the spotlight, and businesses could face more scrutiny in future. Sometimes dark patterns are deliberate, sometimes they may be accidental.

What are dark patterns?

There are many interpretations of dark patterns and many different examples. Here’s just a handful to give you a flavour – it’s by no means an exhaustive list.

  • ‘Roach Motel’ – this is where the user journey makes it easy to get into a situation, but hard to get out. (Perhaps ‘Hotel California’ might have been a better name?). For example, when it’s easy to sign up to a service, but very difficult to cancel it because it’s buried somewhere you wouldn’t think to look. And when you eventually find it, you still have to wade through several messages urging you not to cancel.
  • FOMO (Fear Of Missing Out) – this for example is when you’re hurried into making a purchase by ‘urgent’ messages showing a countdown clock or alert messages saying the offer will end imminently.
  • Overloading – this is when we’re confronted with a large number of requests, information, options or possibilities to prompt us to share more data. Or it could be used to prompt us to unintentionally allow our data to be handled in a way we’d never expect.
  • Skipping – this is where the design of the interface or user experience is done is such a way that we forget, or don’t think about, the data protection implications. Some cookie notices are designed this way.
  • Stirring – this affects the choices we make by appealing to our emotions or using visual cues. For example, using a certain colour for buttons you’d naturally click for routine actions – getting you into the habit of clicking on that colour. Then suddenly using that colour button for the paid for service, and making the free option you were after hard to spot.
  • Subliminal advertising – this is the use of images or sounds to influence our responses without us being consciously aware of it. This is banned in many countries as deceptive and unethical.

Social engineering?

Some argue the ‘big players’ in search and social media have been the worst culprits in the evolution and proliferation of dark patterns. For instance, the Netflix documentary ‘The Social Dilemma’ argued that Google and Facebook have teams of engineers mining behavioural data for insights on user psychology to help them evolve their interface and user experience.

The mountain of data harvested when we search, browse, like, comment, post and so on can be used against us, to drive us to behave the way they want us to – without us even realising. The rapid growth of AI could push this all to a whole new level if left unchecked.

The privacy challenge

Unsurprisingly there’s a massive cross-over between dark patterns and negatively impacting on a user’s privacy. The way user interfaces are designed can play a vital role in good or bad privacy.

In the EU, discussions about dark patterns (under the EU GDPR) tend to concentrate on how dark patterns can be used to manipulate buyers into giving their consent – and point out consent would be invalid if it’s achieved deceptively or not given freely.

Here are some specific privacy related examples.

  • Tricking you into installing an application you didn’t want, i.e. consent is not unambiguous or freely given.
  • When the default privacy settings are biased to push you in a certain direction. For example, on cookie notices where it’s much simpler to accept than object, and can take more clicks to object. Confusing language may also be used to manipulate behaviour.
  • ‘Privacy Zuckering’ is a term used for when you’re tricked into publicly sharing more information about yourself than you really intended to. Named after Facebook’s co-founder and CEO, the practice isn’t unique to them – LinkedIn, for example, has been fined for this.
  • When an email unsubscribe link is hidden within other text.
  • Where more screen space is given to the options the business wants you to take, and less to the options the customer might prefer – for example, emphasising a recurring subscription over a one-off purchase.

Should businesses avoid using dark patterns?

Many will argue YES! Data ethics is right at the heart of the debate. Businesses should ask themselves if what they are doing is fair and reasonable to try to encourage sales and if their practices could be seen as deceptive. Are they doing enough to protect their customers?

Here are just a few reasons to avoid using dark patterns:

  • They annoy your customers and damage their experience of your brand. A survey by HubSpot found 80% of respondents said they had stopped doing business with a company because of a poor customer experience. If your customers are dissatisfied, they can and will switch to another provider.
  • They could lead to higher website abandon rates.
  • Consent gathered by manipulating consumer behaviour is unlikely to meet the GDPR consent requirements, i.e. freely given, specific, informed and unambiguous. So your processing could turn out to be unlawful.

Can these effects happen by mistake?

Dark patterns aren’t always deliberate. They can arise due to loss of focus, short-sightedness, poorly trained AI models, or a number of other factors. However they are more likely to occur when designers are put under pressure to deliver on time for a launch date, particularly when commercial objectives are prioritised above all else.

Cliff Kuang, author of “User Friendly”, says there’s a tendency to make it easy for users to perform the tasks that suit the company’s preferred outcomes. The controls for limiting functionality or privacy controls can sometimes be an afterthought.

What can businesses do to prevent this?

In practice it’s not easy to strike the right balance. We want to provide helpful information to help our website / app users make decisions. It’s likely we want to ‘nudge’ them in the ‘right’ direction. But we should be careful we don’t do this in ways which confuse, mislead or hurry users into doing things they don’t really want to do.

It’s not like the web is unique in this aim (it’s just that we have a ton of data to help us). In supermarkets, you used to always see sweets displayed beside the checkout. A captive queuing audience, and if it didn’t work on you, a clear ‘nudge’ to your kids! But a practice now largely frowned upon.

So how can we do ‘good sales’ online without using manipulation or coercion? It’s all about finding a healthy balance.

Here are a few suggestions which might help your teams:

  • Train your product developers, designers & UX experts – not only in data protection but also in dark patterns and design ethics. In particular, help them recognise dark patterns and understand the negative impacts they can cause. Explain the principles of privacy by design and the conditions for valid consent.
  • Don’t allow business pressure and priorities to dictate over good ethics and privacy by design.
  • Remember data must always be collected and processed fairly and lawfully.

Can dark patterns be regulated?

The proliferation of dark patterns over recent years has largely been unrestricted by regulation.

In the UK & Europe, where UK & EU GDPR are in force, discussions about dark patterns have mostly gravitated around matters relating to consent – where that consent may have been gathered by manipulation and may not meet the required conditions.

In France, the CNIL (France’s data protection authority) has stressed the design of user interfaces is critical to help protect privacy. In 2019 CNIL took the view that consent gathered using dark patterns does not qualify as valid freely given consent.

Fast forward to 2022 and the European Data Protection Board (EDPB) has released guidelines; Dark patterns in social media platform interfaces: How to recognise and avoid them.

These guidelines offer examples, best practices and practical recommendations to designers and users of social media platforms, on how to assess and avoid dark patterns in social media interfaces which contravene GDPR requirements. The guidance also contains useful lessons for all websites and applications.

They remind us we should take into account the principles of fairness, transparency, data minimisation, accountability and purpose limitation, as well as the requirements of data protection by design and by default.

We anticipate EU regulation of dark patterns may soon be coming our way. The International Association of Privacy Professionals (IAPP) recently said, “Privacy and data protection regulators and lawmakers are increasingly focusing their attention on the impacts of so-called ‘dark patterns’ in technical design on user choice, privacy, data protection and commerce.”

Moves to tackle dark patterns in the US

The US Federal Trade Commission has indicated it’s giving serious attention to business use of dark patterns and has issued a complaint against Age of Learning over its use of dark patterns in its ABCmouse service.

Looking at state-led regulations, in California modifications to the CCPA have been proposed to tackle dark patterns. The Colorado Privacy Act also looks to address this topic.

What next?

It’s clear businesses should be mindful of dark patterns and consider taking an ethical stance to protect their customers and website users. Could your website development teams be going too far, intentionally or accidentally? It’s good practice to train website and development teams so they can recognise dark patterns and prevent them occurring, deliberately or by mistake.

What does the IKEA CCTV story tell us?

April 2022

Only set up video surveillance if underpinned by data protection by design and default

What happened?

Following an internal investigation, IKEA was forced to apologise for placing CCTV cameras in the ceiling voids above the staff bathroom facilities in its Peterborough depot. The cameras were discovered and removed in September 2021, but the investigation only concluded in late March 2022.

An IKEA spokesman said:

 “Whilst the intention at the time was to ensure the health and safety of co-workers, we understand the fact that colleagues were filmed unknowingly in these circumstances will have caused real concern, and for this we are sincerely sorry.”

The cameras were installed following “serious concerns about the use of drugs onsite, which, owing to the nature of work carried out at the site, could have very serious consequences for the safety of our co-workers”.

They had been sanctioned following “multiple attempts to address serious concerns about drug use, and the use of false urine samples as a way of disguising it”.

“The cameras placed within the voids were positioned only to record irregular activity in the ceiling voids,” he said.

“They were not intended to, and did not, record footage in the toilet cubicles themselves. However, as a result of ceiling tiles becoming dislodged, two cameras inadvertently recorded footage of the communal areas of two bathrooms for a period of time in 2017. The footage was not viewed at the time and was only recovered as part of these investigations.”

Apology and new ICO guidance

The key question raised by this incident is where to draw the line. When is it inappropriate to set up CCTV? In this instance, the company had concerns about drug misuse – but was that a good enough reason? I think a lot of us intuitively felt the answer was no. 

This apology conveniently coincides with the recent publication of new guidance on video surveillance from the ICO, covering UK GDPR and the Data Protection Act 2018.

This guidance is not based on any changes in the legislation – it’s more an update to provide greater clarity about what you should be considering.

Video surveillance definition

The ICO guidance covers all of the following in a commercial setting:

  • Traditional CCTV
  • ANPR (automatic number plate recognition)
  • Body Worn Video (BWV)
  • Facial Recognition Technology (FRT)
  • Drones
  • Commercially available technologies such as smart doorbells and dashcams (not domestic settings)

Guidance for domestic use is slightly different.

Before setting up your video surveillance activity 

As part of the system setup, it’s important to create a record of the activities taking place. This should be included in the company RoPA (Record of Processing Activities).

As part of this exercise, you need to identify:

  • the purpose of the surveillance
  • the appropriate lawful basis for processing
  • the necessary and proportionate justification for any processing
  • any data-sharing agreements
  • the retention periods for any personal data

 As with any activity relating to the processing of personal data, the organisation should take a data protection by design and default approach when setting up the surveillance system.

Before installing anything, you should also carry out a DPIA (Data Protection Impact Assessment) for any processing that’s likely to result in a high risk for individuals. This includes:

  • Processing special category data
  • Monitoring publicly accessible places on a large scale
  • Monitoring individuals at a workplace

A DPIA means you can identify any key risks as well as potential mitigation for managing these. You should assess whether the surveillance is appropriate in the circumstances.

In an employee context it’s important to consult with the workforce, consider their reasonable expectations and the potential impact on their rights and freedoms. One could speculate that IKEA may not have gone through that exercise.

Introducing video surveillance

Once the risk assessment and RoPA are completed, other areas of consideration include:

  • Surveillance material should be stored securely, with measures to prevent unauthorised access
  • Any data transmitted wirelessly or over the internet should be encrypted to prevent interception
  • How easily data can be exported to fulfil DSARs (Data Subject Access Requests)
  • Ensuring adequate signage is in place to explain the scope of what’s captured and how it’s used.

Additional considerations for Body Worn Video  

  • It’s more intrusive than CCTV so the privacy concerns are greater
  • Whether the data is stored centrally or on individual devices
  • What user access controls are required
  • Establishing device usage logs
  • Whether you want continuous or intermittent recording
  • Whether audio and video should be treated as two separate feeds

In any instance where video surveillance is in use, it’s paramount individuals are aware of the activity and understand how that data is being used.