Data Protection Officers Myth Buster

March 2024

Why we don't ALL need a DPO!

Most small organisations, and many medium-sized businesses, don’t have to appoint a Data Protection Officer. It’s only a mandatory requirement under GDPR, and its British spin-off UK GDPR, if your organisation’s activities meet certain criteria.

However, this doesn’t mean you can’t voluntarily choose to appoint a DPO. It is worth bearing in mind, though, that the role of a Data Protection Officer is clearly defined in law. EU/UK GDPR sets out the position of a DPO, the specific tasks they’re responsible for, and the organisation’s duty to support the DPO in fulfilling their responsibilities.

In the UK there are controversial plans to remove the role from data protection legislation. Whether this comes into effect depends on the progress of the UK Data Protection and Digital Information Bill. I’ll come back to this later on.

The DPO Confusion!

I believe GDPR (perhaps inadvertently, through media coverage and elsewhere) created a degree of confusion about who needed a DPO and what the role actually entails.

It led many businesses to voluntarily appoint one, thinking they really should. It led clients to include ‘do you have a DPO?’ in their due diligence questionnaires, and suppliers to think, ‘oh, we’d better have one.’

Some organisations understood the DPO requirements, others perhaps less so. Many will have correctly informed the ICO (or relevant EU regulator) who their DPO is, others won’t.

Some DPOs will be striving to fulfil their designated tasks, others won’t have the resources to do this, some may be blissfully unaware of the legal obligations their role carries with it.

When is it currently mandatory to have a DPO?

The law tells us you NEED to appoint a DPO if you’re a Controller or a Processor and one or more of the following apply:

  • you’re a public authority or body (except for courts acting in their judicial capacity); or
  • your core activities require large-scale, regular and systematic monitoring of individuals (for example, online behaviour tracking); or
  • your core activities consist of large-scale processing of special categories of data or data relating to criminal convictions and offences.

This raises questions about what’s meant by ‘large-scale’ and what happens if you’re found not to have appointed a DPO when you should have. The truth is many smaller businesses and not-for-profits don’t have to have one.

When it comes to interpreting ‘large-scale’, the European Data Protection Board Guidelines on Data Protection Officers provide some examples.

What are your current options if you don’t fall under mandatory requirements?

The ICO tells us all organisations need to have ‘sufficient staff and resources to meet the organisation’s obligations under the GDPR’. If you don’t fall under the mandatory requirement, you currently have a choice:

  • voluntarily appoint a DPO, or
  • have a team or individual responsible for overseeing data protection, in a proportionate way based on the size of your organisation and the nature of the personal data you handle.

What is the ‘position’ of the DPO?

If you appoint a DPO, UK/EU GDPR tells us they must:

  • report directly to the highest level of management
  • be given the independence and autonomy to perform their tasks
  • be given sufficient resources to be able to perform their tasks
  • be an expert in data protection
  • be involved, in a timely manner, in all issues relating to data protection.

In short, not just anybody can be your DPO.

They can be an internal or external appointment. In some cases a single DPO can be appointed to represent several organisations. They can perform other tasks, but there shouldn’t be a conflict of interest. For example, a Head of Marketing also being the DPO might be an obvious conflict.

A DPO must also be easily accessible to individuals, employees and the ICO. Their contact details should be published (e.g. in your privacy notice – this doesn’t have to be their name) and the ICO should be informed who they are.

What tasks should a DPO fulfil?

The DPO role currently has a formal set of accountabilities and duties, laid down within the GDPR.

  • Duty to inform and advise the organisation and its employees about their obligations under GDPR and other data protection laws. This includes laws in other jurisdictions which are relevant to the organisation’s operations.
  • Duty to monitor the organisation’s compliance with the GDPR and other data protection laws. This includes ensuring suitable data protection policies are in place, training staff (or overseeing this), managing data protection activities, conducting internal reviews & audits and raising awareness of data protection issues & concerns so they can be tackled effectively.
  • Duty to advise on, and to monitor data protection impact assessments (DPIAs).
  • Duty to be the first point of contact for individuals whose data is processed, and for liaison with the ICO.

In short, you can’t appoint a DPO in name only.

It’s also worth noting that if you don’t follow the advice of your DPO, you should document why you didn’t act on their recommendations. A DPO also cannot be dismissed or penalised for performing their duties.

What changes are on the cards in the UK?

The mandatory requirement to appoint a DPO is set to be dropped, IF the Data Protection and Digital Information Bill becomes law (without changes to the current draft text). Instead, the DPDI Bill includes a new requirement to appoint a ‘senior responsible individual’ (SRI) for data protection, who is part of the organisation’s senior management.

It irks me somewhat that the removal of this requirement is cited as a way of easing the legislative burden on small businesses. As noted, many small and medium-sized businesses don’t fall under the current requirement to appoint one anyway.

It seems this role won’t have the strict independence requirements of a DPO under GDPR, and the proposed change raises a number of questions. What happens to existing DPOs? Will they need to be appointed to senior management? Or will a member of the senior management team need to be appointed as SRI and be able to delegate tasks to the existing DPO? What about organisations which operate in Europe and need a DPO under EU GDPR?

Clarity on this would be very welcome. But it remains to be seen whether the DPDI Bill will become law.

Cookies – Consent or Pay?

March 2024

UK and EU data protection regulators are grappling with the compliance of the so-called ‘consent or pay’ model, also known as ‘pay or okay’. Put simply, this model means accessing online content or services is dependent on users either consenting to being tracked for advertising purposes (using cookies or similar technologies), or paying for access without tracking and ads.

This model – and the varying approaches to it – raises questions about whether this can be fair, and whether consent can be ‘freely given’. But it also touches on far more than data protection. It speaks to acceptable business practices, competition models, consumer protection laws, accessible credible journalism and more.

Ad-funded online content and services

‘Consent or pay’ is one of a number of solutions intended to address issues surrounding online advertising and its use of cookies. None of them, it has to be said, are perfect.

This is all coming to a head as data protection regulators in Europe and the UK push for compliance with cookie laws (e.g. PECR in the UK). For example, the UK’s ICO says that, for the necessary consent to be valid, website operators must make sure it’s as easy for people to ‘Reject all’ advertising cookies as it is to ‘Accept all’. More UK companies to be targeted for non-compliant cookies

This causes a problem. As increasing numbers click ‘Reject all’, advertising revenues will take a significant hit. And advertising matters. When a US Senator asked Mark Zuckerberg how Facebook remained free, he famously and simply answered: “We run ads”.

It’s a point that can be made more broadly – we’ve all enjoyed a vast amount of free online content and services because of personalised advertising. Lots of the content and services we routinely access online are ad-funded and rely on a large percentage of users accepting cookies to target these ads. It’s why we can waste time (or relax) playing online games for free.

Online content and service providers have to pay people to create content, run websites, create apps and so on. Commercial businesses also want to turn a profit. The balance lies between the quality, value and integrity of the content they offer, and the advertising revenues which can be gained by personalised advertising.

We’ve all been tracked and served adverts as we browse the internet. Personalised ads mean we have a better chance of being shown ads for products and services which match our interests and needs. Yes, some of this activity is annoying, trades on our habits and may sometimes even be downright harmful. That isn’t to say all of it is problematic; again, this is a question of balance. Regulators have to tread a delicate line between protecting end-users without hampering business from offering us fair products, content and services.

We may not want to be tracked, but online publishers and service providers can’t be expected to provide something for nothing. Businesses aren’t under any obligation to provide us with stuff completely for free.

Which brings us back to the concept of ‘consent or pay’. This concept hit the headlines last year when Meta introduced a payment option to users of Facebook and Instagram in the EU (not in the UK), offering an ad-free experience for a fee. This is currently the subject of complaints by consumer rights groups in Europe. Meanwhile the ‘consent or pay’ approach has been adopted by some of Germany’s major newspapers, and others.

Just pay

Another option is for all content to be put behind a paywall. For example, in the UK you have to subscribe and pay to read online articles published by the Telegraph, The Times and the Spectator magazine. Often a limited number of free articles are provided before you have to pay.

Cookie free solutions

Other cookie-less ad solutions are being rapidly developed, such as contextual advertising. You can read more about the options here: Life after cookies

But with solutions which don’t use third-party tracking cookies still in their infancy, and concerns they won’t be able to produce the same return on investment as cookie-driven advertising, there’s a need to plug the funding gap fast.

‘Consent or pay’ – compliant or not compliant?

In the UK, the ICO hasn’t decreed whether ‘consent or pay’ is a fair approach or not. It’s asked for feedback, and in doing so set out its initial ‘view’.

While stating UK data protection law doesn’t prohibit ‘consent or pay’, the Regulator says organisations must focus on people’s interests, rights and freedoms, making sure people are fully aware of their options in order to make free and informed choices. It’s worth noting that in the EU, ‘consent or pay’ is not prohibited either.

The ICO has set out four areas which need to be addressed when adopting this model, and has asked for feedback on any other factors which should be taken into account.

1. Imbalance of power

The ICO says consent for advertising will not be freely given in situations where people have little or no choice about whether to use a service or not. This could be where the provider is a public service or has a ‘position of market power’.

2. Equivalence of services

If the ad-free service bundles in other additional ‘premium’ extras, this could affect the validity of consent for the ad-funded service.

3. Appropriate fee

Consent for targeted advertising is, in the ICO’s view, unlikely to be freely given if the alternative is an “unreasonably high fee”. The Regulator is suggesting the fee should be set at a level which gives people a realistic choice between the options.

4. Privacy by design

Any consent request choices should be presented equally and fairly. The ICO says people should be given clear, understandable information about each option. Consent for advertising is unlikely to be freely given if people don’t understand how their personal information is going to be used.

Another key consideration is how people can exercise their right to withdraw their consent. The ICO reiterates it must be as easy for people to withdraw their consent as it is to give it. Organisations also need to make sure users can withdraw their consent without detriment. This may be a tricky circle to square.

In all of this there’s an important point – whilst consent must be ‘freely given’ under EU/UK data protection law, this doesn’t mean people must get content and services for free too. The ‘consent or pay’ model essentially offers a choice: pay with your data, or pay with your money.

Etienne Drouard is a Partner at Hogan Lovells (Paris), and his view is: “The very nature of consent is being offered an informed choice. ‘Pay or OK’ (‘Pay or Consent’) is, per se, a valid alternative. It requires a case-by-case and multi-disciplinary analysis. Not a ban.”

Have your say – UK ICO Call for Feedback on Consent or Pay

Time to plan ahead

Fedelma Good, Data Protection and ePrivacy Consultant, and former board member of the UK Data & Marketing Association, urges advertisers and publishers to plan ahead; “To say that online advertising is entering a period of turmoil is putting it mildly. Combining the issues of ‘consent or pay’ with Google’s cookie deprecation plans and you have an environment of uncertainty which advertisers and publishers alike will ignore at their peril. My advice to anyone reading this article is not only to track developments in these areas carefully, but perhaps more importantly to make sure you understand your own circumstances and options and plan ahead.”

Privacy and consumer rights groups

It’s clear privacy and consumer rights groups are pushing for change. Back in 2021 cookie banners were the focus, with the privacy rights group noyb.eu firing off hundreds of complaints to companies for using ‘unlawful banners’. The group developed software to recognise various types of unlawful banners and automatically generate complaints.

Max Schrems, Chair of noyb said: “A whole industry of consultants and designers develop crazy click labyrinths to ensure imaginary consent rates. Frustrating people into clicking ‘okay’ is a clear violation of the GDPR’s principles. Under the law, companies must facilitate users to express their choice and design systems fairly. Companies openly admit that only 3% of all users actually want to accept cookies, but more than 90% can be nudged into clicking the ‘agree’ button.”

Now attention has turned to ‘consent or pay’. Meta’s use of this model has led to eight consumer rights groups filing complaints with different European data protection authorities. The claims focus on concerns that Meta makes it impossible for consumers to know how the processing changes if they choose one option or another. It’s argued the choice given is meaningless.

The fundamental right to conduct business

There’s a complex balance here between people’s fundamental privacy rights and the fundamental right to conduct business. For publishers and other online services, advertising is a crucial element of conducting business. In the distant past, advertising was expensive.

As Sachiko Scheuing, European Privacy Officer at Acxiom and Co-Chairwoman of FEDMA, succinctly puts it: “Advertising used to be a privilege enjoyed by huge brands. Personalised advertisement democratised advertising to SMEs and start-ups.”

The growth of the internet and the advent of personalised advertising technologies have undoubtedly made digital advertising affordable and effective for smaller businesses and not-for-profits.

Well-established brands are more likely to be able to put up a paywall. People already trust their content, or enjoy their service and are prepared to pay. There’s a risk lesser-known brands and start-ups won’t be able to compete.

Is credible journalism under threat?

A Data Protection Officer at one premium UK publisher, who wishes to remain anonymous, fears the drive for cookie compliance risks damaging the ability to produce high quality journalism.

“In the face of unprecedented industry challenges, as more content is consumed on social media platforms, the vital ad revenues that support public interest journalism are under threat from cookie compliance, of all things. It seems like data regulators either don’t understand, or don’t care, about the damage they’re already inflicting on the news media’s ability to invest in journalism.

If publishers comply and implement “reject all” they lose ad revenue through decimated consent rates. If they fight their corner, they face enforcement action. Either way, publishers are emptying already dwindling coffers on legal fees, or buying novel consent or pay solutions.

Unless legislative change comes quickly, or the regulators realise that cookie compliance should not be an enforcement priority, local and national publishers may disappear, just at a time when trusted sources of news have never been more needed.”

Broader societal considerations

There’s a risk as more content hides behind paywalls, we’ll create a world where only those who can afford to pay will be able to access quality, trustworthy content.

‘Consent or Pay’ may be far from perfect, but it does allow people who can’t afford to pay to have equal access to content and online services, albeit they get tracked, while those who have money to spend can choose to pay and go ad-free.

If the consent or pay model fails, and cookie-less solutions fail to deliver a credible alternative, I fear more decent journalism will go completely behind paywalls, if that’s the only option to plug the funding gap.

I am in my mid-50s and can afford to pay. My son, in his late teens, can’t. I worry poor quality journalism, fake news and AI-generated dross might soon be all he and his generation will be able to access. That’s not to say there isn’t some great user-generated content out there. But it does mean having difficult and honest conversations about regulation and the right of businesses to make a profit in an age of politicised, fraudulent and bogus online content.

Life after cookies

March 2024

“The past is a foreign country: they do things differently there.”

I’m pretty certain when LP Hartley wrote this wistful line the changing world of advertising, data and privacy wasn’t foremost in his mind. However, five years from now, when all the current arguments surrounding the elimination of third-party cookies are long gone, that’s likely how we’ll view the universal use (and abuse) of a simple text file and the data it unlocked.

From one perspective, life after third-party cookies is very simple.

The majority of media is transacted without third-party cookies already. Whether by media type, first-party user preferences, device or regulatory mandates, lots of money already moves around without reference to third-party cookies. As the saying goes, “The future is already here, it’s just not very evenly distributed”.

That’s deliberately rather glib. Some sections of the media still rely upon third-party cookies and not every media owner has an obvious opportunity to build a first-party relationship with consumers. The advantages of an identifier that allows streamlining of experience for consumers whilst delivering audience targeting and optimisation for media owners and advertisers haven’t gone away.

When we look to life after third-party cookies, we need to understand the ways replacement identifiers have evolved to ameliorate the worst aspects of cookies, whilst leaving some advantages in place. One leader I interviewed on this topic back in 2020 said “It’s not the fault of the cookie, it’s what you did with the data” and that’s a useful measure to have in mind when looking at any alternative solutions.

Put very simply, the choices for a brand post the third-party cookie are:

  • Use a different identity approach
  • Buy into use of a walled/fenced garden toolset
  • Use another signal to match between media and audience that isn’t anchored directly to the user, such as contextual.

Alternative identity solutions

The advantage of these is they come with some aspect of permissioning and consumer controls – after the cookie arguments and much legislation in the UK, Europe and US, the industry has learnt these tools are critical. However, it remains a moot point as to whether consumers have much knowledge around any consent or legitimate interest options that are put in front of them – the ICO in the UK is currently clamping down on consent practices. More cookie action

Equally moot is whether the majority of consumers are really that bothered. Much consent gathering is viewed by both parties as an unwanted hurdle in a customer journey. The basic requirements for a consumer to know who has their data, for what purposes and for how long remain, but how to achieve the requisite communication and control is still work in progress.

On a global scale these identity solutions revolve either around a “daisy chain,” using hashed email as the ID link, or use a combination of signals from a device with other attributes to have some certainty around individual identity. Any linkage built with a single identity variable risks being fractured by a single consent withdrawal.
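To make the hashed-email ‘daisy chain’ concrete, here’s a minimal sketch of how such an identifier might be derived, assuming Node.js with TypeScript; the function name is illustrative, and real identity solutions layer consent signals, salting policies and more on top.

```typescript
import { createHash } from "crypto";

// Derive a pseudonymous ID from an email address. The hash is
// deterministic: two unrelated parties holding the same email derive
// the same ID and can match records. Because the email is the single
// anchor, one consent withdrawal can fracture the whole linkage.
function hashedEmailId(email: string): string {
  const normalised = email.trim().toLowerCase();
  return createHash("sha256").update(normalised).digest("hex");
}

// Both inputs normalise to the same address, so the IDs match.
console.log(hashedEmailId(" Jane.Doe@Example.com "));
console.log(hashedEmailId("jane.doe@example.com"));
```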

The solutions built on a combination of signals have potentially more durability because they are less dependent on any single signal as the anchor of their fidelity, but many device signals are controlled by browser or operating system vendors, who may obscure or withdraw access to these as Apple has done in recent years.

Walled garden toolset

Much discussion is made around Google’s Privacy Sandbox initiative. This is the ambition from Google to deliver some of the advantages of third-party cookies within the Chrome browser whilst not revealing individual data.

It’s been a much longer journey than envisaged at the start when Google first made their announcement in 2020. Google’s commitment, made under the shadow of the Digital Markets Act, has been that they will not remove third-party cookies from the Chrome ecosystem until the UK competition regulator, the CMA, has approved their plans.

As of March 2024, those closely following the travails of Google, the CMA and the opinions tabled by the IAB Tech Lab (amongst others) would be hard-pressed to give a cast-iron opinion that the current timescale will be met. Privacy and competitive advantage have become inextricably intertwined in these arguments, which is fair. However, slicing through this Gordian Knot was probably not on the CMA or Google’s agenda when they signed up to this process. But that’s about timing, not a permanent stay of execution for the third-party cookie.

Non-user signals

The final approach is to use tools that do not rely on individual level signals. What an individual reads or consumes online says much about them – more than a century of classified advertising is testament to this.

The contextual solutions of 2024 are faster, smarter and better integrated than ever before. They have their downsides – closed-loop measurement is a significant challenge, hampering some of the campaign optimisations that became commonplace in the era of the third-party cookie. And they became commonplace because they were easy and universal; however, paraphrasing the aphorism, what is measured came to matter, when it should really be the other way round.

And here we come into the greatest change that is being ushered in by the gradual demise of third-party cookies. Measuring what actually matters.

In the late 2010s, when cookies were centre stage as the de facto identifier of choice in media and advertising, their invisible synchronisation gave almost universal, if imperfect, coverage. One simple solution, accessible to all.

As we enter 2024, many alternative identifiers struggle to get much beyond 30% coverage. Contextual solutions can deliver 100% coverage but have their own measurement challenges. This has driven a greater interest in a combination of broad business- and commercial objective-based approaches such as Marketing Mix Modelling (MMM) and attribution-based metrics where appropriate. Advances in data management and analysis have enabled MMM to deliver more frequent insights than the traditional annual deep dive, making it a core component of post-cookie media management.
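For a flavour of what MMM involves, below is a minimal sketch of the ‘adstock’ transformation commonly used to model the carry-over effect of ad spend before it’s regressed against sales or another outcome. The decay rate here is an illustrative assumption; real models fit it to the data.

```typescript
// Geometric adstock: each period carries over a decaying fraction of
// accumulated past ad spend, modelling the lingering effect of ads.
function adstock(spend: number[], decay = 0.5): number[] {
  const transformed: number[] = [];
  let carry = 0;
  for (const s of spend) {
    carry = s + decay * carry; // today's spend plus decayed history
    transformed.push(carry);
  }
  return transformed;
}

// Four weeks of spend; the transformed series would feed a regression
// against sales alongside price, seasonality and other drivers.
console.log(adstock([100, 0, 0, 50])); // [100, 50, 25, 62.5]
```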

Underpinning any and all of these solutions is the need for first-party data. Whether to build models for customer targeting, collaborate with media and other partners to access first-party data assets or measure more efficiently and effectively, having a structured, accessible and usable set of tools around first-party data is critical to working in the current landscape of solutions.

The growth of cloud storage solutions takes some of the burden away from making this a reality, but the applications used to understand and activate that data asset are many and various. Taking time and advice to build understanding in this area is critical to prospering after the third-party cookie.

Life beyond the third-party cookie is far from fully defined.

Some of the longer-term privacy and competition elements are not that hard to envisage, but exactly how the next 24 months play out is much, much harder to predict. It’s still very much a work in progress, especially around measurement and optimisation. For the user of data in advertising and marketing it’s essentially “back to basics”.

Your customer data is more valuable than anyone else’s, so capture and hold it carefully. Test many things in a structured way because the future is about combinations. And know what matters to your business and work out how to measure it properly, not just easily.

Guide to identifying and managing data protection risks

March 2024

Data protection risks come in all shapes, sizes and potential severities. We need to be able to identify the risks associated with our use of personal data, manage them and where necessary put appropriate measures in place to tackle them.

How can we make sure good risk management practices are embedded in our organisation? In this short guide we cover the key areas to focus on to make sure you’re alert to, and aware of, risks.

1. Assign roles and responsibilities

Organisations can’t begin to identify and tackle data risks without clear roles and responsibilities covering personal data. Our people need to know who is accountable and responsible for the personal data we hold and the processing we carry out.

Many organisations apply a ‘three lines of defence’ (3LoD) model for risk management. This model is not only used for data protection, but is also effective for handling many other types of risk a business may face.

  • 1st line: The leaders of the business functions that process data are appointed as ‘Information Asset Owners’ and ‘own’ the risks from their function’s data processing activities.
  • 2nd line: Specialists like the DPO, CISO & Legal Counsel support and advise the 1st line, helping them understand their obligations under data laws, so they can make well-informed decisions about how best to tackle any privacy risks. They provide clear procedures for the 1st line to follow.
  • 3rd line: An internal or external audit function provides independent assurance.

3 lines of defence for data protection

For example, risk owners, acting under advice from a Data Protection Officer or Chief Privacy Officer, must make sure appropriate technical and organisational measures are in place to protect the personal data they’re accountable for.

In this model, the second line of defence should never become risk owners. Their role is to provide advice and support to the first line risk owners. They should try to remain independent and not actually make decisions on behalf of their first line colleagues.

2. Decide if you should appoint a DPO

Under the GDPR, a Data Protection Officer’s job is to inform their organisation about its data protection obligations and advise the organisation on risks relating to its processing of personal data.

The law tells us you need to appoint a DPO if your organisation is a Controller or Processor and one or more of the following applies:

  • you are a public authority or body (except for courts acting in their judicial capacity); or
  • your core activities require large-scale, regular and systematic monitoring of individuals (for example, online behaviour tracking); or
  • your core activities consist of large-scale processing of special categories of data or data relating to criminal convictions and offences.

In reality, most small organisations are unlikely to fall under the current UK or EU GDPR requirements to appoint a DPO. In fact, many medium-sized businesses won’t necessarily need a DPO either. Find out more in our DPO myth buster.

3. Conduct data mapping & record keeping

Mapping your data and creating a Record of Processing Activities (RoPA) is widely seen as the best foundation for any successful privacy programme. After all, how can you properly look after people’s data if you don’t have a good handle on what personal data you hold, where it’s located, what purposes it’s used for and how it’s secured?

Even smaller organisations, which may benefit from an exemption from creating a full RoPA, still have basic record keeping responsibilities which should not be overlooked and could still prove very useful. Also see Why is data mapping so crucial?
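To make the record-keeping point concrete, here’s a minimal sketch of what a single RoPA entry might capture, loosely following the Article 30 headings. The field names and example values are illustrative, not a prescribed format.

```typescript
// Illustrative shape for one Record of Processing Activities entry,
// loosely modelled on the Article 30 UK/EU GDPR headings.
interface RopaEntry {
  processingActivity: string;     // what the processing is
  purpose: string;                // why the data is processed
  lawfulBasis: string;            // e.g. "Contract (Article 6(1)(b))"
  dataSubjects: string[];         // whose data it is
  dataCategories: string[];       // what data is involved
  recipients: string[];           // who it is shared with
  internationalTransfers: string; // destination and safeguard, or "None"
  retentionPeriod: string;        // how long it is kept
  securityMeasures: string[];     // how it is protected
}

const payroll: RopaEntry = {
  processingActivity: "Payroll",
  purpose: "Paying staff salaries and statutory deductions",
  lawfulBasis: "Contract (Article 6(1)(b))",
  dataSubjects: ["Employees"],
  dataCategories: ["Name", "Bank details", "Salary"],
  recipients: ["Finance team", "External payroll processor"],
  internationalTransfers: "None",
  retentionPeriod: "6 years after employment ends",
  securityMeasures: ["Encryption at rest", "Role-based access control"],
};
```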

4. Identify processing risks

Under data protection laws, identifying and mitigating risks to individuals (e.g. employees, customers, patients, clients etc) is paramount.

Risks could materialise in the event of a data breach, failure to fulfil individual privacy rights (such as a Data Subject Access Request), complaints, regulatory scrutiny, compensation demands or even class actions.

We should recognise our service and technology providers, who may handle personal data on our behalf, could be a risk area. For example, they might suffer a data breach and our data could be affected, or they might not adhere to contractual requirements.

It’s good to be mindful of commercial and reputational risks too, which can arise from an organisation’s use of personal or non-personal data.

International data transfers are another area where due diligence is required to make sure these transfers are lawful, and if not, to recognise that this represents a risk.

Data-driven marketing activities could also be a concern, if these activities are not fully compliant with ePrivacy rules – such as the UK’s Privacy and Electronic Communications Regulations (known as PECR). Even a single complaint to the ICO could result in a business finding itself facing a PECR fine and the subsequent reputational damage. GDPR, marketing & cookies guide

Data protection practitioners share tips on identifying and assessing risks

5. Risk assessments

In the world of data protection, we have grown used to, or even grown tired of, the requirement to carry out a Data Protection Impact Assessment (DPIA) or a Privacy Impact Assessment (PIA) as it is called in some jurisdictions.

Build in a process of assessing whether projects would benefit from a DPIA, or legally require one. DPIAs are a great way to pinpoint risks and mitigate them early on, before they become a bigger problem.

The value of risk assessments in the world of data protection compliance and Quick Guide to DPIAs

6. Issues arising from poor governance or lack of data ownership

In the real world, the three lines of defence model can come under strain. Sometimes those who should take responsibility as risk owners can have slippery shoulders and refuse to take on the risks.

Some processing doesn’t seem to sit conveniently with any one person or team. Things can fall through the cracks when nobody takes responsibility for making key decisions. On these occasions a DPO might come under pressure to take risk ownership themselves. But should they push back?

Strictly speaking, DPOs shouldn’t ‘own’ data risks; their role is to inform and advise risk owners. GDPR tells us: “data protection officers, whether or not they are an employee of the controller, should be in a position to perform their duties and tasks in an independent manner” (Recital 97).

The ICO, in line with European (EDPB) guidelines, says: “…the DPO cannot hold a position within your organisation that leads him or her to determine the purposes and the means of the processing of personal data. At the same time, the DPO shouldn’t be expected to manage competing objectives that could result in data protection taking a secondary role to business interests.”

So, if the DPO takes ownership of an area of risk, and plays a part in deciding what measures and controls should be put in place, could they be considered to be ‘determining the means of the processing’? This could lead to a conflict of interest when their role requires them to act independently.

Ultimately, accountability rests with the organisation. It’s the organisation which uses the data, collects the data and runs with it. Not the DPO.

7. Maintain an up-to-date risk register

When you identify a new risk it should be logged and tracked on your Data Risk Register. The ICO expects organisations to “identify and manage information risks in an appropriate risk register, which includes clear links between corporate and departmental risk registers and the risk assessment of information assets”.

To do this you’ll need to integrate any outcomes from risk assessments (such as DPIAs) into your project plans, update your risk register(s) and keep these registers under continual review by the DPO or responsible individuals.
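As an illustration, one entry on such a register might look like the sketch below. The fields and the five-point likelihood/impact scale are assumptions for illustration; the ICO doesn’t prescribe a format.

```typescript
// Illustrative shape for one entry on a data risk register.
interface DataRiskEntry {
  id: string;                    // e.g. "DR-2024-007"
  description: string;           // the risk, in plain language
  owner: string;                 // the accountable first-line risk owner
  likelihood: 1 | 2 | 3 | 4 | 5; // assumed five-point scale
  impact: 1 | 2 | 3 | 4 | 5;
  mitigations: string[];         // e.g. actions arising from a DPIA
  status: "open" | "mitigated" | "accepted";
  nextReviewDate: string;        // registers stay under continual review
}

// Risks are commonly prioritised by a simple likelihood x impact score.
const riskScore = (risk: DataRiskEntry): number =>
  risk.likelihood * risk.impact;
```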

Workplace use of facial recognition and fingerprint scanning

February 2024

Just because you can use biometric data, doesn’t mean you should

The use of biometric data is escalating, and recent enforcement action by the UK Information Commissioner’s Office (ICO) concerning its use for workplace monitoring is worth taking note of. We share 12 key steps to consider if you’re thinking of using facial recognition, fingerprint scanning or other biometric systems.

In a personal context, many use fingerprint or iris scans to open their smartphones or laptops. In the world of banking, facial recognition, voice recognition, fingerprint scans or retina recognition have become commonplace for authentication and security purposes. The UK Border Force is set to trial passport-free travel, using facial recognition technology. And increasingly organisations are using biometrics for security or employee monitoring purposes.

Any decision to use biometric systems shouldn’t be taken lightly. If biometric data is being used to identify people, it falls under the definition of Special Category Data under UK GDPR. This means there are specific considerations and requirements which need to be met.

What is biometric data?

Biometric data is special category data whenever you process it for the purpose of uniquely identifying an individual. To quote the ICO:

Personal information is biometric data if it:

  • relates to someone’s physical, physiological or behavioural characteristics (e.g. the way someone types, a person’s voice, fingerprints, or face);
  • has been processed using specific technologies (e.g. an audio recording of someone talking is analysed with specific software to detect qualities like tone, pitch, accents and inflections); and
  • can uniquely identify (recognise) the person it relates to.

Not all biometric data is classified as ‘special category’ data, but it is when you use it, or intend to use it, to uniquely identify someone. It will also be special category data if, for example, you use it to infer other special category data, such as someone’s racial/ethnic origin or information about their health.

Special category data requirements

There are key legal requirements under data protection law when processing special category data. In summary, these comprise:

  • Conduct a Data Protection Impact Assessment.
  • Identify a lawful basis under Article 6 of GDPR.
  • Identify a separate condition for processing under Article 9. There are ten different conditions to choose from.
  • Your lawful basis and special category condition do not need to be linked.
  • Five of the special category conditions require additional safeguards under the UK’s Data Protection Act 2018 (DPA 2018).
  • In many cases you’ll also need an Appropriate Policy Document in place.

Also see the ICO Special Category Data Guidance.

ICO enforcement action on biometric data use in the workplace

The Regulator has ordered Serco Leisure and a number of associated community leisure trusts to stop using Facial Recognition Technology (FRT) and fingerprint scanning to monitor workers’ attendance. They’ve also ordered the destruction of all biometric data which is not legally required to be retained.

The ICO’s investigation found the biometric data of more than 2,000 employees at 38 leisure centres was being unlawfully processed for the purpose of attendance checks and subsequent payment.

Serco Leisure was unable to demonstrate why it was necessary or proportionate to use FRT and fingerprint scanning for this purpose. The ICO noted there are less intrusive means available, such as ID cards and fobs. Serco Leisure said these methods were open to abuse by employees, but no evidence was produced to support this claim.

Crucially, employees were not proactively offered an alternative to having their faces and fingers scanned. It was presented to employees as a requirement in order to get paid.

Serco Leisure conducted a Data Protection Impact Assessment and a Legitimate Interests Assessment, but these fell short when subject to ICO scrutiny.

Lawful basis

Serco Leisure identified their lawful bases as contractual necessity and legitimate interests. However, the Regulator found the following:

1) While recording attendance times may be necessary to fulfil obligations under employment contracts, it doesn’t follow that the processing of biometric data is necessary to achieve this.

2) Legitimate interests will not apply if a controller can reasonably achieve the same results in another less intrusive way.

Special category condition

Initially, Serco Leisure had not identified a condition before implementing biometric systems. It then identified the relevant condition as employment, social security and social protection, citing Section 9 of the Working Time Regulations 1998 and the Employment Rights Act 1996.

The ICO found the special category condition chosen did not cover processing to purely meet contractual employment rights or obligations. Serco Leisure also failed to produce a required Appropriate Policy Document.

Read more about this ICO enforcement action.

12 key steps when considering using biometric data

If you’re considering using biometric systems which will be used to uniquely identify individuals for any purpose, we’d highly recommend taking the following steps:

1. DPIA: Carry out a Data Protection Impact Assessment.

2. Due diligence: Conduct robust due diligence of any provider of biometric systems.

3. Lawful basis: Identify a lawful basis for processing and make sure you meet the requirements of this lawful basis.

4. Special category condition: Identify an appropriate Article 9 condition for processing special category biometric data. The ICO says explicit consent is likely to be most appropriate, but other conditions may apply depending on your circumstances.

5. APD: Produce an Appropriate Policy Document where required under DPA 2018.

6. Accuracy: Make sure biometric systems are sufficiently accurate for your purpose. Test and mitigate for biases. For example, bias and inequality may be caused by a lack of diverse data, bugs and inconsistencies in biometric systems.

7. Safeguards: Consider what safeguards will be necessary to mitigate the risk of discrimination, false acceptance and rejection rates.

8. Transparency: Consider how you will be open and upfront about your use of biometric systems. How will you explain this in a clear, concise, and easy to access way? If you are relying on consent, you’ll need to clearly tell people what they’re consenting to, and consent will need to be freely given. Consent: Getting it Right

9. Privacy rights: Assess how people’s rights will apply, and have processes in place to recognise and respond to individual privacy rights requests.

10. Security: Assess what security measures will be needed by your own organisation and by any biometric system provider.

11. Data retention: Assess how long you will need to keep the biometric data. Have robust procedures in place for deleting it when no longer required.

12. Documentation: Keep evidence of everything!

More detail can be found in the ICO Biometric Data Guidance.

The value of risk assessments in the world of data protection compliance

February 2024

In the world of data protection, we have grown used to, or even grown tired of, the requirement to carry out a Data Protection Impact Assessment (DPIA) or a Privacy Impact Assessment (PIA) as it is called in some jurisdictions.

What are DPIA and PIA?

They are processes that help assess privacy risks to individuals in the collection, use and disclosure of personal information. They identify privacy risks, improve transparency and promote best practice.

In a report by Trilateral Research & Consulting, commissioned by the ICO in 2013, it was recommended that “Ensuring the “buy-in” of the most senior people within the organisation is a necessary pre-condition for a successful integration of privacy risks and PIA into the organisation’s existing processes. PIA processes need to be connected with the development of privacy awareness and culture within the company. Companies need to devise effective communication and training strategies to sustain a change in the mindsets of, and in the development of new skills for, project managers. The organisation needs to deliver a clear message to all project managers that the PIA process must be followed and that PIAs are an organisational requirement. Simplicity is the key to achieve full implementation and adoption of internal PIA guidelines and processes.”

The GDPR and guidance from Data Protection Authorities make it clear projects that may require a DPIA include:

  • A new IT system for storing and accessing personal data;
  • Using existing data for a new and unexpected purpose;
  • A new database acquisition;
  • Corporate restructuring; or
  • Monitoring in the workplace.

A DPIA will become mandatory in the following cases:

  • Systematic and extensive evaluation of personal aspects of natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects on the individual or similarly affect the individual
  • Processing on a large scale of special categories of data or data relating to criminal offences
  • Systematic monitoring of publicly accessible areas on a large scale

Some data protection authorities have published guidance on how and when to use a DPIA effectively. The DPIA process is best broken down into several distinct phases:

  • Identify the need for the project to have a PIA
  • Describe information flows
  • Identify privacy risks
  • Identify privacy solutions
  • Record outcomes and obtain sign-off
  • Integrate outcomes of PIA into project plan

But it is not as simple as set out above.

My experience is that if a DPIA is a risk management tool and is to be considered at the outset of a project, then almost every project or new processing activity needs a pre-DPIA screening process. This at least flags up if a full DPIA is needed and will highlight any areas of risk.

These risks may not only relate to possible infringements of fundamental rights but also to business and reputational risks and infringements of other laws.

Assuming a full DPIA is needed, it is not long in the process before we are assessing the lawful grounds for processing, and if we are relying on Legitimate Interests, we need to carry out a Legitimate Interests Assessment.

Legitimate Interests Assessments (LIAs) – the “balancing test”

An essential part of the concept of Legitimate Interests is the balance between the interests of the Controller and the rights and freedoms of the individual:

‘processing is necessary for the purposes of the legitimate interests pursued by the controller or by a Third Party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of Personal Data, in particular where the data subject is a child.’ GDPR Article 6(1)(f)

If a Controller wishes to rely on Legitimate Interests for processing Personal Data it must carry out an appropriate assessment, called a Legitimate Interests Assessment, or LIA. When carrying out an LIA, the Controller must balance its right to process the Personal Data against the individuals’ data protection rights.

In certain circumstances an LIA may be straightforward. However, under the accountability provisions of the GDPR, the Controller must maintain a written record that it has carried out an LIA and the reasons why it came to the conclusion that the balancing test was met.

International Data Transfer Risk Assessments

In so many projects and data sharing activities we find that personal data is being transferred. The EDPB guidance on risk assessment must be followed, and for Controllers in the UK the ICO guidance applies.

The six steps:

Note that, in order to meet the GDPR’s accountability requirements, each of these steps would need to be documented, and the documentation provided to the supervisory authorities on request.

Step 1: Know your transfers

Understand what data you are transferring outside the EEA and/or UK, including by way of remote access. Perhaps fairly self-evident, but it can be challenging when it comes to onward transfers by processors (to sub-processors, or even sub-sub-processors).

Step 2: Identify your transfer tool(s)

Identify what lawful mechanism you are relying on to transfer the data.

Step 3: Assess whether the transfer mechanism is effective in practice

Now we come to the crucial question: in practice, is the transferred personal data afforded a level of protection in the third country that is essentially equivalent to that guaranteed in the EEA/UK?

The EDPB recommends considering multiple aspects of the third country’s legal system, but in particular the rules granting public authorities rights of access to data. Most countries allow for some form of access for law enforcement and national security, and so the assessment should focus on whether those laws are limited to what is necessary and proportionate in a democratic society.

If, after this assessment, you decide your transfer mechanism ensures an equivalent level of protection, you can stop there. If, however, you decide that the local law does impinge on the protection afforded by your transfer mechanism, you must proceed to Step 4.

Step 4: Adopt supplementary measures

The EDPB separates potential supplementary measures into three categories: technical, contractual, or organisational.

Step 5: Procedural steps if you identified any supplementary measures

This step may lead you to impose regular audits on the importing party.

Step 6: Re-evaluate at appropriate intervals

Monitor developments in the recipient country which could impact your initial assessment. The obligations on the data importer under solutions like the EU Standard Contractual Clauses should help here, as it is required to inform the data exporter of a change of law which impacts its ability to comply with the SCCs.

AI, analytics and new technologies

The EU AI Act is intended to apply to any business that places AI on, or uses AI in, the EU market, and so is extra-territorial in its reach. More than that, the AI Act will integrate with and co-exist alongside existing legislation such as the General Data Protection Regulation, the Digital Services Act and the draft Cyber Resilience Act.

The use of new technologies such as smart devices, internet of things and artificial intelligence, coupled with the economic and humanitarian uses of big data analytics, means that there has to be a balance between the acquisition of personal data and the rights of citizens.

Beyond GDPR, PECR, Digital Services Act and so on, assessing your supply chain is more important now than ever, particularly as we rely so much on international suppliers and distributors as well as physical and digital supply chains. We have learned to address issues in the supply chain, such as bribery, competition, modern slavery, and intellectual property; however, more recently we have had to consider geopolitical issues, import and export controls, and other compliance and ethics issues. Now in 2024, we must also consider environmental, sustainability, cyber resilience, digital safety, and accessibility of physical products and digital services that we provide.

Harmful Design in Digital Markets

A position paper on Harmful Design in Digital Markets by the ICO and the CMA is targeted at firms that deploy design practices in digital markets (such as on websites or other online services), as well as product and UX designers that create online interfaces for firms. It provides:

  • an overview of how design choices online can lead to data protection, consumer and competition harms, and the relevant laws regulated by the ICO and CMA that could be infringed by these practices; and
  • practical examples of design practices that are potentially harmful under our respective regimes when they are used to present choices about personal data processing. These practices are “harmful nudges and sludge”, “confirmshaming”, “biased framing”, “bundled consent” and “default settings”.

We now need to assess how we manage Data Protection by Design and how we respect consumer choices. Yet another assessment to minimise potential risks!

Nearly six years on from the General Data Protection Regulation, we now face a growing list of assessments that we need to carry out: Legitimate Interest Assessments, Transfer Risk Assessments, Privacy by Design Assessments, Accessibility Assessments, Children’s Code compliance, and now Online Safety, AI and Cyber Resilience… and the list goes on. Have we reached the point where we need an Assessments Handbook that incorporates these various assessments I have outlined and ensures they integrate with each organisation’s overall risk management policy?

Used appropriately, I find that these assessments really do manage risk and not only protect the rights of individuals but also protect the business from reputational and brand damage. Sometimes, the use of a risk assessment at the start of or even at an early stage of a project, can act as a “Stop” sign and cause the project team and compliance team to say “just because we can doesn’t always mean we should”.

Data protection by design and default – what does that mean really?

January 2024

It’s an obligation in GDPR, but it can still be confusing to get to the heart of what ‘data protection by design and default’ really means, and what you need to do as a result.

It’s often used as a proxy for ‘do a DPIA’ but it’s so much more than that. I’ve seen it described as building or baking in data protection from the start. That’s more accurate but still leaves the question “yes but what does that actually mean?”

What does GDPR say?

It helps to go back to the source: not just Article 25(1) and (2) GDPR but also Recital 78. These tell you what the legislators were concerned about, namely implementing the principles. They specifically call out the following.

  • Data minimisation
  • Purpose limitation
  • Retention
  • Wide disclosure / accessibility of the data
  • Transparency
  • Making sure individuals know what’s going on
  • Allowing organisations to set up and improve security measures.

Both the article and the recital mention pseudonymisation as one way to implement the data minimisation principle. It’s an example though, not a mandatory requirement.
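As a simple illustration of pseudonymisation in code, a keyed hash can replace a direct identifier with a token that is meaningless without the separately held key. This is a minimal sketch of one possible approach (assuming Node.js), not the only way to pseudonymise.

```typescript
import { createHmac } from "crypto";

// Replace a direct identifier with a keyed-hash token. Unlike plain
// hashing, re-identification requires the secret key, which is held
// separately from the pseudonymised dataset: the "additional
// information kept separately" at the heart of pseudonymisation.
function pseudonymise(identifier: string, secretKey: string): string {
  return createHmac("sha256", secretKey).update(identifier).digest("hex");
}

// Downstream systems (analytics, test environments) only see the token.
const token = pseudonymise("customer-12345", process.env.PSEUDO_KEY ?? "dev-only-key");
console.log(token);
```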

The end of Recital 78 seems to be directed at those producing software and systems, as well as public tender processes. My take on this is that the legislators were keen to make sure that organisations needing to comply with GDPR didn’t fall at the first hurdle due to a lack of thought for data protection compliance in systems and software. Their expectation is clear for those designing these things for organisations, and I think their hope was that it would lead to software and systems producers having to raise their game to stay competitive.

How to put data protection by design and default into practice

To put this obligation into practice, people often jump to PETs (privacy-enhancing technologies, not Charlie the golden retriever). There have been loads of studies, reports and articles on a range of specific technical measures, too many to go into here. In reality, and in my experience, these more sophisticated methods and third-party software solutions tend to only be available to those with deep pockets and an internal tech team.

The reality for many organisations is to embed DP compliance into processes and policies. When I work with clients on this, I tend to start by looking at how things get done in their organisation.

  • How do you go from “I’ve had a great idea” to “it’s gone live today”?
  • Where are the gatekeepers and / or the decision makers for ideas and projects?
  • How do you manage and record the different steps, phases or tasks needed?
  • How do you identify who needs to be involved, and then involve them?

Once you understand the process and who’s involved, look at what already exists. Many organisations use certain tools to project manage, to assign tasks to people, to report and manage bugs or other issues and so on. Many organisations already have a form or process to suggest ideas, ask for approvals, and to get items on the to do list.

I have found an effective approach is to build on what is already there. No-one will thank you for introducing a new 4-page form on data protection stuff. No-one will fill it in. Where are the points in the processes, or on the forms, or in the tool template to add the DP stuff? Can you add in questions, checklists, templates, approval points and the like?

Examples of data protection by design and default solutions

Speaking of checklists and templates, one of the most effective DP by design ‘solutions’ was where the organisation created a ‘standard build template’ that incorporated key DP principles. So anytime anything needed building or expanding, the build requirements already included things like specific security measures, elements of data minimisation and controls for the end users.
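By way of illustration, such a template might look like the hypothetical sketch below: a shared set of privacy-protective defaults that every new build starts from, so teams justify deviations rather than opt in to privacy. All field names and values here are assumptions reconstructed from the description above.

```typescript
// Hypothetical "standard build template" of privacy-protective defaults,
// applied whenever anything new is built or expanded.
const standardBuildTemplate = {
  security: {
    encryptionAtRest: true,
    encryptionInTransit: true,
    accessModel: "role-based",      // least-privilege access by default
  },
  dataMinimisation: {
    everyFieldMapsToAPurpose: true, // each data field must be justified
    defaultRetentionDays: 365,      // assumed value; set per purpose
  },
  endUserControls: {
    analyticsOptIn: false,          // off until the user opts in
    marketingOptIn: false,
    dataExportAvailable: true,      // supports access/portability rights
  },
} as const;
```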

With another organisation there was a two-part solution. One part was for those developing the idea and deciding on the data collection. This involved adapting their project documentation to include key elements from a DPIA and a way to check each data field requested was actually necessary, as well as identify where it might be sensitive or need greater care. The other part was a checklist for those implementing the new thing that set out some core requirements such as the ability for the end user to take certain actions with their data, and what settings or controls had to be off by default. This approach also encouraged better communication between the two groups, as they worked together on which solutions would best implement the core requirements.

Think of the people

Getting to a practical implementation of DP by design and default involves taking a people-centred approach. Both in terms of collaborating with the relevant people internally, as well as thinking about impacts on and risks to the individuals whose personal data you are doing things with.

You also need to consider your organisation’s position on DP compliance. The law might want everyone to do the gold standard, but that’s not the reality we live in. How far you go in implementing DP by design and what your measures look like will be influenced by things like your organisation’s risk appetite, their approach to risk management, any relevant vision or principles they have, or any frameworks or internal committees in place covering compliance, ethics, and so on.

Your colleagues can also be a great asset. Your business development people will have information on what corporate customers are asking for, your end user support people will have intel on what the users complain about. How far do these overlap and where do they conflict?

For example, I have seen the same transparency issue in multiple organisations where both corporate customers and end users want to understand how the product or software works. The corporate customers need to do compliance diligence, and the end users want to know if they can trust the organisation with their data. Producing something for both audiences on how it all works not only helps implement the transparency point of Article 25, it also checks whether different parts of the organisation have the same understanding of how it works, and flags up discrepancies.

Sometimes, doing something because it leads to a good consumer experience or higher levels of satisfaction gets you to the right place, and can be an easier sell than labelling it a ‘DP compliance requirement’.

Some organisations have the resources to do user research and surveys, and the results of these can be very useful to the DP people. If you can work with and understand the objectives and the pain points of colleagues (such as those managing infrastructure, information security, compliance, risk management, customer support and even the executive team), you’ll be in a good place to see where you slide in your DP stuff, and piggyback on other initiatives.

This is one area of GDPR that takes an outcome-based approach. And the joy is that it’s scalable and adaptable to each situation, and to each organisation’s resources and capabilities. So by focusing on the required outcomes, and putting people at the centre, achieving data protection by design and default can be a lot easier than it first appears.

Quick Guide to UK GDPR, Marketing and Cookies

January 2024

How UK GDPR and PECR go hand-in-hand

Most have heard of GDPR. However, data protection law existed way before this new kid arrived on the block in 2018. And let’s not forget in the UK, GDPR has an equally important cousin called PECR.

The UK’s Privacy and Electronic Communications Regulations (PECR) have been around since 2003, before the days of smartphones and apps. Organisations need to consider both UK GDPR and PECR when it comes to marketing and cookies.

Why marketers need to pay attention

There are more fines issued by the Information Commissioner’s Office (ICO) for falling foul of the PECR marketing rules than there are under UK GDPR. Under UK data reform plans, the amount the Regulator can fine under PECR could be set to increase substantially to a maximum of around £17 million. Currently the maximum fine under PECR is £500k. So it’s worth taking notice.

This is a quick overview, and we’d encourage you to check the ICO’s detailed marketing guidance and cookie guidance.

What’s the difference between UK GDPR and PECR?

In a nutshell…

UK GDPR

✓ Tells us how we should handle personal data – information which could directly or indirectly identify someone.
✓ Sets out requirements organisations need to meet and their obligations.
✓ Provides us with seven core data protection principles which need to be considered whenever we handle personal data for any purpose, including marketing.
✓ Defines the legal standard for consent, which is relevant for direct marketing.
✓ Gives people privacy rights, including an absolute right to object to direct marketing.

One of the principles is that processing of personal data must be lawful, fair and transparent. This includes making sure we have a lawful basis for our activities.

PECR

✓ Sets out specific rules for marketing to UK citizens, for example by email, text message or telemarketing calls.
✓ Sets out specific rules when using cookies and similar technologies (such as scripts, tracking pixels and plugins).

PECR is derived from an EU directive, and EU countries have their own equivalent regulations which, whilst covering similar areas, may have different requirements when marketing to their citizens.

We’ve written about the specific rules for email marketing and telemarketing here:
UK email marketing rules
UK telemarketing rules
The ‘soft opt-in’ – are you getting it right

How do UK GDPR and PECR work together?

Direct marketing

Marketers need to consider the core principles of UK GDPR when handling people’s personal information. Furthermore, they need to have a lawful basis for each data activity. Of the six lawful bases, two are appropriate for direct marketing activities: Consent and Legitimate Interests.

Consent: PECR tells us, for certain electronic marketing activity, we have to get people’s prior consent. UK GDPR tells us the standards we need to meet for this consent to be valid. Consent – Getting it right

Legitimate interests: If the types of marketing we conduct don’t require consent under PECR, we may choose to request consent anyway, or we could rely on legitimate interests. For example, marketing to business contacts rather than consumers.

Under GDPR, we need to be sure to balance our legitimate interests with the rights and interests of the people whose personal information we are using – i.e. the people we want to market to. ICO Legitimate Interests Guidance 

What about cookies?

PECR requires opt-in consent for most cookies or similar tech, regardless of whether they collect personal data or not. And we’re told this consent must meet the UK GDPR standards.

In simple terms, the rules are:

✓ Notify your website/app users about your use of cookies or similar technologies and provide adequate transparent information about the purposes they are used for.
✓ Consent is required for use of cookies, except for a narrow exemption for those which are ‘strictly necessary’ (also known as ‘essential’ cookies).
✓ Users need to be able to give or decline consent before the cookies are dropped on their device and should be given options to manage their consents at any time (e.g. opt-out after initially giving consent).
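As a rough sketch of these rules in front-end code, the example below only writes a non-essential cookie once the user has actively opted in. The category names and cookie details are illustrative; a real implementation would also persist preferences and surface a ‘manage cookies’ control for later withdrawal.

```typescript
type CookieCategory = "strictlyNecessary" | "analytics" | "advertising";

// Consent state. Everything non-essential defaults to "not consented";
// only the strictly necessary category is exempt from consent.
const consent: Record<CookieCategory, boolean> = {
  strictlyNecessary: true,
  analytics: false,
  advertising: false,
};

// Gate: no non-essential cookie is written before opt-in consent.
function setCookie(name: string, value: string, category: CookieCategory): void {
  if (!consent[category]) return;
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; max-age=31536000`;
}

// Called only when the user clicks an explicit "accept analytics" control,
// presented no less prominently than the option to decline.
function onAcceptAnalytics(): void {
  consent.analytics = true;
  setCookie("site_analytics_id", crypto.randomUUID(), "analytics");
}
```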

Changes are on the cards

The Data Protection and Digital Information Bill is currently progressing through Parliament. It’s not law yet, but if passed will usher in some changes to both UK GDPR and PECR.

The core data protection principles aren’t going away, nor are the lawful bases under UK GDPR, nor the rules for email marketing, text messages and telemarketing. However, one proposal could see charities being able to take advantage of the soft opt-in for email/text marketing. What could the marketing ‘soft opt-in’ mean for charities?