Guide to identifying and managing data protection risks

March 2024

Data protection risks come in all shapes, sizes and potential severities. We need to be able to identify the risks associated with our use of personal data, manage them and where necessary put appropriate measures in place to tackle them.

How can we make sure good risk management practices are embedded in our organisation? In this short guide we cover the key areas to focus on to make sure you’re alert to, and aware of, risks.

1. Assign roles and responsibilities

Organisations can’t begin to identify and tackle data risks without clear roles and responsibilities covering personal data. Our people need to know who is accountable and responsible for the personal data we hold and the processing we carry out.

Many organisations apply a ‘three lines of defence’ (3LoD) model for risk management. This model is not only used for data protection, but is also effective for handling many other types of risk a business may face.

  • 1st line: The leaders of the business functions that process data are appointed as ‘Information Asset Owners’ and they ‘own’ the risks from their function’s data processing activities.
  • 2nd line: Specialists such as the DPO, CISO and Legal Counsel support and advise the 1st line, helping them understand their obligations under data protection laws so they can make well-informed decisions about how best to tackle any privacy risks. They provide clear procedures for the 1st line to follow.
  • 3rd line: An internal or external audit function provides independent assurance.

3 lines of defence for data protection

For example, risk owners, acting under advice from a Data Protection Officer or Chief Privacy Officer, must make sure appropriate technical and organisational measures are in place to protect the personal data they’re accountable for.

In this model, the second line of defence should never become risk owners. Their role is to provide advice and support to the first line risk owners. They should try to remain independent and not actually make decisions on behalf of their first line colleagues.

2. Decide if you should appoint a DPO

Under the GDPR, a Data Protection Officer’s job is to inform their organisation about data protection obligations and advise the organisation on risks relating to its processing of personal data.

The law tells us you need to appoint a DPO if your organisation is a Controller or Processor and one or more of the following applies:

  • you are a public authority or body (except for courts acting in their judicial capacity); or
  • your core activities require large-scale, regular and systematic monitoring of individuals (for example, online behaviour tracking); or
  • your core activities consist of large-scale processing of special categories of data or data relating to criminal convictions and offences.

In reality, most small organisations are unlikely to fall under the current UK or EU GDPR requirements to appoint a DPO. In fact, many medium-sized businesses won’t necessarily need a DPO either. Find out more in our DPO myth buster.

3. Conduct data mapping & record keeping

Mapping your data and creating a Record of Processing Activities (RoPA) is widely seen as the best foundation for any successful privacy programme. After all, how can you properly look after people’s data if you don’t have a good handle on what personal data you hold, where it’s located, what purposes it’s used for and how it’s secured?

Even smaller organisations, which may benefit from an exemption from creating a full RoPA, still have basic record keeping responsibilities which should not be overlooked and could still prove very useful. Also see Why is data mapping so crucial?
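
As a rough illustration, a single RoPA entry might be modelled like the sketch below (plain Python; the field names are ours, loosely mirroring what Article 30 expects controllers to record, not a prescribed format):

```python
# A minimal sketch of one Record of Processing Activities (RoPA) entry.
# Field names are illustrative; they broadly mirror the Article 30 items
# a controller records for each processing activity.
from dataclasses import dataclass, field

@dataclass
class RopaEntry:
    activity: str                    # e.g. "Payroll"
    purpose: str                     # why the data is processed
    lawful_basis: str                # Article 6 basis relied upon
    data_categories: list[str]       # what personal data is involved
    data_subjects: list[str]         # whose data (employees, customers...)
    recipients: list[str]            # who it is disclosed to
    systems: list[str]               # where it is located
    retention_period: str            # how long it is kept
    security_measures: list[str]     # how it is secured
    international_transfers: list[str] = field(default_factory=list)

payroll = RopaEntry(
    activity="Payroll",
    purpose="Paying staff and meeting tax obligations",
    lawful_basis="Legal obligation / contract",
    data_categories=["name", "bank details", "salary", "NI number"],
    data_subjects=["employees"],
    recipients=["payroll bureau", "HMRC"],
    systems=["HR system", "payroll SaaS"],
    retention_period="6 years after employment ends",
    security_measures=["access controls", "encryption at rest"],
)
```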

4. Identify processing risks

Under data protection laws, identifying and mitigating risks to individuals (e.g. employees, customers, patients, clients etc) is paramount.

Risks could materialise in the event of a data breach, failure to fulfil individual privacy rights (such as a Data Subject Access Request), complaints, regulatory scrutiny, compensation demands or even class actions.

We should recognise that our service and technology providers, who may handle personal data on our behalf, could be a risk area. For example, they might suffer a data breach which affects our data, or they might not adhere to contractual requirements.

It’s also good to be mindful of the commercial and reputational risks which can arise from an organisation’s use of personal or non-personal data.

International data transfers are another area where due diligence is required to make sure these transfers are lawful and, if they’re not, to recognise that this represents a risk.

Data-driven marketing activities could also be a concern, if these activities are not fully compliant with ePrivacy rules – such as the UK’s Privacy and Electronic Communications Regulations (known as PECR). Even a single complaint to the ICO could result in a business facing a PECR fine and the subsequent reputational damage. GDPR, marketing & cookies guide

Data protection practitioners share tips on identifying and assessing risks

5. Risk assessments

In the world of data protection, we have grown used to, or even grown tired of, the requirement to carry out a Data Protection Impact Assessment (DPIA) or a Privacy Impact Assessment (PIA), as it is called in some jurisdictions.

Build in a process for assessing whether projects would benefit from a DPIA, or legally require one. DPIAs are a great way to pinpoint risks and mitigate them early on, before they become a bigger problem.

The value of risk assessments in the world of data protection compliance and Quick Guide to DPIAs

6. Issues arising from poor governance or lack of data ownership

In the real world, the three lines of defence model can come under strain. Sometimes those who should take responsibility as risk owners can have slippery shoulders and refuse to take on the risks.

Some processing doesn’t seem to sit conveniently with any one person or team. Things can fall through the cracks when nobody takes responsibility for making key decisions. On these occasions a DPO might come under pressure to take risk ownership themselves. But should they push back?

Strictly speaking, DPOs shouldn’t ‘own’ data risks; their role is to inform and advise risk owners. GDPR tells us: “data protection officers, whether or not they are an employee of the controller, should be in a position to perform their duties and tasks in an independent manner” (Recital 97).

The ICO, in line with European (EDPB) guidelines, says: “…the DPO cannot hold a position within your organisation that leads him or her to determine the purposes and the means of the processing of personal data. At the same time, the DPO shouldn’t be expected to manage competing objectives that could result in data protection taking a secondary role to business interests.”

So, if the DPO takes ownership of an area of risk, and plays a part in deciding what measures and controls should be put in place, could they be considered to be ‘determining the means of the processing’? This could create a conflict of interest when their role requires them to act independently.

Ultimately, accountability rests with the organisation. It’s the organisation which uses the data, collects the data and runs with it. Not the DPO.

7. Maintain an up-to-date risk register

When you identify a new risk it should be logged and tracked on your Data Risk Register. The ICO expects organisations to “identify and manage information risks in an appropriate risk register, which includes clear links between corporate and departmental risk registers and the risk assessment of information assets.”

To do this you’ll need to integrate any outcomes from risk assessments (such as DPIAs) into your project plans, update your risk register(s) and keep these registers under continual review by the DPO or responsible individuals.
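
As an illustration, one entry in such a register might look like the following sketch (the field names and the simple scoring approach are our assumptions, not an ICO-prescribed format; the owner should be a first-line Information Asset Owner, with DPIA outcomes feeding the mitigations):

```python
# Illustrative shape of a data risk register entry.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    owner: str                 # first-line risk owner, not the DPO
    likelihood: int            # e.g. 1 (rare) to 5 (almost certain)
    impact: int                # e.g. 1 (negligible) to 5 (severe)
    mitigations: list[str]
    source: str                # e.g. "DPIA-2024-07"
    next_review: date

    @property
    def score(self) -> int:
        """Simple likelihood x impact rating for prioritisation."""
        return self.likelihood * self.impact

risk = RiskEntry(
    risk_id="DR-014",
    description="Supplier holds customer data with weak access controls",
    owner="Head of Operations",
    likelihood=3,
    impact=4,
    mitigations=["contractual security clauses", "annual supplier audit"],
    source="DPIA-2024-07",
    next_review=date(2025, 1, 31),
)
print(risk.score)  # 12 - feeds the register's prioritisation
```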

Workplace use of facial recognition and fingerprint scanning

February 2024

Just because you can use biometric data, doesn’t mean you should

The use of biometric data is escalating, and recent enforcement action by the UK Information Commissioner’s Office (ICO) concerning its use for workplace monitoring is worth taking note of. We share 12 key considerations if you’re considering using facial recognition, fingerprint scanning or other biometric systems.

In a personal context, many use fingerprint or iris scans to open their smartphones or laptops. In the world of banking, facial recognition, voice recognition, fingerprint scans or retina recognition have become commonplace for authentication and security purposes. The UK Border Force is set to trial passport-free travel, using facial recognition technology. And increasingly, organisations are using biometrics for security or employee monitoring purposes.

Any decision to use biometric systems shouldn’t be taken lightly. If biometric data is being used to identify people, it falls under the definition of Special Category Data under UK GDPR. This means there are specific considerations and requirements which need to be met.

What is biometric data?

Biometric data is special category data whenever you process it for the purpose of uniquely identifying an individual. To quote the ICO:

Personal information is biometric data if it:

  • relates to someone’s physical, physiological or behavioural characteristics (e.g. the way someone types, a person’s voice, fingerprints, or face);
  • has been processed using specific technologies (e.g. an audio recording of someone talking is analysed with specific software to detect qualities like tone, pitch, accents and inflections); and
  • can uniquely identify (recognise) the person it relates to.

Not all biometric data is classified as ‘special category’ data, but it is when you use it, or intend to use it, to uniquely identify someone. It will also be special category data if, for example, you use it to infer other special category data, such as someone’s racial/ethnic origin or information about their health.

Special category data requirements

There are key legal requirements under data protection law when processing special category data. In summary:

  • Conduct a Data Protection Impact Assessment.
  • Identify a lawful basis under Article 6 of GDPR.
  • Identify a separate condition for processing under Article 9. There are ten different conditions to choose from.
  • Your lawful basis and special category condition do not need to be linked.
  • Five of the special category conditions require additional safeguards under the UK’s Data Protection Act 2018 (DPA 2018).
  • In many cases you’ll also need an Appropriate Policy Document in place.

Also see the ICO Special Category Data Guidance.

ICO enforcement action on biometric data use in the workplace

The Regulator has ordered Serco Leisure and a number of associated community leisure trusts to stop using Facial Recognition Technology (FRT) and fingerprint scanning to monitor workers’ attendance. They’ve also ordered the destruction of all biometric data which is not legally required to be retained.

The ICO’s investigation found the biometric data of more than 2,000 employees at 38 leisure centres was being unlawfully processed for the purpose of attendance checks and subsequent payment.

Serco Leisure was unable to demonstrate why it was necessary or proportionate to use FRT and fingerprint scanning for this purpose. The ICO noted there are less intrusive means available, such as ID cards and fobs. Serco Leisure said these methods were open to abuse by employees, but no evidence was produced to support this claim.

Crucially, employees were not proactively offered an alternative to having their faces and fingers scanned. It was presented to employees as a requirement in order to get paid.

Serco Leisure conducted a Data Protection Impact Assessment and a Legitimate Interests Assessment, but these fell short when subject to ICO scrutiny.

Lawful basis

Serco Leisure identified their lawful bases as contractual necessity and legitimate interests. However, the Regulator found the following:

1) While recording attendance times may be necessary to fulfil obligations under employment contracts, it doesn’t follow that the processing of biometric data is necessary to achieve this.

2) Legitimate interests will not apply if a controller can reasonably achieve the same results in another less intrusive way.

Special category condition

Initially, Serco Leisure had not identified a condition before implementing the biometric systems. It later settled on the employment, social security and social protection condition, citing Section 9 of the Working Time Regulations 1998 and the Employment Rights Act 1996.

The ICO found the special category condition chosen did not cover processing to purely meet contractual employment rights or obligations. Serco Leisure also failed to produce a required Appropriate Policy Document.

Read more about this ICO enforcement action.

12 key steps when considering using biometric data

If you’re considering using biometric systems to uniquely identify individuals for any purpose, we’d highly recommend taking the following steps:

1. DPIA: Carry out a Data Protection Impact Assessment.

2. Due diligence: Conduct robust due diligence of any provider of biometric systems.

3. Lawful basis: Identify a lawful basis for processing and make sure you meet the requirements of this lawful basis.

4. Special category condition: Identify an appropriate Article 9 condition for processing special category biometric data. The ICO says explicit consent is likely to be most appropriate, but other conditions may apply depending on your circumstances.

5. APD: Produce an Appropriate Policy Document where required under DPA 2018.

6. Accuracy: Make sure biometric systems are sufficiently accurate for your purpose. Test and mitigate for biases. For example, bias and inequality may be caused by a lack of diverse data, bugs and inconsistencies in biometric systems.

7. Safeguards: Consider what safeguards will be necessary to mitigate the risks of discrimination and of false acceptances and rejections.

8. Transparency: Consider how you will be open and upfront about your use of biometric systems. How will you explain this in a clear, concise, and easy to access way? If you are relying on consent, you’ll need to clearly tell people what they’re consenting to, and consent will need to be freely given. Consent: Getting it Right

9. Privacy rights: Assess how people’s rights will apply, and have processes in place to recognise and respond to individual privacy rights requests.

10. Security: Assess what security measures will be needed by your own organisation and by any biometric system provider.

11. Data retention: Assess how long you will need to keep the biometric data. Have robust procedures in place for deleting it when no longer required.

12. Documentation: Keep evidence of everything!

More detail can be found in the ICO Biometric Data Guidance.

The value of risk assessments in the world of data protection compliance

February 2024

In the world of data protection, we have grown used to, or even grown tired of, the requirement to carry out a Data Protection Impact Assessment (DPIA) or a Privacy Impact Assessment (PIA), as it is called in some jurisdictions.

What are DPIAs and PIAs?

They are processes that help assess privacy risks to individuals in the collection, use and disclosure of personal information. They identify privacy risks, improve transparency and promote best practice.

In a report by Trilateral Research & Consulting, commissioned by the ICO in 2013, it was recommended that “Ensuring the ‘buy-in’ of the most senior people within the organisation is a necessary pre-condition for a successful integration of privacy risks and PIA into the organisation’s existing processes. PIA processes need to be connected with the development of privacy awareness and culture within the company. Companies need to devise effective communication and training strategies to sustain a change in the mindsets of, and in the development of new skills for, project managers. The organisation needs to deliver a clear message to all project managers that the PIA process must be followed and that PIAs are an organisational requirement. Simplicity is the key to achieve full implementation and adoption of internal PIA guidelines and processes.”

The GDPR and guidance from Data Protection Authorities make it clear projects that may require a DPIA include:

  • A new IT system for storing and accessing personal data;
  • Using existing data for a new and unexpected purpose;
  • A new database acquisition;
  • Corporate restructuring;
  • Monitoring in the workplace.

A DPIA is mandatory in the following cases:

  • Systematic and extensive evaluation of personal aspects of natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects on the individual or similarly affect the individual
  • Processing on a large scale of special categories of data or data relating to criminal offences
  • Systematic monitoring of publicly accessible areas on a large scale

Some data protection authorities have published guidance on how and when to use a DPIA effectively. The DPIA process is best broken down into several distinct phases:

  • Identify the need for the project to have a PIA
  • Describe information flows
  • Identify privacy risks
  • Identify privacy solutions
  • Record outcomes and obtain sign-off
  • Integrate outcomes of PIA into project plan

But it is not as simple as set out above.

My experience is that if a DPIA is a risk management tool and is to be considered at the outset of a project, then almost every project or new processing activity needs a pre-DPIA screening process. This at least flags up if a full DPIA is needed and will highlight any areas of risk.

These risks may not only relate to possible infringements of fundamental rights but also to business and reputational risks and infringements of other laws.
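
A pre-DPIA screen can be as simple as a short set of yes/no questions that trigger a full DPIA if any answer is ‘yes’. Here’s a rough sketch (the questions loosely paraphrase the mandatory triggers and project types listed above; a real checklist would follow your regulator’s published criteria):

```python
# A rough pre-DPIA screening sketch. Question wording is illustrative.
SCREENING_QUESTIONS = {
    "automated_decisions": "Systematic evaluation/profiling with legal or similar effects?",
    "special_category_scale": "Large-scale special category or criminal offence data?",
    "public_monitoring": "Large-scale systematic monitoring of public areas?",
    "new_purpose": "Using existing data for a new, unexpected purpose?",
    "workplace_monitoring": "Monitoring employees in the workplace?",
}

def needs_full_dpia(answers: dict[str, bool]) -> bool:
    """Flag a full DPIA if any screening question is answered 'yes'."""
    return any(answers.get(q, False) for q in SCREENING_QUESTIONS)

# Example: a workplace monitoring project triggers a full DPIA.
print(needs_full_dpia({"workplace_monitoring": True}))  # True
```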

Assuming a full DPIA is needed, it is not long into the process before we are assessing the lawful grounds for processing; if we are relying on Legitimate Interests, we then need to do a Legitimate Interests Assessment.

Legitimate Interests Assessments (LIAs) – the “balancing test”

An essential part of the concept of Legitimate Interests is the balance between the interests of the Controller and the rights and freedoms of the individual:

‘processing is necessary for the purposes of the legitimate interests pursued by the controller or by a Third Party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of Personal Data, in particular where the data subject is a child.’ GDPR Article 6(1)(f)

If a Controller wishes to rely on Legitimate Interests for processing Personal Data it must carry out an appropriate assessment, called a Legitimate Interests Assessment, or LIA. When carrying out an LIA, the Controller must balance its right to process the Personal Data against the individuals’ data protection rights.

In certain circumstances an LIA may be straightforward. However, under the accountability provisions of the GDPR, the Controller must maintain a written record that it has carried out an LIA and the reasons why it concluded that the balancing test was met.
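
As an illustration, that written record could be as simple as the following sketch, structured around the ICO’s suggested three-part test of purpose, necessity and balancing (the field names are ours, not a prescribed template):

```python
# A minimal sketch of recording an LIA outcome for accountability purposes.
from dataclasses import dataclass
from datetime import date

@dataclass
class LiaRecord:
    processing: str
    purpose_test: str       # what legitimate interest is pursued?
    necessity_test: str     # is the processing necessary to achieve it?
    balancing_test: str     # do individuals' rights override the interest?
    outcome: str            # e.g. "Legitimate interests can be relied upon"
    assessed_by: str
    assessed_on: date

lia = LiaRecord(
    processing="B2B marketing emails to existing business contacts",
    purpose_test="Promoting relevant services to business customers",
    necessity_test="No less intrusive way to reach these contacts",
    balancing_test="Contacts would reasonably expect this; easy opt-out offered",
    outcome="Legitimate interests can be relied upon",
    assessed_by="Data Protection Lead",
    assessed_on=date(2024, 2, 1),
)
```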

International Data Transfer Risk Assessments

In so many projects and data sharing activities we find that personal data is being transferred. Where it is, the EDPB guidance on risk assessment must be followed and, for Controllers in the UK, the ICO guidance applies. There are six steps:

Note that, in order to meet the GDPR’s accountability requirements, each of these steps would need to be documented, and the documentation provided to the supervisory authorities on request.

Step 1: Know your transfers

Understand what data you are transferring outside the EEA and/or UK, including by way of remote access. This is perhaps fairly self-evident, but it can be challenging when it comes to onward transfers by processors (to sub-processors, or even sub-sub-processors).

Step 2: Identify your transfer tool(s)

Identify what lawful mechanism you are relying on to transfer the data.

Step 3: Assess whether the transfer mechanism is effective in practice

Now we come to the crucial question: in practice, is the transferred personal data afforded a level of protection in the third country that is essentially equivalent to that guaranteed in the EEA/UK?

The EDPB recommends considering multiple aspects of the third country’s legal system, but in particular the rules granting public authorities rights of access to data. Most countries allow for some form of access for law enforcement and national security, and so the assessment should focus on whether those laws are limited to what is necessary and proportionate in a democratic society.

If, after this assessment, you decide your transfer mechanism ensures an equivalent level of protection, you can stop there. If, however, you decide that the local law does impinge on the protection afforded by your transfer mechanism, you must proceed to Step 4.

Step 4: Adopt supplementary measures

The EDPB separates potential supplementary measures into three categories: technical, contractual, or organisational.

Step 5: Procedural steps if you identified any supplementary measures

This step may lead you to impose regular audits on the importing party.

Step 6: Re-evaluate at appropriate intervals

Monitor developments in the recipient country which could impact your initial assessment. The obligations on the data importer under solutions like the EU Standard Contractual Clauses should help here, as it is required to inform the data exporter of a change of law which impacts its ability to comply with the SCCs.

AI, analytics and new technologies

The EU AI Act is intended to apply to any business that places AI on, or uses AI in, the EU market, and so is extra-territorial in its reach. More than that, the AI Act will integrate with and co-exist alongside existing legislation such as the General Data Protection Regulation, the Digital Services Act and the draft Cyber Resilience Act.

The use of new technologies such as smart devices, internet of things and artificial intelligence, coupled with the economic and humanitarian uses of big data analytics, means that there has to be a balance between the acquisition of personal data and the rights of citizens.

Beyond GDPR, PECR, Digital Services Act and so on, assessing your supply chain is more important now than ever, particularly as we rely so much on international suppliers and distributors as well as physical and digital supply chains. We have learned to address issues in the supply chain, such as bribery, competition, modern slavery, and intellectual property; however, more recently we have had to consider geopolitical issues, import and export controls, and other compliance and ethics issues. Now in 2024, we must also consider environmental, sustainability, cyber resilience, digital safety, and accessibility of physical products and digital services that we provide.

Harmful Design in Digital Markets

A position paper on Harmful Design in Digital Markets by the ICO and the CMA is targeted at firms that deploy design practices in digital markets (such as on websites or other online services), as well as the product and UX designers who create online interfaces for firms. It provides:

  • an overview of how design choices online can lead to data protection, consumer and competition harms, and the relevant laws regulated by the ICO and CMA that could be infringed by these practices; and
  • practical examples of design practices that are potentially harmful under our respective regimes when they are used to present choices about personal data processing. These practices are “harmful nudges and sludge”, “confirmshaming”, “biased framing”, “bundled consent” and “default settings”.

It now falls to us to assess how we manage Data Protection by Design and how we respect consumer choices. Yet another assessment to minimise potential risks!

Nearly six years on from the General Data Protection Regulation, we now face a growing list of assessments we need to carry out: Legitimate Interests Assessments, Transfer Risk Assessments, Privacy by Design Assessments, Accessibility Assessments, Children’s Code compliance, and now Online Safety, AI and Cyber Resilience… and the list goes on. Have we reached the point where we need an Assessments Handbook that incorporates these various assessments and ensures they integrate with each organisation’s overall risk management policy?

Used appropriately, I find that these assessments really do manage risk and not only protect the rights of individuals but also protect the business from reputational and brand damage. Sometimes, the use of a risk assessment at the start of, or even at an early stage of, a project can act as a “Stop” sign and cause the project team and compliance team to say “just because we can doesn’t always mean we should”.

Data protection by design and default – what does that mean really?

January 2024

It’s an obligation in GDPR, but it can still be confusing to get to the heart of what ‘data protection by design and default’ really means, and what you need to do as a result.

It’s often used as a proxy for ‘do a DPIA’ but it’s so much more than that. I’ve seen it described as building or baking in data protection from the start. That’s more accurate but still leaves the question “yes but what does that actually mean?”

What does GDPR say?

It helps to go back to the source, not just article 25(1) and (2) GDPR but also recital 78. These tell you what the legislators were concerned about, namely implementing the principles. They specifically call out the following.

  • Data minimisation
  • Purpose limitation
  • Retention
  • Wide disclosure / accessibility of the data
  • Transparency
  • Making sure individuals know what’s going on
  • Allowing organisations to set up and improve security measures.

Both the article and the recital mention pseudonymisation as one way to implement the data minimisation principle. It’s an example though, not a mandatory requirement.

The end of recital 78 seems to be directed at those producing software and systems, as well as public tender processes. My take on this is that the legislators were keen to make sure that organisations needing to comply with GDPR didn’t fall at the first hurdle due to a lack of thought for data protection compliance in systems and software. Their expectation is clear for those designing these things for organisations, and I think their hope was that it would lead to software and systems producers having to raise their game to stay competitive.

How to put data protection by design and default into practice

To put this obligation into practice, people often jump to PETs (privacy-enhancing technologies, not Charlie the golden retriever). There have been loads of studies, reports and articles on a range of specific technical measures, too many to go into here. In reality, and in my experience, these more sophisticated methods and third-party software solutions tend to only be available to those with deep pockets and an internal tech team.

The reality for many organisations is to embed DP compliance into processes and policies. When I work with clients on this, I tend to start by looking at how things get done in their organisation.

  • How do you go from “I’ve had a great idea” to “it’s gone live today”?
  • Where are the gatekeepers and / or the decision makers for ideas and projects?
  • How do you manage and record the different steps, phases or tasks needed?
  • How do you identify who needs to be involved, and then involve them?

Once you understand the process and who’s involved, look at what already exists. Many organisations use certain tools to project manage, to assign tasks to people, to report and manage bugs or other issues and so on. Many organisations already have a form or process to suggest ideas, ask for approvals, and to get items on the to do list.

I have found an effective approach is to build on what is already there. No-one will thank you for introducing a new 4-page form on data protection stuff. No-one will fill it in. Where are the points in the processes, or on the forms, or in the tool template to add the DP stuff? Can you add in questions, checklists, templates, approval points and the like?

Examples of data protection by design and default solutions

Speaking of checklists and templates, one of the most effective DP by design ‘solutions’ I’ve seen was where an organisation created a ‘standard build template’ that incorporated key DP principles. So anytime anything needed building or expanding, the build requirements already included things like specific security measures, elements of data minimisation and controls for the end users.
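
As a hypothetical illustration (every key here is invented for the example), such a template might be expressed as configuration that every new build inherits, so the DP-by-design baseline exists before anyone writes project-specific code:

```python
# A hypothetical 'standard build template' expressed as configuration.
# The keys are illustrative; the idea is that each new build starts from
# baseline DP-by-design requirements rather than from scratch.
STANDARD_BUILD_TEMPLATE = {
    "security": {
        "encryption_at_rest": True,
        "encryption_in_transit": True,
        "role_based_access": True,
    },
    "data_minimisation": {
        "collect_only_listed_fields": True,
        "free_text_fields_allowed": False,   # free text invites data creep
    },
    "user_controls": {
        "marketing_opt_ins_default": False,  # off by default
        "self_service_deletion": True,
    },
    "retention": {"default_retention_days": 365},
}
```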

With another organisation there was a two-part solution. One part was for those developing the idea and deciding on the data collection. This involved adapting their project documentation to include key elements from a DPIA and a way to check each data field requested was actually necessary, as well as identify where it might be sensitive or need greater care. The other part was a checklist for those implementing the new thing that set out some core requirements such as the ability for the end user to take certain actions with their data, and what settings or controls had to be off by default. This approach also encouraged better communication between the two groups, as they worked together on which solutions would best implement the core requirements.

Think of the people

Getting to a practical implementation of DP by design and default involves taking a people-centred approach. Both in terms of collaborating with the relevant people internally, as well as thinking about impacts on and risks to the individuals whose personal data you are doing things with.

You also need to consider your organisation’s position on DP compliance. The law might want everyone to do the gold standard, but that’s not the reality we live in. How far you go in implementing DP by design and what your measures look like will be influenced by things like your organisation’s risk appetite, their approach to risk management, any relevant vision or principles they have, or any frameworks or internal committees in place covering compliance, ethics, and so on.

Your colleagues can also be a great asset. Your business development people will have information on what corporate customers are asking for, your end user support people will have intel on what the users complain about. How far do these overlap and where do they conflict?

For example, I have seen the same transparency issue in multiple organisations where both corporate customers and end users want to understand how the product or software works. The corporate customers need to do compliance diligence, and the end users want to know if they can trust the organisation with their data. Producing something for both audiences on how it all works not only helps implement the transparency point of article 25, it also checks if different parts of the organisation have the same understanding of how it works, and flags up discrepancies.

Sometimes, doing something because it leads to a good consumer experience or higher levels of satisfaction gets you to the right place, and can be an easier sell than labelling it a ‘DP compliance requirement’.

Some organisations have the resources to do user research and surveys, and the results of these can be very useful to the DP people. If you can work with and understand the objectives and the pain points of colleagues (such as those managing infrastructure, information security, compliance, risk management, customer support and even the executive team), you’ll be in a good place to see where you slide in your DP stuff, and piggyback on other initiatives.

This is one area of GDPR that takes an outcome-based approach. And the joy is that it’s scalable and adaptable to each situation, and to each organisation’s resources and capabilities. So by focusing on the required outcomes, and putting people at the centre, achieving data protection by design and default can be a lot easier than it first appears.

Quick Guide to UK GDPR, Marketing and Cookies

January 2024

How UK GDPR and PECR go hand-in-hand

Most have heard of GDPR. However, data protection law existed way before this new kid arrived on the block in 2018. And let’s not forget in the UK, GDPR has an equally important cousin called PECR.

The UK’s Privacy and Electronic Communications Regulations (PECR) have been around since 2003, before the days of smartphones and apps. Organisations need to consider both UK GDPR and PECR when it comes to marketing and cookies.

Why marketers need to pay attention

There are more fines issued by the Information Commissioner’s Office (ICO) for falling foul of the PECR marketing rules than there are under UK GDPR. Under UK data reform plans, the amount the Regulator can fine under PECR could be set to increase substantially to a maximum of around £17 million. Currently the maximum fine under PECR is £500k. So it’s worth taking notice.

This is a quick overview, and we’d encourage you to check the ICO’s detailed marketing guidance and cookie guidance.

What’s the difference between UK GDPR and PECR?

In a nutshell…

UK GDPR

✓ Tells us how we should handle personal data – information which could directly or indirectly identify someone.
✓ Sets out requirements organisations need to meet and their obligations.
✓ Provides us with seven core data protection principles which need to be considered whenever we handle personal data for any purpose, including marketing.
✓ Defines the legal standard for consent, which is relevant for direct marketing.
✓ Gives people privacy rights, including an absolute right to object to direct marketing.

One of the principles is that processing of personal data must be lawful, fair and transparent. This includes making sure we have a lawful basis for our activities.

PECR

✓ Sets out specific rules for marketing to UK citizens, for example by email, text message or telemarketing calls.
✓ Sets out specific rules when using cookies and similar technologies (such as scripts, tracking pixels and plugins).

PECR is derived from an EU directive, and EU countries have their own equivalent regulations which, whilst covering similar areas, may have different requirements when marketing to their citizens.

We’ve written about the specific rules for email marketing and telemarketing here:
UK email marketing rules
UK telemarketing rules
The ‘soft opt-in’ – are you getting it right

How do UK GDPR and PECR work together?

Direct marketing

Marketers need to consider the core principles of UK GDPR when handling people’s personal information. Furthermore, they need to have a lawful basis for each data activity. Of the six lawful bases, two are appropriate for direct marketing activities: Consent and Legitimate Interests.

Consent: PECR tells us, for certain electronic marketing activity, we have to get people’s prior consent. UK GDPR tells us the standards we need to meet for this consent to be valid. Consent – Getting it right

Legitimate interests: If the types of marketing we conduct don’t require consent under PECR, we may choose to request consent anyway, or we could rely on legitimate interests. For example, marketing to business contacts rather than consumers.

Under GDPR, we need to be sure to balance our legitimate interests with the rights and interests of the people whose personal information we are using – i.e. the people we want to market to. ICO Legitimate Interests Guidance 

What about cookies?

PECR requires opt-in consent for most cookies or similar tech, regardless of whether they collect personal data or not. And we’re told this consent must meet the UK GDPR standards.

In simple terms, the rules are (see the sketch after this list):

✓ Notify your website/app users about your use of cookies or similar technologies and provide adequate transparent information about what purposes they are used for.
✓ Consent is required for the use of cookies, except for a narrow exclusion for those which are ‘strictly necessary’ (also known as ‘essential’ cookies).
✓ Users need to be able to give or decline consent before the cookies are dropped on their device and should be given options to manage their consents at any time (e.g. opt-out after initially giving consent).
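
Here’s a minimal server-side sketch of the consent-before-cookies rule (plain Python; the cookie names, categories and function are hypothetical illustrations, not a prescribed implementation):

```python
# Only set cookies the user has consented to, or that are strictly necessary.
STRICTLY_NECESSARY = {"session_id", "csrf_token"}

def cookies_to_set(requested: dict[str, str], consent: set[str]) -> dict[str, str]:
    """Filter requested cookies against the user's consent choices.

    `requested` maps cookie names to their category (e.g. "analytics").
    `consent` holds the categories the user has opted into.
    """
    allowed = {}
    for name, category in requested.items():
        if name in STRICTLY_NECESSARY or category in consent:
            allowed[name] = category
    return allowed

# No analytics consent given: the analytics cookie is never set.
print(cookies_to_set({"session_id": "essential", "_ga": "analytics"}, set()))
```

In practice many sites handle this client-side through a consent management platform, but the principle is the same: nothing non-essential is dropped on the device until valid consent exists.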

Our data, tech and the app-ocalypse

January 2024

In 2013, after Edward Snowden leaked thousands of secret files, the Kremlin’s security bureau did something interesting. They swapped computers for manual typewriters. Russian spooks reasoned hard copies were easier to protect than digital files. Furthermore, hackers might be able to infiltrate sensitive systems, but the old-school art of safe-cracking? It seemed to have fallen by the wayside.

As I get older, I’m beginning to think the Kremlin might have been onto something. Why?

Maybe it’s a generational issue. I’m Gen ‘X’. I grew up without mobile phones or the internet, but became familiar with the technology as it developed from the 1990s onwards. I enjoy technology. I respect it. I’m also, however, sceptical in a way many of my Millennial and Gen ‘Z’ colleagues may not be.

For me it boils down to two concerns – trust and over-reliance. Given how there’s now an app for everything, I have to ask – is the App-ocalypse Nigh? What happens to the increasingly personal and intrusive levels of personal data entered into these ‘everything apps’?

Just because data’s aggregated into zeros and ones, it doesn’t mean it’s ‘tidy’. In fact, I suspect too many digital ‘data warehouses’ resemble the hoarder’s houses you might have seen on daytime TV, with stuff scattered everywhere.

It’s not just apps – the endless requirement to populate online forms is relentless. Now I hear more ‘frictionless facial recognition’ is planned at airports in the UK and elsewhere. And it’s making me uneasy. Technology is wonderful for creating efficiencies and streamlining processes. In my world alone, I see how clever privacy technology solutions ease the burden of data protection compliance.

But is technology always wonderful? Why am I uneasy?

An example – I needed to renew my driving licence. I went on to the Government website and duly entered a great deal of sensitive data. This included my passport number, my mother’s maiden name, my date of birth, my home address and my National Insurance number. This started me thinking… ‘How secure is this platform? What are the Government really doing to prevent my data falling into malicious hands?’

At the other end of the scale, I needed to reschedule a beautician’s appointment (much needed after eating my body weight in chocolate and cheese over Christmas). My call was met by a recorded message. I duly pressed ‘2’ to cancel/change an appointment. I was then informed I must (yes, they did say must) download the app to cancel/change appointments. A look at the app’s privacy information didn’t fill me with confidence, so I rang again, selecting ‘3’ for all other enquiries. After ten minutes of listening to promotions about fantastic rejuvenating treatments, I gave up. What if I prefer not to be forced to register and share my personal details via your app? I’m getting a face treatment, not applying for a pilot’s licence!

At this point, a shout out to the Kennel Club’s customer service. I took out their insurance for my puppy this year. They’re great. I’ve had to call twice, and each time a prompt pick-up from a lovely human. Somewhat of a rarity these days.

I recently read EasyPark Group, the owner of brands like RingGo and Park Mobile, were hacked. Yes, like many others I have RingGo. I was forced to download the app to use a station car park – there was no choice. I also have other parking apps. Oh the joys of standing in the rain in a car park trying to download yet another parking app. Handing over my data to yet another company. Will these companies protect it? What security measures and controls do they have? Did they conduct a DPIA? Was it outsourced to an app developer, possibly outside the UK/EU? Did they do any due diligence?

As well as my fears around data, I also worry for the significant minority disenfranchised by the widescale embrace of what my colleague Simon calls the ‘Mobilical Cord’. It’s so very true – I’m unable to properly function without my smartphone implanted in my paw. I use it to access the internet, my emails, messages, banking and so on. It’s also a crucial part of our company security – to authenticate I am really me.

The 2021 UK Census showed 90% of households had a home computer. 93% had access to a mobile phone. I suspect it’s higher now, but it’s still not everyone. As of 2023, according to research by Statista, 98% of 16-24 year olds have a smartphone. However, this drops to 80% for the over-65s. The less tech-savvy, and particularly the elderly, are being left behind. My mother is 84. I got her a smartphone, but she hates it and doesn’t understand it. Apps? An enigma. She’s also terrified of online scams, knowing how the elderly are disproportionately targeted.

So, now we also face the prospect of passport-free travel. UK Border Force is set to trial e-gate schemes similar to those rolled out in Dubai and Australia. These negate the need to show a passport, instead using facial recognition technology (FRT).

Phil Douglas, the Director General of Border Force has said “I’d like to see a world of completely frictionless borders where you don’t really need a passport. The technology already exists to support that.” He added: “In the future, you won’t need a passport – you’ll just need biometrics.”

According to The Times, the biometric details of British and Irish travellers are already held, having been collected during the passport application process. What does Phil Douglas feel about our personal biometrics being potentially harvested by countries with dodgy human rights records?

Too many people will shrug – an end to lengthy queues? Yes please. But who controls my facial map? How will it be used? Will it be shared? How will it be kept secure? Facial recognition tech also raises issues of bias in algorithms, and the potential for mistakes, with serious consequences.

I suspect, one day, there’ll be the kind of disaster one sees in movies, where the Internet collapses for a significant period. What then? I also wonder if, eventually, ambulance-chasers will identify companies using apps to disproportionately harvest data – and playing fast and loose with the safeguards set up to protect us. Will this become the next big Payment Protection Insurance (PPI)-style business opportunity?

What I do know is businesses who put all their eggs in one basket without contingencies, or fail to anticipate risk, are those likeliest to suffer when the app-ocalypse (however it manifests itself) is nigh!

Now, did I mention AI…?

The three foundations of good data governance

January 2024

People, processes and technologies

Creating a clear data governance strategy is crucial to making sure data is handled in line with your organisation’s aims and industry best practice.

Data governance is often thought of as the management process by which an organisation protects its data assets and ensures compliance with data laws, such as GDPR. But it’s far broader than compliance. It’s a holistic approach to data and should have people at its very heart. People with defined roles, responsibilities, processes and technologies which help them make sure data (not just personal data) is properly looked after and wisely used throughout its lifecycle.

How sophisticated your organisation’s approach needs to be will depend on the nature and size of your business, the sensitivity of the data you hold, the relationships you have with business partners, and customer or client expectations.

Benefits of good data governance

There are many benefits this activity can bring, including:

  • Minimising risks to the business, your employees, customers and suppliers
  • Giving your people clarity around expected behaviours and best practices
  • Embedding compliance requirements

A strong data governance approach can also help an organisation to make the most of their data assets, improve customer experience and benefits, and leverage competitive advantage.

Data governance – where to start?

There are three foundational elements which underpin successful data governance – People, Processes and Technologies.

Data governance people processes technologies

People

Engaging with stakeholders across the organisation to establish and embed key roles and responsibilities for data governance.

Many organisations look to establish a ‘Data Ownership Model’ which recognises data governance is an organisational responsibility which requires close collaboration across different roles and levels, including the delegation of specific responsibilities for data activities.

Here are some examples of roles you may wish to consider:

  • Data strategy lead – such as Chief Data Officer / Chief Digital Officer
  • Data protection lead – such as Data Protection Officer (DPO), if you have one
  • Information security lead – such as Chief Information Security Officer (CISO) or Chief Technology Officer
  • Information asset owners (or data owners) – leaders of business functions / teams which collect and/or use personal data for particular purposes. Such as HR, Marketing & Sales, Finance, Operations, and so on.
  • Data specialists – heavy users of complex datasets, such as data analysts and data scientists.
  • System owners – the people who manage the key systems which hold personal data, such as IT managers.

Processes

Think about all the processes, policies, operating procedures and specialist training provided to guide your employees and contractors, enabling them to handle data in line with your business expectations as well as to comply with the law.

Without these in place and regularly updated, your people can’t possibly act in the ways you want and expect them to.

In my experience, success comes from keeping these items concise, and as relevant and engaging as possible. They can easily be forgotten or put in the ‘maybe later’ pile… a little time and effort can really pay dividends!

Technologies

The technologies which underpin all data activities across the data lifecycle: for example, your HR, marketing & CRM, accounting and other operational systems you use regularly. Data governance requires those responsible for adopting technologies to put standards and procedures in place which ensure appropriate:

  • Accessibility and availability standards
  • Data accuracy, integrity and quality management
  • Privacy and security

Looking at privacy technology in particular, the solutions available have really progressed in recent years in terms of both their capability and ease of use. They give DPOs, and others with an interest in data protection, clear visibility of where the risks lie, help to prioritise those risks, and offer pointers to relevant solutions. They can also help provide clear visibility and oversight to the senior leadership team.

The ‘Accountability Principle’

Data governance goes hand in hand with accountability – one of the core principles under GDPR. This requires organisations to be ready to demonstrate the measures and controls they have in place to protect personal data and, in particular, to show HOW they comply with the other data protection principles.

Appropriate measures, controls and records need to be in place to evidence accountability. For example, a Supervisory Authority (such as the ICO) may expect organisations to have:

  • Data protection programme, with clear data ownership & governance and regular reporting up to business leaders
  • Training and policies to guide staff
  • Records of data mapping exercises and processing reviews, such as an Information Asset Register and Record of Processing Activities
  • Risk assessments, such as Data Protection Impact Assessments and Legitimate Interests Assessments
  • Procedures for handling of individual privacy rights and data breaches
  • Contracts in place between organisations which include the relevant data protection clauses, including arrangement for restricted international data transfers
  • Data sharing agreements

Ready to get started?

If you’re keen to reap the benefits of improved compliance and reduced risk to the business, the first and crucial step is getting buy-in from senior leadership and a commitment from key stakeholders, so I’d suggest you kick-off by seeking their support.

Managing the right to erasure

November 2023

Ten tips to tackle erasure requests

What data should you erase? When can you refuse? And, on a technical level, how do you make sure everything is actually deleted, especially if held on multiple systems?

Fulfilling people’s privacy rights isn’t easy, and GDPR’s Right to Erasure can raise complex challenges. Add to this the tight timeframe to action requests, or bulk requests from third parties, and it can turn into a bit of a minefield.

We’ve got some tips to help you navigate around the quicksand. But first, a little refresher on what the Right to Erasure means.

What is the Right to Erasure?

As the name suggests, a person has the right to request their personal data is erased from your systems if you no longer have a compelling reason to keep it.

You may hear it referred to as the ‘Right to be Forgotten’. This stems from a decision in 2014 by the Court of Justice of the EU which recognised the right of EU citizens to request the removal of links to personal information on search engines.

GDPR took this ruling a step further and enshrined a broader right into EU law, taking it beyond the context of publicly available personal information. Under the post-Brexit spin-off, UK GDPR, the right remains the same.

People have the right to submit an erasure request to any organisation operating within the UK/EU or organisations in other territories which handle the data of UK/EU citizens. It’s not an absolute right, and there are circumstances in which it can be denied.

When does the right to erasure apply?

You need to fulfil a person’s request for erasure in the following circumstances:

  • It’s no longer necessary for the organisation to hold onto the personal data of an individual for the purposes it was collected
  • They gave you their consent and now wish to withdraw this consent
  • You’re relying on legitimate interests as your lawful basis to handle their data, they object to this, and you have no compelling and overriding legitimate interest to continue
  • They gave you their details for direct marketing purposes and no longer want to receive communications. (You are permitted to keep a minimised record on a suppression file).
  • You’re fulfilling a legal ruling or legal obligation to erase the data
  • You’re processing a child’s data to provide information society services (i.e. online services)
  • You’re handling their data unlawfully

The last point, a general ‘catch-all’, is a tricky one to balance, as there may be many reasons why personal data could be processed unlawfully.

For example, the handling of personal data might be considered unlawful if it’s inaccurate, or if necessary information about your processing has not been provided in a privacy notice.

When can you refuse an erasure request?

The right to erasure doesn’t apply when you’re holding personal data for the following reasons:

  • to exercise the right of freedom of expression and information
  • to comply with a legal obligation
  • for the establishment or defence of legal claims
  • to perform a task carried out in the public interest or when exercising an organisation’s official authority
  • for public interest in the area of public health
  • for archiving purposes in the public interest, scientific or historical research or statistical purposes (where erasure would make this impossible or seriously impair your objectives)

Under UK GDPR there are two specific circumstances where the right to erasure doesn’t apply to special category data. Further information about these exemptions can be found in the ICO erasure guidance.

It’s also important to consider whether you have a contract in place with the individual, which requires the processing of their data, and the impact on this of the erasure request.

There may also be grounds for refusing a request where you can justify that it’s manifestly unfounded or excessive. See the ICO’s guidance on exemptions.

If you refuse to comply with a request, you must explain why and tell the individual they have the right to raise a complaint with the ICO (or other relevant supervisory authority).

There are many variables at play; each request needs to be assessed on a case-by-case basis. This is where the devil really is in the detail.

10 tips for handling erasure requests

1. Awareness

Someone can request their data is erased, either in writing or verbally. They might make this request to anyone in your organisation. So, everyone needs to know how to recognise this type of request, what to do if they receive one, who to direct it to and so on.
Awareness campaigns, training and easy-to-understand policies all play their part in getting key messages across to all staff.

2. Identity verification

You clearly don’t want to delete someone’s details unless you are absolutely sure they are who they say they are. Sometimes this will be obvious, but in other circumstances you’ll need to ask for verification of identity. However, if the deletion would have no negative impact on the individual, for example they are only on your marketing lists, you may feel asking for proof of identification is unnecessary.

When asking for proof of ID, only ask for the minimum amount of information necessary to confirm identity. Don’t accumulate more information, such as copies of passports or driving licences, unless it’s justified, and remember to delete these too!

If a request is received via another organisation, make sure this third party definitely has the authority to act on behalf of the individual in question. The responsibility lies with the third party to provide any necessary evidence to prove this. Bear this in mind if you’re the third party!

3. Technical measures

Your customers might think deleting their data is as simple as clicking a button. If only it were that easy!

It can be difficult to locate, identify, assess and properly delete data – especially if it’s held on many different systems. You might hold records in emails, on backed-up systems, in the cloud… all must be deleted.

Make sure your systems, applications and databases allow the easy identification and deletion of individuals. You may also need to assess the implications of deletion; it can impact on how different software works.

This is where the concept of Data Protection by Design really supports businesses. If from the outset of any new project or initiative you make sure you factor in managing individual data rights, it will make life much easier in the long run.

It’s worth reiterating – the right to erasure extends to deleting data from backups. However, the ICO recognises the inherent difficulties here and says, “the key issue is to put the backup data ‘beyond use’, even if it cannot be immediately overwritten.”
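
As a sketch of this fan-out problem, an erasure routine needs to touch every store and record what happened in each, flagging anything (like immutable backups) that can only be put ‘beyond use’ for now. The system names and flags below are hypothetical:

```python
# An illustrative erasure orchestrator: one request fans out to every
# store; backups that can't be immediately purged are flagged 'beyond use'.
def erase_subject(subject_id: str, systems: dict) -> dict[str, str]:
    results = {}
    for name, system in systems.items():
        if system.get("supports_hard_delete"):
            # live systems: delete the records outright
            results[name] = "deleted"
        else:
            # e.g. immutable backups: suppress the data so it is never
            # restored or used, pending overwrite on the backup cycle
            results[name] = "flagged beyond use"
    return results

outcome = erase_subject("cust-4821", {
    "CRM": {"supports_hard_delete": True},
    "email platform": {"supports_hard_delete": True},
    "nightly backups": {"supports_hard_delete": False},
})
print(outcome)
```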

4. Timeline

You don’t have long to comply with requests, so keeping track of time is crucial. The request must be actioned without ‘undue delay,’ and in any case within one calendar month of receiving it.

You may be able to extend this by up to two months if it’s particularly complex. If you need to extend, make sure you tell the individual before the first month is up, giving them clear reasons for the delay – reasons you must be ready to explain to the regulator if necessary.
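
If it helps, the deadline arithmetic can be automated. A small sketch using the third-party python-dateutil library (our choice, not a requirement), whose relativedelta conveniently caps at month end when the following month has no corresponding date:

```python
# Track the one-calendar-month response deadline (plus up to two further
# months for complex cases). Requires: pip install python-dateutil
from datetime import date
from dateutil.relativedelta import relativedelta

def erasure_deadline(received: date, extended: bool = False) -> date:
    """One calendar month to respond; up to two further months if complex."""
    months = 3 if extended else 1
    return received + relativedelta(months=months)

print(erasure_deadline(date(2024, 1, 31)))  # 2024-02-29 (end-of-month handling)
```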

5. Who else holds their data?

The right to erasure doesn’t just apply to the records your organisation holds. You’re also expected to inform any other organisations to whom you’ve disclosed the personal data, so they can act on the erasure too.

Having a clear understanding of all your suppliers and any other organisations you share personal data with means you can efficiently contact them and inform them of erasure requests.

You don’t have to do this if it would prove impossible or involves disproportionate effort. (But again, you must be able to justify this is the case).

6. Public domain data

The Right to Erasure also applies to personal data which has been made public in an online environment (‘The Right to be Forgotten’).
You need to be ready to take reasonable steps to inform other organisations who are handling the personal data, asking them to erase links to, copies of, or replications of the data.

What’s ‘reasonable’ will depend on available technology and the cost of implementation. This expectation scales with size; the bigger your organisation and the more resources you have, the more you’ll be expected to do.

7. Children’s specific rights

Children have special protection under data protection law, and the right to erasure is particularly relevant when a child (or their parent/guardian) has given consent and, at a later stage (even once they’re an adult), they want their personal information removed, especially if it’s available on the internet. Baking in the ability to delete children’s information from the start is crucial.

8. Exemptions

It’s helpful to have a clear checklist of the exemptions that might apply. They don’t all apply in the same way, so be sure to examine each exemption on a case-by-case basis. The ICO exemptions guide is a good starting point.

9. Maintain a log

How do we delete someone’s data, but also prove we have done it? Feels contradictory, doesn’t it?

You’re allowed to keep a log of erasure requests, actions taken and justifications for these. You need to do this to demonstrate compliance.
However, make sure this log is kept securely and only holds the minimum amount of information necessary. I know some organisations that have taken the step of pseudonymising this log for extra protection.
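
One way to pseudonymise such a log is to store a keyed hash of the identifier instead of the identifier itself, so you can still prove a specific request was actioned without re-storing the data. A sketch (the key handling and field names are illustrative):

```python
# Pseudonymised erasure log entry: keep an HMAC of the identifier rather
# than the identifier itself. Store the key securely and separately.
import hashlib
import hmac
from datetime import date

LOG_KEY = b"keep-this-secret-and-separate"   # hypothetical secret key

def log_entry(email: str, action: str) -> dict:
    token = hmac.new(LOG_KEY, email.lower().encode(), hashlib.sha256).hexdigest()
    return {"subject": token, "action": action, "date": date.today().isoformat()}

entry = log_entry("jane@example.com", "erased from CRM and mail platform; backups beyond use")
# Later: recompute the HMAC of a claimed address to match it against the log.
```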

10. Minimisation and retention

The right to erasure (and indeed other privacy rights, such as DSARs) can be less complex if we try to stick to two of the core data protection principles; data minimisation and data retention (storage limitation).

Collecting less data in the first place, using it in limited ways and only keeping it for as long as we need it means there’s less data to trawl through when we get a request to delete it.

Sounds simple, less easy in practice, but worth the effort. Data retention guide