Data Sharing Checklist

June 2024

Controller to Controller Data Sharing

Data protection law doesn’t stop us sharing personal data with other organisations, but does place on us a requirement to do so lawfully, transparently and in line with other key data protection principles.

Organisations often need to share personal data with other parties. This could be reciprocal, one-way, a regular activity, ad-hoc or a one-off.

Quick Data Sharing Checklist

Here’s a quick list of questions to get you started on how to share personal data compliantly.

(The focus here is on sharing data with other controllers, i.e. other organisations which will use personal data for their own purposes. There are separate considerations when sharing data with processors, such as suppliers and service providers.) See: Controller or processor, what are we?

1. Is it necessary?

It may be possible to achieve your objective without sharing personal data at all, or perhaps the data could be anonymised.

2. Do we need to conduct a risk assessment?

Check if what you’re planning to do falls under the mandatory requirement to complete a Data Protection Impact Assessment. Depending on the nature and sensitivity of the data it might be a good idea to conduct one anyway. Quick DPIA Guide.

3. Do people know their data is being shared?

Transparency is key, so it’s important to make sure people know their personal details are being shared. Would they reasonably expect their personal data to be shared in this way?

4. Is it lawful?

To be lawful we need a lawful basis and we need to meet the relevant conditions of the basis we’ve chosen. For example, if we’re relying on consent, is it specific, informed and an unambiguous indication of the person’s wishes? If we’re relying on legitimate interests, have we balanced our interests with those of the people whose data we’re sharing? Quick guide to lawful bases.

5. Can we reduce the amount of data being shared?

Check what data the other organisation actually needs; you may not need to share a whole dataset – a subset may suffice.
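
For illustration, here’s a minimal sketch of trimming a dataset down to an agreed subset of fields before sharing, using Python and pandas. The file and column names are hypothetical, not drawn from any real sharing arrangement.

```python
import pandas as pd

# Hypothetical customer extract; field names are illustrative only
customers = pd.read_csv("customer_extract.csv")

# The receiving organisation only needs enough to deliver its service,
# so share a named subset of columns rather than the whole dataset
FIELDS_TO_SHARE = ["customer_id", "postcode", "service_start_date"]

share_file = customers[FIELDS_TO_SHARE]
share_file.to_csv("customer_share.csv", index=False)
```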

6. Is it secure?

Agree appropriate security measures to protect the personal data, both when it’s shared and at rest. This includes security measures where the other organisation is being given access to your systems. Are controls in place to make sure only those who need access, have access?

7. Can people still exercise their privacy rights?

Both parties should be clear about their responsibilities to fulfil privacy rights, and it should be easy for people to exercise them.

8. How long will the personal data be kept for?

Consider if it’s appropriate to have specific arrangements in place for the shared data to be destroyed after a certain period of time.

9. Is the data being shared with an organisation overseas?

If the personal data is being shared with a business located outside the UK, it will be necessary to consider the international data transfer rules.

10. Do we need a data sharing agreement?

UK GDPR does not specify a legal requirement to have an agreement in place when data is shared between organisations acting as controllers. However, the UK ICO considers it ‘good practice’, as an agreement can set out what happens to the data at each stage, and agreed standards, roles and responsibilities. ICO Data Sharing Agreement guidance.

Other data sharing considerations 

Are we planning to share children’s data?

Proceed with care if you are sharing children’s data. You need to carefully assess how to protect children from the outset, and will need a compelling reason to share data relating to under-18s. This is likely to be a clear case for conducting a DPIA!

Is the other organisation using data for a ‘compatible purpose’?

Consider the original purpose the data was collected for, and whether the organisation you’re sharing it with will use it for a similar purpose. It’s worth noting the UK Department for Education came a cropper for sharing data for incompatible purposes.

Is data being shared as part of a merger or acquisition?

If data is being shared as part of a merger or acquisition, the people the data relates to should be made aware this is happening. You’d want to be clear the data should be used for a similar purpose. Robust due diligence is a must, and perhaps a DPIA to assess and mitigate any risks.

Is it an emergency situation?

We’ve all heard the tales about people being scared they’ll be breaching data protection rules if they share personal data with paramedics, doctors or others in emergency situations. The ICO is clear on this point: in an emergency you should go ahead and share data as is necessary and proportionate.

The ICO has a Data Sharing Code of Practice, full of useful information about how the Regulator would expect organisations to approach this.

What would you change about GDPR?

June 2024

Any regrets about the demise of the UK Data Protection and Digital Information Bill?

Data reform in the UK is dead – well, at least for the time being, and possibly permanently. The announcement of a 4th July General Election means the DPDI Bill has been dropped.

The Bill was controversial. Some feared it would weaken data protection laws in the UK and risked the European Commission overturning the much-valued ‘adequacy’ decision for the UK. Others welcomed a more flexible, business-friendly approach. Some saw it as a mixed bag of good, bad and indifferent ideas, including changes seemingly made for the sake of demonstrating change.

The text of GDPR was finalised eight years ago. Its spin-off, the UK GDPR, is pretty much the same as its EU counterpart, and there are those in both the UK and EU who feel it may be time to update and refresh the legislation.

Here are some thoughts from data protection practitioners on nuggets in the DPDI Bill they wished had been passed, or an aspect of GDPR they would change if they could.

DPDI regrets

Fedelma Good, Data Protection and ePrivacy Specialist

Putting aside all the hours spent reading and assessing all the proposed changes, my biggest regret is that with the demise of the DPDI we will lose the harmonisation of language between the GDPR and the Privacy and Electronic Communications Regulations (PECR) as well as some of the common-sense changes which were being proposed in relation to analytic cookies. It’s sad too, to see that charities will not get the promised access to soft opt-in for their fund-raising activities. Additionally, I feel for the ICO where a huge amount of effort must have already been put into preparing for the proposed changes to their operating model.

Simon Blanchard, Data Protection Network Associates

I liked the concept of ‘recognised’ legitimate interests, where there would be an exemption from the requirement to conduct a Legitimate Interests Assessment in certain situations where there is a clear and compelling benefit – such as national security, public security, defence, emergencies, preventing crime and safeguarding.

Sachiko Scheuing, European Privacy Officer, Acxiom

The Bill proposed giving legal certainty to legitimate interest as a legal ground for the use of data for marketing purposes, by bringing the existing Recital 47 into the main articles. This would have been a welcome move.

Philippa Donn, Data Protection Network Associates

I supported the ‘vexatious and excessive requests’ DPDI proposal – allowing organisations to assess if a DSAR was intended to cause distress, made in bad faith or was an abuse of power. In my experience on occasion this right is exploited. If I’m allowed to dream? I’d advocate for leeway around the time organisations are given to respond to requests – at least a ‘pause the clock’ for bank holidays and Christmas! I think urgency is good, but making busy organisations rush a request is bad.

Ideas for data protection reform

Robert Bond, Senior Counsel, Privacy Partnerships Law

I would change Article 8 of the GDPR to make the protection of children and their personal data applicable to all controllers and not just those that supply information society services. Article 8 only impacts information society service providers in relation to the obtaining of consent of a child, but I feel the provision of any services to a child requires a greater degree of compliance. The ICO’s Children’s Code is valuable, and more controllers need to be focused on the protection of the fundamental rights of the child.

Dominic Batchelor, Head of IP & Privacy, Royal Mail Group

I would update the types of data afforded special protection to reflect modern sensibilities better. Many people would be surprised that data revealing trade union membership, or veganism (if viewed as a philosophical belief), are more tightly regulated than financial data, and that specific parental oversight applies to children’s consent to processing for online services but not necessarily any processing of their data (and that even this control doesn’t apply over the age of 13).

Emma Butler, Creative Privacy

I would take the controller-processor obligations and accountability principle and merge them to create an accountability obligation on all organisations to achieve certain outcomes: the principles, risk assessment, rights, security, transfers and DP by design. All parties in a chain would be legally obliged to understand and determine (and put in a contract) who is doing what with what data, who has which obligations, and who has what liability to whom. Organisations could make arrangements based on facts rather than be shoehorned into a definition based on a legal fiction.

Claire Robson, Governance Director, Chartered Insurance Institute

I would like to see the reintroduction of the term “data controllers in common”. In practice, I found this to be a helpful description which differentiated those circumstances where two organisations held shared data but needed to retain independence of their processing. Without this distinction, I have found myself in many a complex conversation explaining why we are not entering into a joint data controller relationship!

Redouane Serroukh, Head of Information Governance and Risk / DPO, NHS Hertfordshire and West Essex ICB

I’d welcome clarity on the wording surrounding the right of access. Specifically, on its apparent purpose (‘to be aware of, and verify, the lawfulness of processing’, recital 63) and the ability to refuse a request if it is deemed to be ‘manifestly unfounded or excessive’, art 12(5). Why? Currently there is no requirement for a data subject to provide a reason or motive when making a Subject Access Request, which makes it difficult for a data controller to confidently challenge a request or use the provisions above. While some guidance/interpretation exists, there appears to be a regulatory gap in the wording.

Mark Roebuck, Prove Privacy

The current regulation is not effective enough to ensure that regulators are consistent in their approach to sanctions. For example, it is widely discussed on professional social media that the UK’s ICO is ineffective in applying sanctions to UK organisations compared with other EU regulators. Article 63 provides for a ‘consistency mechanism’ but is itself only one paragraph long and provides no binding commitment on regulators to align enforcement.

So there you go! Some ideas from the coalface should data reform ever rear its head again, either in the UK or EU.

Tackling AI and data protection

Raising staff awareness of data protection risks from their use of AI

The growth of AI continues at a tremendous rate. Its use in the workplace has plenty of benefits, including streamlining processes, automating repetitive tasks, and helping employees to do their jobs ‘better’ and more effectively.

While many people are jumping in with both feet, others have growing concerns about the implications for individuals and their personal data. There are also very real concerns surrounding intellectual property and commercially sensitive information which may be ‘leaking’ out of the business through AI applications.

As employees increasingly bring AI into the workplace, the risks grow. A recent Microsoft and LinkedIn report found all generations of workers are bringing their own AI tools to work – from 73% of Boomers through to 85% of Gen Z. The report found many are hiding their use of AI tools from their employers, possibly fearing their jobs may be at risk.

Generative AI is a key focus for data protection authorities. The ICO has recently concluded a year-long investigation into Snap Inc’s launch of the ‘My AI’ chatbot, following concerns data protection risks had not been adequately assessed. The regulator is warning all organisations developing or using generative AI that they must consider data protection from the outset, before bringing products to the market or using them in the workplace.

In this article I’ve taken a look at how generative AI works, the main concerns, what employers can do to try and mitigate the risks and, most importantly, how to control the use of AI in the workplace.

Generative AI and Large Language Models

Generative artificial intelligence relates to algorithms, such as ChatGPT, which can be used to create new content like text, images, video, audio, code and so on. Recent breakthroughs in generative AI have huge potential to impact our whole approach to content creation.

ChatGPT, for instance, relies on a type of machine learning called Large Language Models (LLMs). LLMs are usually very large deep neural networks, trained on giant datasets such as published webpages. Recent technology advances have enabled LLMs to become much faster and more accurate.

What are the main AI concerns?

With increased capabilities and the growth in adoption of AI come existing and emergent risks. We are at a trigger point, where governments and industry alike are keen to realise the benefits to drive growth. The public too are inspired to try out AI models for themselves.

There’s an obvious risk of jobs being displaced, as certain tasks carried out by humans are replaced by AI technologies. Concerns recognised in the technical report accompanying GPT-4 include:

  • Generating inaccurate information
  • Harmful advice or buggy code
  • The proliferation of weapons
  • Risks to privacy and cyber security

Others fear the risks posed when training models using content which could be inaccurate, toxic or biased – not to mention illegally sourced!

The full scope and impact of these new technologies is not yet known and new risks continue to emerge. But there are some questions that need to be answered sooner rather than later, such as:

  • What kinds of problems are these models best capable of solving?
  • What datasets should (and should not) be used to create and train generative AI models?
  • What approaches and controls are required to protect the privacy of individuals?
  • What are the main data protection concerns?

AI data inputs

The datasets used to train generative AI systems are likely to contain personal data that might not have been lawfully obtained. In many AI models, the data used may be obtained by ‘scraping’ (the automated gathering of data online), a practice which often conflicts with core privacy principles.

Certain information may have been used without consideration of intellectual property rights, where the owners have not been approached nor given their consent for use.

The Italian Data Protection Authority (Garante) blocked ChatGPT, citing its illegal collection of data and the absence of systems to verify the age of minors. Some observers have pointed out these concerns are broadly similar to those which led to Clearview AI receiving an enforcement notice.

AI data outputs

AI not only ingests personal data, but may also generate it. Algorithms can produce new data that may unexpectedly expose personal details, leaving individuals with limited control over their data.

There are many other concerns, such as transparency, algorithmic bias, inaccurate predictions and the risk of discrimination. Fundamentally, there are concerns that appropriate accountability for AI is often lacking.

Key considerations for organisations looking to adopt AI

We need to understand what people across the business are already doing with AI, or planning to do. Get clarity about any personal data they are using; particularly any sensitive or special category data. Make sure they are aware of the potential risks and know what questions to ask, rather than dive straight in.

We suggest you start by talking to business leaders and their teams to identify emerging uses of AI across your business. It’s a good idea to carry out a Data Protection Impact Assessment (DPIA) to assess privacy risks and identify proportionate privacy measures.

Rather than adopting huge ‘off-the-shelf’ generative AI models like ChatGPT (and what may come next), businesses may consider adopting smaller, more specialised AI models trained on the most relevant, compliantly gathered datasets.

Do we need an AI Policy for employees?

To make sure AI is being used responsibly in your organisation, it’s crucial employees are provided with clear guidance on considerations and expected behaviour when using AI tools. A robust AI Policy can go some way to mitigate risks, such as those relating to inaccurate or harmful outputs, data protection, intellectual property, commercially sensitive information and so on. Here are some pointers for areas to cover in an AI Policy:

1. Your approach to AI: Does your company permit, limit or ban the use of AI in the workplace? What tasks is it permitted to be used for? What tasks must it never be used for?

2. Internal procedures and rules: Set out clear steps employees must follow. Be clear where the red lines are and who they should contact if they have questions or concerns, or if they need specialist support.

3. AI risks: Clearly explain the risks. You are likely to want to prohibit employees from entering sensitive data of a personal, commercial or confidential nature into AI tools.

4. Review of AI-generated work: Humans should review all AI-generated outputs, as these may be inaccurate or completely wrong. Human review should be baked into your procedures. Also, will you hold employees accountable for errors in their AI-generated work?

5. Permitted AI tools/platforms: List the specific tools and platforms employees are permitted to use.

Regularly update and circulate the policy to take account of developments.

In all of this, organisations need to be mindful of emerging AI regulations around the globe, in particular those of the jurisdictions in which they operate.

Differing regulatory approaches

EU – The EU has adopted the world’s first Artificial Intelligence Act. It takes a ‘harm and risk’ approach which bans ‘unacceptable’ uses of artificial intelligence and introduces specific rules for AI systems proportionate to the risk they pose. It imposes extensive requirements on those developing and deploying high-risk AI systems, while taking a lighter touch for low-risk/low-harm AI applications.

Some have questioned whether existing data protection and privacy laws are appropriate for addressing AI risk. We should be mindful AI can increase privacy challenges and add new complexities to them. IAPP EU AI Cheat Sheet

UK – Despite calls for targeted AI regulation, the UK has no EU-equivalent legislation and currently looks unlikely to get one in the foreseeable future. The current Tory Government says it’s keen not to rush in and legislate on AI, fearing specific rules introduced too swiftly could quickly become outdated or ineffective. For the time being the UK is sticking to a non-statutory, principles-based approach, focusing on the following:

  • Safety, security, and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance; and
  • Contestability and redress.

Key regulators such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA) and others are being asked to take the lead. Alongside this, a new advisory service, the AI and Digital Hub, has been launched.

There’s a recognition that advanced General Purpose AI may require binding rules. The government’s approach is set out in its response to the consultation on last year’s AI Regulation White Paper. ICO guidance can be found here: Guidance on AI and data protection. Also see Regulating AI: The ICO’s strategic approach, April 2024

US – In the US a number of AI guidelines and frameworks have been published. The National AI Research and Development Strategic Plan was updated in 2023. This stresses a co-ordinated approach to international collaboration in AI research.

As for the rest of the world, the IAPP has helpfully published a Global AI Legislation Tracker 

Wherever you operate, it is vital data protection professionals seek to understand how their organisations are planning to use AI, now and in the future. Evaluate how the models work and assess any data protection and privacy risks before adopting them.

Access controls: Protecting your systems and data

Is your data properly protected?

Do existing staff or former employees have access to personal data they shouldn’t have access to?  Keeping your business’ IT estate and personal data safe and secure is vital.  One of the key ways to achieve this is by having robust access controls.

Failure to make sure you have appropriate measures and controls to protect your network and the personal data on it could lead to a data breach. This could have very serious consequences for your customers and staff, and the business’ reputation and finances.

How things can go wrong

  • Recently a former management trainee at a car rental company was found guilty and fined for illegally obtaining customer records. Accessing this data fell outside his role at the time.
  • In 2023 a former 111 call centre advisor was found guilty and fined for illegally accessing the medical records of a child and his family.
  • In 2022 a former staff advisor for an NHS Foundation was found guilty of accessing patient records without a valid reason.

Anecdotally, we know of cases of former employees being found to be using their previous employer’s personal data once they have moved on to a new role.

The ability to access and either deliberately or accidentally misuse data is a common risk for all organisations. Add to this the risk of more employees and contractors working remotely, and it’s clear we need to take control of who has access to what.

High-level checklist

1. Apply the ‘Principle of Least Privilege’

There’s a useful security principle, known as ‘the principle of least privilege’ (PoLP).  This sets a rule that employees should have only the minimum access rights needed to perform their job functions.

Think of it in the same way as the ‘minimisation’ principle within GDPR.  You grant the minimum access necessary for each user to meet the specific set of tasks their role requires, with the specific datasets they need.

By adopting this principle, you can limit the risk of employees accumulating access rights over time. You’ll need to periodically check to make sure they still need the access rights they have. For example, when someone changes role, their access needs may also change.

If your access controls haven’t been reviewed for a long time, adopting PoLP can give you a great starting point to tighten up security.
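
As a rough illustration, a least-privilege review can be as simple as comparing each user’s granted rights against a baseline for their role. The roles, rights and users below are invented for the sketch; a real IAM system’s data model will differ.

```python
# Minimal sketch of a least-privilege access review.
# Role baselines and user grants are illustrative assumptions.
ROLE_BASELINE = {
    "hr_advisor": {"read_hr_records"},
    "finance_analyst": {"read_invoices", "read_ledger"},
}

user_grants = {
    "asmith": ("finance_analyst", {"read_invoices", "read_ledger", "read_hr_records"}),
    "bjones": ("hr_advisor", {"read_hr_records"}),
}

for user, (role, granted) in user_grants.items():
    excess = granted - ROLE_BASELINE[role]
    if excess:
        # Flag any rights beyond the role's minimum for revocation
        print(f"{user}: revoke {sorted(excess)}")
```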

2. Identity and Access Management

IAM is a broad term for the policy, processes and technology you use to administer employee access to your IT resources.

IAM technology can join it all up – a single place where your business users can be authenticated when they sign into the network and be granted specific access to the selected IT resources, datasets and functions they need for their role.  One IAM example you may have heard of is Microsoft’s Active Directory.

3. Role-based access

Your business might have several departments and various levels of responsibility within them.  Most employees won’t need access to all areas.

Many businesses adopt a framework in which employees can be identified by their job role and level, so they can be given access rights which meet the needs of the type of job they do.
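
A minimal sketch of role-based access in Python follows. The roles and permission names are illustrative assumptions; production RBAC frameworks offer far richer models (hierarchies, groups, deny rules and so on).

```python
from enum import Enum

class Role(Enum):
    MARKETING = "marketing"
    HR = "hr"
    IT_ADMIN = "it_admin"

# Illustrative role-to-permission mapping
PERMISSIONS = {
    Role.MARKETING: {"campaign_data:read"},
    Role.HR: {"hr_records:read", "hr_records:write"},
    Role.IT_ADMIN: {"systems:admin"},
}

def can_access(role: Role, permission: str) -> bool:
    # Access is granted only if the permission is in the role's set
    return permission in PERMISSIONS.get(role, set())

assert can_access(Role.HR, "hr_records:read")
assert not can_access(Role.MARKETING, "hr_records:read")
```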

4. Security layers

Striking the right balance between usability and security is not easy.   It’s important to consider the sensitivity of different data and the risks if that data was breached.  You can take a proportionate approach to setting your security controls.

For example, personal data, financial data, special category or other sensitive personal data, commercially sensitive data (and so on) will need a greater level of security than most other data.

Technologies can help you apply proportionate levels of security.  Implementing security technologies at the appropriate levels can give greater protection to certain systems & data which demand a high level of security (i.e. strictly-controlled access), while allowing non-confidential or non-sensitive information to be accessed quickly by a wider audience.

5. Using biometrics

How do you access your laptop or phone? Many of us use our fingerprint or facial recognition, which gives a high level of security using our own biometric data.  But some say, for all their convenience benefits, they are not as secure as a complex password!

But then, how many of us really use complex passwords? Perhaps you use an app to generate and store complex passwords for you.  Sadly lots of people use words, names or memorable dates within their passwords. Security is only going to be as good as your weakest link.

6. Multi-factor authentication (MFA)

Multi-factor authentication has become a business standard in many situations, to prevent fraudulent use of stolen passwords or PINs.

But do make sure it’s set up effectively. I’ve seen some examples where MFA has to be activated by the user themselves. So if they fail to activate it, there’s little point having it.  I’ve heard about data breaches happening following ineffective implementation of MFA, so do be vigilant.
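
By way of example, here’s a minimal sketch of the TOTP (time-based one-time password) flow behind many authenticator apps, using the pyotp library. The account and issuer names are placeholders, and a real deployment must store the shared secret securely server-side and enforce enrolment rather than leaving activation to the user.

```python
import pyotp  # pip install pyotp

# Enrolment: generate a per-user secret (store it securely in practice)
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user's authenticator app derives the same 6-digit code from the
# shared secret; a provisioning URI lets the app enrol via QR code
print(totp.provisioning_uri(name="asmith@example.com", issuer_name="ExampleCorp"))

# Login: verify the submitted code as the second factor
submitted_code = totp.now()  # stand-in for the user's input
assert totp.verify(submitted_code)
```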

There are an array of measures which can be adopted. This is just a taster, which I hope you found useful – stay safe and secure!

Managing data deletion, destruction and anonymisation

How to keep what you need and get rid of what you don't

Clearing out personal data your business no longer needs is a really simple concept, but in practice it can be rather tricky to achieve! It throws up key considerations, such as whether to anonymise, or how to make sure it’s deleted or securely destroyed. Let’s take a look at the key considerations and how to implement a robust plan.

Data retention requirements and risks

Data protection law stipulates organisations must only keep personal data as long as necessary and only for the purposes they have specified. There are risks associated with both keeping personal data too long, or not keeping it long enough. These risks include, but are not limited to:

  • causing the impact of a personal data breach to be significantly worse – i.e. it involves personal data which an organisation has no justification for keeping. Regulatory enforcement action could be more severe, and the damage to an organisation’s reputation worse. This also raises the risk of class actions or individual compensation claims.
  • falling foul of relevant laws by failing to keep records for legally-defined periods.
  • an inability to respond to complaints, litigation or regulatory enforcement for failing to keep data necessary to meet contractual or commercial terms.

Data retention policy and schedule

To manage this legal obligation successfully, you’ll need to start with an up-to-date data retention policy and schedule. These should clearly identify which types of personal data your business processes, for what purposes, how long each should typically be kept and under what circumstances you might need to hold it for longer.

If your data retention policy or schedule is lacking, first focus on making sure these are brought up to scratch. Our Data Retention Guidance has some useful templates.
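
As an illustration of what a machine-readable schedule might look like, here’s a minimal sketch in Python. The record types, purposes and periods are invented examples; your own schedule must reflect your legal and commercial requirements.

```python
from dataclasses import dataclass

@dataclass
class RetentionRule:
    record_type: str
    purpose: str
    retention_months: int
    extended_hold_grounds: str  # when longer retention may be justified

# Illustrative entries only; periods must come from your own legal advice
RETENTION_SCHEDULE = [
    RetentionRule("job_applications", "recruitment", 6,
                  "ongoing discrimination claim"),
    RetentionRule("payroll_records", "tax compliance", 72,
                  "open tax enquiry"),
    RetentionRule("marketing_consents", "direct marketing", 24,
                  "none"),
]
```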

5 Key steps when the retention period is reached

When an agreed retention period is reached (as per your retention schedule), we’d recommend taking the following steps (a minimal sketch of automating the first steps follows the list):

  1. Identify the relevant records which have reached their retention period
  2. Notify the relevant business owner to confirm the data is no longer needed
  3. Consider any changes in circumstances which may require longer retention of the data
  4. Make a decision on what happens to the data
  5. Document the decision and keep evidence of the action
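
Here’s that sketch – a rough Python illustration of flagging records whose retention period has passed and notifying the business owner (steps 1 and 2). The record structure and retention periods are assumptions for the example.

```python
from datetime import date, timedelta

# Illustrative records; 'retention_days' would come from your schedule
records = [
    {"id": 101, "owner": "hr", "created": date(2018, 5, 1), "retention_days": 2190},
    {"id": 102, "owner": "marketing", "created": date(2023, 9, 1), "retention_days": 730},
]

def due_for_review(record, today=None):
    # A record is due once its creation date plus retention period has passed
    today = today or date.today()
    return today >= record["created"] + timedelta(days=record["retention_days"])

for record in records:
    if due_for_review(record):
        # Steps 2-5: notify the owner, check for changed circumstances,
        # decide the outcome, then document the decision taken
        print(f"Record {record['id']}: notify '{record['owner']}' owner for review")
```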

Making the right decision when the retention period is reached

There are different approaches an organisation can take when the data retention period is reached, such as:

  • Delete it – usually the default option
  • Anonymise it
  • Securely destroy it – for physical records, such as HR files

Deletion of records might seem the obvious choice, and it’s often the best one too, but take care how you delete data. Sometimes deleting whole records can affect key processes on your systems such as reporting, algorithms and other programs. Check with your IT colleagues first.

Anonymisation

Most organisations want to extract increasing amounts of information and value from their digital assets. In some situations, it can be helpful to remove any personal identifiers so you can keep the data that remains after the retention period has been reached. For example:

  • You might want to continue to provide management information or historical analysis, which you can do in an anonymised form. This is quite common
  • If you have data of historic marketing campaign responders, you may wish to keep certain non-personal campaign data in an anonymised form for reporting or analytical purposes, such as response volumes by segment, phasing of responses, and so on
  • If you hold records of job applicants you may wish to keep certain demographics (such as gender or diversity information) in an anonymised form. This might support your equal opportunities endeavours

To be clear, anonymisation is the process of removing ALL information which could be used to identify a living person, so the data that remains can no longer be attributed back to any unique individuals.

Once these personal identifiers are deleted, data protection laws do not apply to the anonymised information that remains, so you may continue to hold it. But you have to make sure it is truly anonymised.

The ICO stresses you should be careful when attempting to anonymise information. For the information to be truly anonymised, you must not be able to re-identify individuals.  If at any point reasonably available means could be used to re-identify the individuals, the data will not have been effectively anonymised, but will have merely been pseudonymised. This means it should still be treated as personal data.

Whilst pseudonymising data does reduce the risks to data subjects, in the context of retention it is not sufficient for personal data you no longer need to keep.
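
To make the distinction concrete, here’s a minimal anonymisation sketch using Python and pandas: direct identifiers are dropped and the remainder aggregated so no row relates to an identifiable person. The dataset and column names are invented, and note that very small counts can still carry a re-identification risk.

```python
import pandas as pd

# Illustrative applicant data; column names are assumptions
applicants = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Patel"],
    "email": ["a@x.com", "b@y.com", "c@z.com"],
    "gender": ["F", "M", "F"],
    "outcome": ["hired", "rejected", "rejected"],
})

# Drop direct identifiers, then aggregate to counts so no row
# can be attributed back to a unique individual
anonymised = (
    applicants.drop(columns=["name", "email"])
    .groupby(["gender", "outcome"])
    .size()
    .reset_index(name="count")
)
print(anonymised)  # beware: counts of 1 may still allow re-identification
```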

How to manage deletion

There are software methods of deleting data, which may involve removing whole records from a dataset or overwriting them. For example, using zeros and ones to overwrite the personal identifiers in the data.

Once the personal identifiers are overwritten, that data will be rendered unrecoverable, and therefore it’s no longer classed as personal data.
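
As a simple illustration, here’s a sketch of overwriting identifier fields in a stored dataset using Python and pandas. The file and column names are assumptions, and (as the following paragraphs note) backup copies and file-system remnants need separate handling.

```python
import pandas as pd

# Illustrative dataset; in practice this runs against your live store
orders = pd.read_csv("orders.csv")

# Overwrite the personal identifier columns, then persist, so the
# original values are no longer present in this file
IDENTIFIER_COLUMNS = ["customer_name", "email", "phone"]
orders[IDENTIFIER_COLUMNS] = "0"
orders.to_csv("orders.csv", index=False)
```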

This deletion process should include backup copies of data. Whilst personal data may be instantly deleted from live systems, it may still remain within the backup environment until it is overwritten.

If the backup data cannot be immediately overwritten it must be put ‘beyond use’, i.e. you must make sure the data is not used for any other purpose and is simply held on your systems until it’s replaced, in line with an established schedule.

Examples of where data may be put ‘beyond use’ are:

  • When information should have been deleted but has not yet been overwritten
  • Where information should have been deleted but it is not possible to delete this information without also deleting other information held in the same batch

The ICO (for example) will be satisfied that information is ‘beyond use’ if the data controller:

  • is not able, or will not attempt, to use the personal data to inform any decision about any individual or in a way that affects them;
  • does not give any other organisation access to the personal data;
  • has in place appropriate technical and organisational security; and
  • commits to permanently deleting the information if, or when, this becomes possible.

Destruction of physical records

Destruction is the final action for about 95% of most organisations’ physical records. Physical destruction may include shredding, pulping or burning paper records.

Destruction is likely to be the best course of action for physical records when the organisation no longer needs to keep the data, and when it does not need to hold data in an anonymised format.

Controllers are accountable for the way personal data is processed and consequently, the disposal decision should be documented in a disposal schedule.

Many organisations use other organisations to manage their disposal or destruction of physical records. There are benefits of using third parties, such as reducing in-house storage costs.

Remember, third parties providing this kind of service will be regarded as a data processor, therefore you’ll need to make sure an appropriate contract is in place which includes the usual data protection clauses.

Destruction may be carried out remotely following an agreed process. For instance, a processor might provide regular notifications of batches due to be destroyed in line with documented retention periods.

Don’t forget unstructured data!

Retention periods will also apply to unstructured data which contains personal identifiers, the most common being electronic communications records such as emails, instant messages, call recordings and so on.

As you can imagine, unstructured data records present some real challenges. You’ll need to be able to review the records to find any personal data stored there, so it can be deleted in line with your retention schedules, or for an erasure request.

Depending on the size of your organisation, you may need to use specialist software tools to perform content analysis of unstructured data.
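
As a taste of what such tools do, here’s a minimal Python sketch that scans free text for common personal identifiers using regular expressions. The patterns are deliberately crude illustrations; specialist content-analysis software is far more sophisticated.

```python
import re

# Very rough patterns for illustration only
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b(?:0|\+44)\d{9,10}\b"),
}

def scan_text(text: str) -> dict:
    # Return any matches for each identifier type found in the text
    return {label: pattern.findall(text) for label, pattern in PATTERNS.items()}

message = "Please call Jo on 07700900123 or email jo.bloggs@example.com"
print(scan_text(message))
```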

In summary, whilst data retention as a concept appears straightforward, it does require some planning, clearly assigned responsibilities for implementing retention periods, and the technical means to do so effectively.

Cookies – Consent or Pay?

March 2024

UK and EU data protection regulators are grappling with the compliance of the so-called ‘consent or pay’ model, also known as ‘pay or okay’. Put simply, this model means accessing online content or services is dependent on users either consenting to being tracked for advertising purposes (using cookies or similar technologies), or paying for access without tracking and ads.

This model – and the varying approaches to it – raises questions about whether this can be fair, and whether consent can be ‘freely given’. But it also touches on far more than data protection. It speaks to acceptable business practices, competition models, consumer protection laws, accessible credible journalism and more.

Ad-funded online content and services

‘Consent or pay’ is one of a number of solutions intended to address issues surrounding online advertising and its use of cookies. None of them, it has to be said, are perfect.

This is all coming to a head as data protection regulators in Europe and the UK push for compliance with cookie laws (e.g. PECR in the UK). For example, the UK’s ICO says that, for the necessary consent to be valid, website operators must make sure it’s as easy for people to ‘Reject all’ advertising cookies as it is to ‘Accept all’. More UK companies to be targeted for non-compliant cookies

This causes a problem. As increasing numbers click ‘Reject all’, advertising revenues will take a significant hit. And advertising matters. When a US Senator asked Mark Zuckerberg how Facebook remained free, he famously and simply answered: “We run ads”.

It’s a point that can be made more broadly – we’ve all enjoyed a vast amount of free online content and services because of personalised advertising. Lots of the content and services we routinely access online are ad-funded and rely on a large percentage of users accepting cookies to target these ads. It’s why we can waste time (or relax) playing online games for free.

Online content and service providers have to pay people to create content, run websites, create apps and so on. Commercial businesses also want to turn a profit. The balance lies between the quality, value and integrity of the content they offer, and the advertising revenues which can be gained by personalised advertising.

We’ve all been tracked and served adverts as we browse the internet. Personalised ads mean we have a better chance of being shown ads for products and services which match our interests and needs. Yes, some of this activity is annoying, trades on our habits and may sometimes even be downright harmful. That isn’t to say all of it is problematic; again, this is a question of balance. Regulators have to tread a delicate line between protecting end-users without hampering business from offering us fair products, content and services.

We may not want to be tracked, but online publishers and service providers can’t be expected to provide something for nothing. Businesses aren’t under any obligation to provide us with stuff completely for free.

Which brings us back to the concept of ‘consent or pay’. This concept hit the headlines last year when Meta introduced a payment option to users of Facebook and Instagram in the EU (not in the UK), offering an ad-free experience for a fee. This is currently the subject of complaints by consumer rights groups in Europe. Meanwhile the ‘consent or pay’ approach has been adopted by some of Germany’s major newspapers, and others.

Just pay

Another option is for all content to be put behind a pay wall. For example, in the UK you have to subscribe and pay to read online articles published by the Telegraph, The Times and the Spectator magazine. Often a limited number of free articles are provided before you have to pay.

Cookie free solutions

Other cookie-less ad solutions are being rapidly developed, such as contextual advertising. You can read more about the options here: Life after cookies

But with solutions which don’t use third-party tracking cookies still in their infancy, and concerns they won’t be able to produce the same return on investment as cookie-driven advertising, there’s a need to plug the funding gap fast.

‘Consent or pay’ – compliant or not compliant?

In the UK, the ICO hasn’t decreed whether ‘consent or pay’ is a fair approach or not. It’s asked for feedback, and in doing so set out its initial ‘view’.

While stating UK data protection law doesn’t prohibit ‘consent or pay’, the Regulator says organisations must focus on people’s interests, rights and freedoms, making sure people are fully aware of their options in order to make free and informed choices. It’s worth noting that in the EU, ‘consent or pay’ is not prohibited either.

The ICO has set out four areas which need to be addressed when adopting this model, and has asked for feedback on any other factors which should be taken into account.

1. Imbalance of power

The ICO says consent for advertising will not be freely given in situations where people have little or no choice about whether to use a service or not. This could be where the provider is a public service or has a ‘position of market power’.

2. Equivalence of services

If the ad-free service bundles in other additional ‘premium’ extras, this could affect the validity of consent for the ad-funded service.

3. Appropriate fee

Consent for targeted advertising is, in the ICO’s view, unlikely to be freely given if the alternative is an “unreasonably high fee”. The Regulator is suggesting the fee should be set at a level which gives people a realistic choice between the options.

4. Privacy by design

Any consent request choices should be presented equally and fairly. The ICO says people should be given clear, understandable information about each option. Consent for advertising is unlikely to be freely given if people don’t understand how their personal information is going to be used.

Another key consideration is how people can exercise their right to withdraw their consent. The ICO reiterates it must be as easy for people to withdraw their consent as it is to give it. Organisations also need to make sure users can withdraw their consent without detriment. This may be a tricky circle to square.

In all of this there’s an important point – whilst consent must be ‘freely given’ under EU/UK data protection law, this doesn’t mean people must get content and services for free too. The ‘consent or pay’ model essentially offers a choice: pay with your data, or pay with your money.

Etienne Drouard is a Partner at Hogan Lovells (Paris) and his view is: “The very nature of consent is being offered an informed choice. ‘Pay or OK’ (‘Pay or Consent’) is, per se, a valid alternative. It requires a case-by-case and multi-disciplinary analysis. Not a ban.”

Have your say – UK ICO Call for Feedback on Consent or Pay

Time to plan ahead

Fedelma Good, Data Protection and ePrivacy Consultant, and former board member of the UK Data & Marketing Association, urges advertisers and publishers to plan ahead; “To say that online advertising is entering a period of turmoil is putting it mildly. Combining the issues of ‘consent or pay’ with Google’s cookie deprecation plans and you have an environment of uncertainty which advertisers and publishers alike will ignore at their peril. My advice to anyone reading this article is not only to track developments in these areas carefully, but perhaps more importantly to make sure you understand your own circumstances and options and plan ahead.”

Privacy and consumer rights groups

It’s clear privacy and consumer rights groups are pushing for change. Back in 2021 cookie banners were the focus, with the privacy rights group noyb.eu firing off hundreds of complaints to companies for using ‘unlawful banners’. The group developed software to recognise various types of unlawful banners and automatically generate complaints.

Max Schrems, Chair of noyb, said: “A whole industry of consultants and designers develop crazy click labyrinths to ensure imaginary consent rates. Frustrating people into clicking ‘okay’ is a clear violation of the GDPR’s principles. Under the law, companies must facilitate users to express their choice and design systems fairly. Companies openly admit that only 3% of all users actually want to accept cookies, but more than 90% can be nudged into clicking the ‘agree’ button.”

Now attention has turned to ‘consent or pay’. Meta’s use of this model has led to eight consumer rights groups filing complaints with different European data protection authorities. The claims focus on concerns Meta makes it impossible for consumers to know how the processing changes if they choose one option or another. It’s argued the choice given is meaningless.

The fundamental right to conduct business

There’s a complex balance here between people’s fundamental privacy rights and the fundamental right to conduct business. For publishers and other online services, advertising is a crucial element of conducting business. In the distant past, advertising was expensive.

As Sachiko Scheuing, European Privacy Officer at Acxiom and Co-Chairwoman of FEDMA, succinctly puts it: “Advertising used to be a privilege enjoyed by huge brands. Personalised advertisement democratised advertising to SMEs and start-ups.”

The growth of the internet and the advent of personalised advertising technologies has undoubtedly made digital advertising affordable and effective for smaller businesses and not-for-profits.

Well-established brands are more likely to be able to put up a paywall. People already trust their content, or enjoy their service and are prepared to pay. There’s a risk lesser-known brands and start-ups won’t be able to compete.

Is credible journalism under threat?

A Data Protection Officer at one premium UK publisher, who wishes to remain anonymous, fears the drive for cookie compliance risks damaging the ability to produce high quality journalism.

“In the face of unprecedented industry challenges, as more content is consumed on social media platforms, the vital ad revenues that support public interest journalism are under threat from cookie compliance, of all things. It seems like data regulators either don’t understand, or don’t care, about the damage they’re already inflicting on the news media’s ability to invest in journalism.

If publishers comply and implement “reject all” they lose ad revenue through decimated consent rates. If they fight their corner, they face enforcement action. Either way, publishers are emptying already dwindling coffers on legal fees, or buying novel consent or pay solutions.

Unless legislative change comes quickly, or the regulators realise that cookie compliance should not be an enforcement priority, local and national publishers may disappear, just at a time when trusted sources of news have never been more needed.”

Broader societal considerations

There’s a risk as more content hides behind paywalls, we’ll create a world where only those who can afford to pay will be able to access quality, trustworthy content.

‘Consent or Pay’ may be far from perfect, but it does allow people who can’t afford to pay to have equal access to content and online services, albeit they get tracked, while those who have money to spend can choose to pay and go ad-free.

If the consent or pay model fails, and cookie-less solutions fail to deliver a credible alternative, I fear more decent journalism will go completely behind paywalls, if that’s the only option to plug the funding gap.

I am in my mid-50s and can afford to pay. My son, in his late teens, can’t. I worry poor quality journalism, fake news and AI-generated dross might soon be all he and his generation will be able to access. That’s not to say there isn’t some great user-generated content out there. But it does mean having difficult and honest conversations about regulation and the right of businesses to make a profit in an age of politicised, fraudulent and bogus online content.

International Data Transfers Guide

March 2024

A top-level overview of international data transfers

There are restrictions under UK and EU data protection law when transferring personal data to organisations in other countries, and between the UK and EU.

The rules regarding restricted transfers can be an enigma to the uninitiated and their complexity has been magnified by Brexit and by an infamous 2020 European Court ruling known as ‘Schrems II’.

This guide aims to give an overview of what international data transfers are and the key data protection considerations. It does not cover all the intricacies, nor data transfers for immigration and law enforcement purposes. Also please be aware there may be specific restrictions in place under laws in other territories around the world.

As a general rule, controllers based in the UK or EU are responsible for making sure suitable measures are in place for restricted transfers to other controllers, or to processors. A processor will be responsible when they initiate the transfer, usually to a sub-processor.

Some might be thinking: what would be the impact if we just put all of this into the ‘too difficult’ tray? It’s certainly an area which many feel has become unduly complicated and an onerous paperwork exercise.

However, getting the detail right will pay off should things go wrong. For example, if a supplier you use based overseas suffers a data breach, the consequences may be more significant if you have not covered off legal requirements surrounding restricted transfers. It’s an area likely to come under regulatory scrutiny, in the event of a breach or should a complaint be raised.

What is an international data transfer?

An international data transfer refers to the act of sending or transmitting personal data from one country to another. It also covers when an organisation makes personal data available to another entity (‘third party’) located in another country; in other words, the personal data can be accessed from overseas.

There are specific rules about the transfer of personal data from a UK sender to a receiver located outside the UK (under UK GDPR) and similar transfers from EEA senders (under EU GDPR); these are known as restricted transfers. A receiver could be a separate company, public body, sole trader, partnership or other organisation.

EU GDPR

Personal data can flow freely within the European Economic Area (EEA). A restricted transfer takes place when personal data is sent or accessible outside the EEA. Where such a transfer takes place, specific safeguards should be in place to make the transfer lawful under EU GDPR.

UK GDPR

A restricted transfer takes place when personal data is transmitted, sent or accessed outside the UK, and safeguards should be in place to ensure the transfer is lawful.

The reason for these rules is to protect people’s legal rights, as there’s a risk people could lose control over their personal information when it’s transferred to another country.

Examples of restricted transfers would be:

  • Sending paper or electronic documents, or any kind of record containing personal data, by email or post to another country
  • Giving a supplier based in another country access to personal data
  • Giving access to UK/EU employee data to another entity in the same corporate group, based in another country.

There are some notable exceptions:

  • Our own employees: A restricted transfer does not take place when sending personal data to someone employed by your own company, or when they access personal data from overseas. However, the rules do cover sending, transmitting or making personal data available to another entity within the same corporate group, where entities operate in different countries.
  • Data in transit: Where personal data is simply routed via several other countries, but there is no intention that it will be accessed or manipulated along the way, this won’t represent a restricted transfer. ICO guidance says: “Transfer does not mean the same as transit. If personal data is just electronically routed through a non-UK country, but the transfer is actually from one UK organisation to another, then it is not a restricted transfer.”

What are the safeguards for restricted transfers?

A. Adequacy

Adequacy is when the receiving country has been judged to have a similar level of data protection standards in place to the sender country. An Adequacy Decision allows for the free flow of personal data without any additional safeguards or measures.

Transfers from the EEA
The European Commission has awarded adequacy decisions to a number of countries including the UK, Japan, New Zealand, Uruguay and Switzerland. A full list can be found on the European Commission website – Adequacy Decisions.

Therefore personal data can flow freely between EEA countries and an ‘adequate’ country. These decisions are kept under review. There are some concerns UK Government plans to reform data protection law could potentially jeopardise the UK’s current EC adequacy decision.

EU-US Data Privacy Framework: The EC adopted this framework for transfers from the EU to the US in July 2023. It allows for the free flow of personal data to organisations in the US which have self-certified and meet the principles of the DPF. A list of self-certified organisations can be found on the U.S. Department of Commerce DPF website.

Transfers from the UK
There are provisions which permit the transfer of personal data between the UK and the EEA, and to any countries which are covered by a European Commission ‘adequacy decision’ (as of January 2021). Therefore personal data can flow freely between UK and EEA and any of the countries awarded adequacy by the EC.

The UK Government has the power to make its own ‘adequacy decisions’ on countries it deems suitable for transfers from the UK. More information about UK adequacy decisions can be found here.

UK-US Data Bridge: The UK-US ‘Data Bridge’ was finalised on 21st September 2023 and went live on 12th October 2023. Like the EU-US Data Privacy Framework, organisations based in the US must self-certify to the DPF, but they must also sign up to the ‘UK extension’. Read more about the Data Bridge

B. EU Standard Contractual Clauses

In the absence of an EC adequacy decision, Standard Contractual Clauses (SCCs) can be used which the sender and the receiver of the personal data both sign up to. These comprise a number of specific contractual obligations designed to provide legal protection for personal data when transferred to ‘third countries’.

SCCs can be used for restricted transfers from the EEA to other territories (including those not covered by adequacy). The European Commission published new SCCs in 2021 which should be used for new and replacement contracts. The SCCs cover specific clauses which can be used for different types of transfer:

  • controller-to-controller
  • controller-to-processor
  • processor-to-processor
  • processor-to-controller

There’s an option for more than two parties to join and use the clauses through a docking clause. More information can be found on the European Commission website – Standard Contractual Clauses

Two points worth noting:

  • The deadline to update contracts which use the old SCCs has passed – 27th December 2022.
  • Senders in the UK cannot solely rely on EU SCCs, see the point below about the UK Addendum.

C. UK International Data Transfer Agreement (IDTA) or Addendum to EU SCCs

Senders in the UK (post Brexit) have two possible options here as a lawful tool to comply with UK GDPR when making restricted transfers.

  • The International Data Transfer Agreement, or
  • The Addendum to the new EU SCCs

ICO guidance stresses the new EU SCCs are not valid for restricted transfers under UK GDPR on their own, but using the Addendum allows you to rely on the new EU SCCs. In other words, the UK Addendum works to ensure EU SCCs are fit for purpose in a UK context.

In practice, if the transfer is solely from the UK, the UK IDTA would be appropriate. If the transfer includes both UK and EU personal data, the EU SCCs with the UK Addendum would be appropriate, to cover the protection of the rights of EU as well as UK citizens.

It’s worth noting, contracts signed on or before 21 September 2022 can continue to use the old SCCs until 21 March 2024. Contracts signed after 21 September 2022 must use the IDTA or the Addendum to new EU SCC, in order to be effective. See ICO Guidance

The additional requirement for a risk assessment

The ‘Schrems II’ ruling in 2020 invalidated the EU-US Privacy Shield (predecessor of the Data Privacy Framework) and raised concerns about the use of EU SCCs to protect personal data. Concerns raised included the potential access to personal data by law enforcement or national security agencies in receiver countries.

As a result of this ruling there’s a requirement when using the EU SCCs or the UK IDTA to conduct a written risk assessment to determine whether personal data will be adequately protected. In the EU this is known as a Transfer Impact Assessment, and in the UK, it’s called a Transfer Risk Assessment (TRA).

The ICO has published TRA Guidance, which includes a TRA tool; a template document of questions and guidance to help businesses carry out a TRA.

D. Binding Corporate Rules (BCR)

BCRs can be used as a safeguard for transfers within companies in the same group. While some global organisations have gone down this route, it can be incredibly onerous and takes a considerable amount of time to complete BCRs.

BCRs need to be approved by a Supervisory Authority (for example the ICO in the UK, or the CNIL in France). This has been known to take years, so many groups have chosen to use EU SCCs (with the UK Addendum if necessary) or the IDTA, in preference to going down the BCR route.

E. Other safeguards

Other safeguard measures include:

  • Approved codes of conduct
  • Approved certification mechanisms
  • Legally binding and enforceable instruments between public authorities or bodies.

What are the exemptions for restricted transfers?

It may be worth considering whether an exemption may apply to your restricted transfer. These can be used in limited circumstances and include:

  • Explicit consent – the transfer is made with the explicit consent of the individual whose data is being transferred, after they have been informed of the possible risks.
  • Contract – where the transfer is necessary for the performance of a contract between the individual and the organisation or for necessary pre-contractual steps.
  • Public interest – the transfer is necessary for important reasons of public interest.
  • Legal necessity – the transfer is necessary for the establishment, exercise or defence of legal claims.
  • Vital interests – the transfer is necessary to protect people’s vital interests (i.e. in a critical life or death situation) where the individual cannot legally or physically give their consent.

The ICO makes the point that most of the exemptions include the word ‘necessary’. The Regulator says this doesn’t mean the transfer has to be absolutely essential, but that it “must be more than just useful and standard practice”. An assessment needs to be made as to whether the transfer is objectively necessary and proportionate, and whether the objective could reasonably be achieved another way.

The regulatory guidance says exemptions, such as contractual necessity, are more likely to be proportionate for occasional transfers involving a low volume of data, where there is a low risk of harm when the data is transferred.

The above is not an exhaustive list of the exemptions; further details can be found here.

There is no getting away from it: international data transfers are a particularly complex and onerous area of data protection law! It pays to be familiar with the requirements and understand the potential risks.

Sometimes organisations will have little control over the terms under which they do business with others. For example, large technology providers might be unwilling to negotiate international transfer arrangements and will only proceed if you agree to their existing safeguards. A balance might need to be struck here between the necessity of entering the contract and the potential risks should restricted transfers not be adequately covered.

Life after cookies

March 2024

“The past is a foreign country: they do things differently there.”

I’m pretty certain that when LP Hartley wrote this wistful line, the changing world of advertising, data and privacy wasn’t foremost in his mind. However, five years from now, when all the current arguments surrounding the elimination of third-party cookies are long gone, that’s likely how we’ll view the universal use (and abuse) of a simple text file and the data it unlocked.

From one perspective, life after third-party cookies is very simple.

The majority of media is already transacted without third-party cookies. Whether by media type, first-party user preferences, device or regulatory mandate, lots of money already moves around without reference to third-party cookies. As the saying goes, “The future is already here – it’s just not very evenly distributed”.

That’s deliberately rather glib. Some sections of the media still rely upon third-party cookies and not every media owner has an obvious opportunity to build a first-party relationship with consumers. The advantages of an identifier that allows streamlining of experience for consumers whilst delivering audience targeting and optimisation for media owners and advertisers haven’t gone away.

When we look to life after third-party cookies, we need to understand the ways replacement identifiers have evolved to ameliorate the worst aspects of cookies, whilst leaving some advantages in place. One leader I interviewed on this topic back in 2020 said “It’s not the fault of the cookie, it’s what you did with the data” and that’s a useful measure to have in mind when looking at any alternative solutions.

Put very simply, the choices for a brand post the third-party cookie are:

  • Use a different identity approach
  • Buy into use of a walled/fenced garden toolset
  • Use another signal to match between media and audience that isn’t anchored directly to the user, such as contextual.

Alternative identity solutions

The advantage of these is that they come with some aspect of permissioning and consumer control – after the cookie arguments and much legislation in the UK, Europe and US, the industry has learnt these tools are critical. However, it remains a moot point whether consumers have much knowledge of the consent or legitimate interest options put in front of them – the ICO in the UK is currently clamping down on consent practices. More cookie action

Equally moot is whether the majority of consumers are really that bothered. Much consent gathering is viewed by both parties as an unwanted hurdle in a customer journey. The basic requirements for a consumer to know who has their data, for what purposes and for how long remain, but how to achieve the requisite communication and control is still a work in progress.

On a global scale, these identity solutions either revolve around a “daisy chain”, using a hashed email address as the ID link, or use a combination of device signals and other attributes to gain some certainty around individual identity. Any linkage built on a single identity variable risks being fractured by a single consent withdrawal.
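
To make the “daisy chain” idea concrete, here’s a minimal sketch in Python of how a hashed email address can become a shared pseudonymous identifier. The normalisation rules and the choice of SHA-256 are illustrative assumptions – each commercial identity solution defines its own – but the principle is the same: identical input produces an identical ID wherever it’s hashed.

```python
import hashlib

def email_to_id(email: str) -> str:
    """Turn an email address into a stable pseudonymous identifier.

    The trim/lower-case normalisation and SHA-256 are assumptions made
    for illustration; real identity solutions specify their own rules.
    """
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Two parties hashing the same address independently get the same ID,
# which is what lets records be "daisy chained" across datasets.
print(email_to_id("Jane.Doe@example.com"))
print(email_to_id("  jane.doe@example.com "))  # identical output
```

Note the fragility described above: hashing is pseudonymisation, not anonymisation, and if the person withdraws consent (or simply changes email address) that single variable, and every linkage built on it, is fractured.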

The solutions built on a combination of signals are potentially more durable because they are less dependent on any single signal as the anchor of their fidelity, but many device signals are controlled by browser or operating system vendors, who may obscure or withdraw access to them, as Apple has done in recent years.

Walled garden toolset

There is much discussion around Google’s Privacy Sandbox initiative. This is Google’s ambition to deliver some of the advantages of third-party cookies within the Chrome browser whilst not revealing individual-level data.

It’s been a much longer journey than envisaged when Google first made their announcement in 2020. Google’s commitment, made under the shadow of competition regulation, has been that they will not remove third-party cookies from the Chrome ecosystem until the UK competition regulator, the CMA, has approved their plans.

As of March 2024, those closely following the travails of Google, the CMA and the opinions tabled by the IAB Tech Lab (amongst others) would be hard pressed to give a cast-iron opinion that the current timescale will be met. Privacy and competitive advantage have become inextricably intertwined in these arguments, which is fair. However, slicing through this Gordian knot was probably not on the CMA’s or Google’s agenda when they signed up to this process. But that’s about timing, not a permanent stay of execution for the third-party cookie.

Non-user signals

The final approach is to use tools that do not rely on individual level signals. What an individual reads or consumes online says much about them – more than a century of classified advertising is testament to this.

The contextual solutions of 2024 are faster, smarter and better integrated than ever before. They have their downsides – closed-loop measurement is a significant challenge, hampering some of the campaign optimisations that became commonplace in the era of the third-party cookie. And they became commonplace because they were easy and universal; however, to paraphrase the aphorism, what was measured came to matter, when it should really be the other way round.

And here we come into the greatest change that is being ushered in by the gradual demise of third-party cookies. Measuring what actually matters.

In the late 2010s, when cookies were centre stage as the de facto identifier of choice in media and advertising, their invisible synchronisation gave almost universal, if imperfect, coverage. One simple solution, accessible to all.

As we enter 2024, many alternative identifiers struggle to get much beyond 30% coverage. Contextual solutions can deliver 100% coverage but have their own measurement challenges. This has driven greater interest in a combination of broad business- and commercial-objective-based approaches such as Marketing Mix Modelling (MMM), and attribution-based metrics where appropriate. Advances in data management and analysis have enabled MMM to deliver more frequent insights than the traditional annual deep dive, making it a core component of post-cookie media management.
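
As a rough illustration of what MMM involves (a sketch, not any vendor’s implementation), the toy example below regresses weekly sales on spend per channel using ordinary least squares. The channel names and all figures are invented; production models would add adstock, saturation and seasonality terms.

```python
import numpy as np

# Hypothetical weekly media spend (columns: TV, search, contextual)
# and sales figures -- all numbers invented for illustration.
spend = np.array([
    [120.0, 40.0, 10.0],
    [100.0, 55.0, 20.0],
    [ 80.0, 60.0, 25.0],
    [140.0, 30.0, 15.0],
    [ 90.0, 50.0, 30.0],
    [110.0, 45.0, 22.0],
])
sales = np.array([560.0, 540.0, 500.0, 580.0, 520.0, 550.0])

# Add an intercept column for baseline (non-media) sales, then fit
# sales ~ baseline + spend with ordinary least squares.
X = np.column_stack([np.ones(len(sales)), spend])
coeffs, *_ = np.linalg.lstsq(X, sales, rcond=None)

baseline, effects = coeffs[0], coeffs[1:]
print(f"baseline sales ≈ {baseline:.1f}")
for channel, effect in zip(["TV", "search", "contextual"], effects):
    print(f"{channel}: estimated sales per unit of spend ≈ {effect:.2f}")
```

Because it works from aggregate spend and outcome data rather than user-level identifiers, this style of modelling survives the loss of third-party cookies – which is precisely why it’s regaining prominence.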

Underpinning any and all of these solutions is the need for first-party data. Whether to build models for customer targeting, collaborate with media and other partners to access first-party data assets or measure more efficiently and effectively, having a structured, accessible and usable set of tools around first-party data is critical to working in the current landscape of solutions.

The growth of cloud storage solutions takes some of the burden away from making this a reality, but the applications used to understand and activate that data asset are many and various. Taking the time (and advice) to build understanding in this area is critical to prospering after the third-party cookie.

Life beyond the third-party cookie is far from fully defined.

Some of the longer-term privacy and competition elements are not that hard to envisage, but exactly how the next 24 months play out is much, much harder to predict. It’s still very much a work in progress, especially around measurement and optimisation. For the user of data in advertising and marketing it’s essentially “back to basics”.

Your customer data is more valuable than anyone else’s, so capture and hold it carefully. Test many things in a structured way because the future is about combinations. And know what matters to your business and work out how to measure it properly, not just easily.