Managing data transfers from the UK

February 2022

The new International Data Transfer Agreement (IDTA) and Addendum are a sensible evolution of the old SCCs

International Data Transfers – to recap

Whenever UK-based organisations transfer personal data to a third country outside the UK, they need to make sure the transfers are lawful, by ensuring that data security and the rights of individuals remain protected when data leaves the country.

Since the famous “Schrems II” ruling by the Court of Justice of the European Union in 2020, this activity has been thrown into disarray. To remind you, this is the ruling which invalidated the EU-US Privacy Shield and raised concerns about the use of EU Standard Contractual Clauses (SCCs) to protect transferred data. 

Soon after, the European Commission set to work updating the EU SCCs. These were drafted and enacted fairly swiftly, taking effect on 27th June 2021. 

What are the new EU SCCs?

The new EU SCCs were expanded to introduce more flexible scenarios: 

  • SCCs are now modular, meaning they can accommodate different scenarios: you pick the parts which relate to your particular situation.
  • The SCCs cover four different transfer scenarios, including processors:
    • Controller to controller
    • Controller to processor
    • Processor to controller
    • Processor to processor
  • More than two parties can accede to the SCCs, meaning additional controllers and processors can be added through the lifetime of the contract. This potentially reduces the administrative burden.

How did this affect the UK? 

On 28th June 2021, the UK’s adequacy decision was adopted. On 27th September 2021, the prior version of the SCCs could no longer be used for new contracts. 

In our webinar last year, it was obvious that everyone was confused. The situation caused by the “Schrems” ruling was compounded by the fact that Brexit had been completed. This meant the new SCCs approved in Europe could not be applied to transfers from the UK. The UK needed its own SCCs, but they did not exist. 

The ICO consultation

From August to October 2021, the ICO conducted a consultation to understand how a UK version of these rules should be enacted. Since the UK had been granted an adequacy decision by the EU, we all hoped it would be possible to mirror the SCC arrangements in UK law, thus reinstating the means by which we can lawfully export data to places such as the US. 

Anecdotally the resounding view was not to mess with the principles enshrined in the EU SCCs as it would simply add complexity to an already complex situation.

The ICO conclusion

In January 2022, the ICO published the International Data Transfer Agreement (IDTA) and the International Data Transfer Addendum to the EU Commission Standard Contractual Clauses. In effect, the EU’s standards have been adopted. 

What’s included in the Agreement and Addendum? 

    1. The International Data Transfer Agreement (IDTA) replaces the old EU SCCs, which were relied upon to provide the appropriate safeguards required under the UK GDPR for international data transfers from the UK. There are differences from the new EU SCCs – it is a single all-encompassing agreement that incorporates all the scenarios identified in the EU SCCs. Sections can be omitted and there is no requirement for it to be signed. It is most useful for those creating new data transfer agreements.
    2. The UK Addendum is a far simpler document. It is an addendum to the EU SCCs in which references to EU laws are replaced by references to UK laws. It allows businesses to use the EU SCCs for international data transfers not only from the EU but also from the UK. It is useful for those already using the EU SCCs who want a simple addendum to update the legal context. 

When does this come into force?

The IDTA was laid before Parliament on 2nd February 2022. It comes into force on 21st March 2022 if there are no objections. To all intents and purposes, it’s in force now. The Information Commissioner’s Office (ICO) has stated the IDTA and UK Addendum:

“are immediately of use to organisations transferring personal data outside of the UK, subject to the caveat that they come into force on 21 March 2022 and are awaiting Parliamentary approval”.

What does this all mean?

In practice, UK businesses can breathe a sigh of relief and get on with their lives. There is clarity at last. Existing agreements need to be updated with the UK Addendum and new ones can be put in place with the International Data Transfer Agreement. There will be an administrative burden, but businesses now know what they need to do.  Good sense has prevailed. 


Google Analytics Processing Data in US – is this a problem?

January 2022

Austrian DPA has found that continuous use of Google Analytics violates GDPR

Once again, Google is under fire from a regulator in Europe. This time in Austria. 

The European Center for Digital Rights (noyb), which is based in Austria and led by Max Schrems, filed 101 model complaints following the Schrems II decision in 2020. 

Following the complaint about Google Analytics, the Austrian regulator has determined that the continuous use of Google Analytics violates GDPR: 

“The Austrian Data Protection Authority (DSB) has decided on a model case by noyb that the continuous use of Google Analytics violates the GDPR. This is the first decision on the 101 model complaints filed by noyb in the wake of the so-called “Schrems II” decision. In 2020, the Court of Justice (CJEU) decided that the use of US providers violates the GDPR, as US surveillance laws require US providers like Google or Facebook to provide personal details to US authorities. Similar decisions are expected in other EU member states, as regulators have cooperated on these cases in an EDPB “task force”. It seems the Austrian DSB decision is the first to be issued.” Source: noyb

What does Google Analytics do?

Google Analytics operates by using cookies to capture information about website visitors. Google Analytics is free to use and it’s ideal for businesses who want to know more about:

  • Who visits their website
  • How their website is used
  • What’s popular on their website, and what’s not
  • Whether visitors return to their website

What information does Google capture?

You are likely to see a range of Google cookies that do different jobs. Here’s a short list showing some possible cookies that might be used:

  • _ga: Used to distinguish users; retained for 2 years
  • _gid: Used to distinguish users; retained for 24 hours
  • _gat: Used to throttle request rate; retained for 1 minute
  • AMP_TOKEN: Contains a token that can be used to retrieve a Client ID from the AMP Client ID service; retained from 30 seconds to 1 year
  • _gac_<property-id>: Contains campaign-related data for the user, used when Google Analytics and Google Ads are connected; retained for 90 days

These cookies range from simple identification cookies to remarketing and advertising cookies, which allow you to track and remarket to individuals through Google Ads. The more one strays into using this data for remarketing, the more intrusive the data capture becomes. 
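To make the identification side of this concrete, here is a short Python sketch that pulls the pseudonymous client ID out of a typical `_ga` cookie value. The `GA1.2.<random>.<timestamp>` layout is assumed from Google’s public cookie documentation; this is an illustration, not Google’s own code:

```python
def ga_client_id(ga_cookie: str) -> str:
    # A _ga cookie value typically looks like "GA1.2.1234567890.1609459200":
    # a version marker, a domain-depth component, then a random ID and a
    # first-visit timestamp. The final two fields together form the
    # pseudonymous client ID that Google Analytics uses to recognise a
    # returning browser.
    parts = ga_cookie.split(".")
    if len(parts) < 4:
        raise ValueError("unexpected _ga cookie format")
    return ".".join(parts[-2:])

print(ga_client_id("GA1.2.1234567890.1609459200"))  # → 1234567890.1609459200
```

Even this “simple” identifier is enough to single out a device over two years, which is why regulators treat it as personal data.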

What does this mean in reality?

Since the advent of GDPR, the burden to demonstrate that consent has been freely given has become greater. 

In the UK, when the ICO published their cookie (and other technologies) guidance in 2019, many large websites became instantly non-compliant. The requirement to demonstrate that consent had been freely given had become stronger. 

The ICO also clearly highlighted that Performance Cookies (such as Google Analytics) required consent to be used. 

Since 2019, companies have used a variety of methods to notify users about the existence of Google Analytics cookies. Some compliant, some less so. 

It is also clear that many have taken a risk-based approach to what they should do. The ICO’s own guidance provides a level of ambiguity on the topic:

The ICO cannot exclude the possibility of formal action in any area. However, it is unlikely that priority for any formal action would be given to uses of cookies where there is a low level of intrusiveness and low risk of harm to individuals. The ICO will consider whether you can demonstrate that you have done everything you can to clearly inform users about the cookies in question and to provide them with clear details of how to make choices. Source: ICO

What are the issues?

  1. Google is a data processor unless you enable data sharing with Google Ads, at which point Google also acts as a controller of the shared data – ensuring that your privacy policies reflect these differing relationships is important. 
  2. Google stores most data in the US – since Privacy Shield was invalidated, this has presented some problems. Google is relying on SCCs, but the main concern is that the US has surveillance laws requiring companies such as Google to provide US intelligence agencies with access to their data. 
  3. Google does use data to improve its services. For a user, this can sometimes seem creepy. 

What could Google or US government do?

A rather obvious solution would be for Google to move the processing of EU data out of the US to data centres in Europe, where the US government cannot exercise the same surveillance rights as in the US. 

Alternatively, the US government could introduce better protection for private citizens. Although this was unthinkable under the previous administration, it may be conceivable under Biden/Harris. It still feels like a long shot. 

Realistically, it’s quicker for the Googles of this world to set up data centres in Europe. SaaS providers such as Salesforce addressed this issue years ago, and it feels like it’s about time Google and Facebook did too. 

What should you do? 

  1. Make sure you have correctly set up your cookie banner on your website. Technically, visitors should opt in to Google Analytics and this permission should be captured before any processing takes place.
  2. Provide a clear explanation of what data you are collecting and what that data is used for, in an accessible cookie notice supported by a coherent privacy policy. 
  3. Make sure you describe all the Google cookies you are using – from simple tracking through to remarketing and advertising. Ideally each cookie would be listed, including its technical details, duration and purpose.
  4. If you use Google Analytics a number of settings have been introduced that help protect privacy:
    • Turn on the IP anonymisation setting. It sets the final octet of an IPv4 address (and the last 80 bits of an IPv6 address) to zero, so the full address is never stored. 
    • Make use of the data deletion tool – note this is a bulk delete tool and can’t be used to delete a single user’s data
    • Introduce data retention policies – by default data is deleted after 26 months, but maybe you can delete data sooner. 
    • Consider the use of alternative tracking tools that do not rely on the use of cookies or transferring data overseas. A quick search resulted in a non-exhaustive list of analytics tools that don’t rely on cookies. There will be other suppliers: 
      • Fathom
      • Plausible
      • Simple Analytics
      • Insights
      • Matomo
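
As a footnote on the IP anonymisation setting above, the truncation it performs can be sketched in a few lines of Python. This mirrors Google’s documented behaviour of zeroing the last octet of IPv4 addresses and the last 80 bits of IPv6 addresses; it is an illustration of the idea, not Google’s actual implementation:

```python
import ipaddress

def anonymise_ip(addr: str) -> str:
    # Zero the low-order bits of the address, as Google Analytics'
    # IP anonymisation does: keep only the first 24 bits of an IPv4
    # address, or the first 48 bits of an IPv6 address.
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymise_ip("203.0.113.195"))          # → 203.0.113.0
print(anonymise_ip("2001:db8:1234:5678::1"))  # → 2001:db8:1234::
```

Note the truncated address still identifies a network of up to 256 hosts, so this reduces precision rather than guaranteeing anonymity – which is partly why the Austrian DSB was unpersuaded by it.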

In conclusion

  • At the moment, this finding by the Austrian DPA does not apply in the UK. However, it’s possible other DPAs may follow suit. 
  • Having said that, there are plenty of lessons to learn about how to work with Google Analytics and other US-based companies that insist on holding data in the US
  • It’s essential that your cookie notice and privacy policy clearly set out what tools are being used and what data is being processed. This is particularly important if you are linking Google Analytics to Google Ads for remarketing. 
  • Given that the world is slowly turning against cookies, maybe now is the time to start looking at less intrusive performance tracking solutions. 


ICO Opinion on Ad Tech – Old wine in a new bottle?

December 2021

Does the ICO Opinion piece tell us anything new?

The ICO has published an “Opinion” which can be interpreted as a shot across the bows for any Ad Tech company that is planning to launch new targeting solutions for the post-third-party-cookie world. 

If these companies thought new targeting solutions would be waved through because they don’t involve third-party cookies, Google’s difficulties with its Sandbox solution say otherwise. 

Google is currently knee-deep in discussions with both the Competition and Markets Authority (CMA) and the ICO to come up with a targeting solution that is fair to consumers whilst also avoiding the accusation of being anti-competitive. 

In its opinion piece, the ICO sets out clear parameters for developing these solutions in a privacy-friendly manner. You won’t be too surprised to hear all the usual concerns being re-heated in this discussion. To quote the ICO:

  1. Engineer data protection requirements by default into the design of the initiative
  2. Offer users the choice of receiving adverts without tracking, profiling, or targeting based on personal data. 
  3. Be transparent about how and why personal data is processed across the ecosystem and who is responsible for that processing
  4. Articulate the specific purposes for processing personal data and demonstrate how this is fair, lawful, and transparent
  5. Address existing privacy risks and mitigate any new privacy risks that the proposals introduce

This opinion piece is the latest publication from the ICO in a relatively long-running piece of work on the use of cookies and similar technologies for the processing of personal data in online advertising. In its original report in 2019, the ICO reported a wide range of concerns which needed to be rectified, covering:

  • Legal requirements on cookie use;
  • Lawfulness, fairness, and transparency;
  • Security;
  • Controllership arrangements;
  • Data retention;
  • Risk assessments; and
  • Application of data protection by design principles. 

You can read the back story here

The state of play in 2021

Since the ICO started its investigations in 2019, the market has continued to develop new ways of targeting advertising that do not rely on third-party cookies. The net result is that the world has moved towards less intrusive ways of tracking, which has been welcomed by the ICO. Some examples include: 

  • Following Google Chrome’s announcement regarding third-party cookies, there is an expectation that they will be phased out by the end of 2022. 
  • There have been increases in the transparency of online tracking – notably Apple’s “App Tracking Transparency” (ATT)
  • New mechanisms are being developed to help individuals indicate their privacy preferences simply and effectively
  • Browser developers are introducing tracking prevention in their software. A notable example is the Google Privacy Sandbox, which will enable targeting with alternative technologies.

How should we interpret this opinion piece?

A lot of what has been included is information from the 2019 reports. In effect, it’s a summary of previous activities plus additional material to bring you up to date. Although it is a rather long piece, there is some clear guidance for the way forward for developers of new solutions. 

Furthermore, it is bluntly warning technology firms that they are in the ICO’s sights: 

“In general, the Commissioner’s view is that these developments are not yet sufficiently mature to assess in detail. They have not shown how they demonstrate participants’ compliance with the law, or how they result in better data protection outcomes compared to the existing ecosystem.” Source: ICO

Data protection by design is paramount – no excuses for non-compliance this time

The ICO opinion clearly flags to developers that they will accept no excuses for developing non-compliant solutions. In the past, there have been difficulties because the Ad Tech solutions have been in place for some time with the data protection guidance being retrofitted to an existing ecosystem. 

With the demise of third-party cookies and the advent of a variety of new solutions, there can be no excuse for failing to engineer privacy into the design of those solutions. 

It explicitly highlights the need to respect the interests, rights, and freedoms of individuals. Developers need to evidence that these considerations have been taken into account.  

Users must be given a real choice

In the first instance, users must be given the ability to receive adverts without tracking, profiling, or targeting based on personal data. There must be meaningful control and developers must demonstrate that there is user choice through the data lifecycle. 

Accountability – show your homework

There is an expectation that there will be transparency around how and why personal data is processed and who is responsible for that processing. In the current ecosystem, this is largely impossible to achieve and there is no transparency across the supply chain. 

Articulate the purpose of processing data

Each new solution should describe the purpose of processing personal data and demonstrate how this is fair, lawful, and transparent. Can suppliers assess the necessity and proportionality of this processing? The 2019 report highlighted that the processing appeared excessive relative to the outcomes achieved. How will processors change their ways? 

Addressing risk and reducing harm

As a start, it’s important to articulate the privacy risks, likely through a DPIA, but also to explain how those risks will be mitigated. Previous ICO reports indicated disappointment with the low volume of DPIAs produced by Ad Tech providers. This needs to change. 

To conclude with a useful developer checklist

The ICO provides a checklist of how to apply these principles in practice. You can probably jump to this section if you really want to know what is expected: 

  1. Demonstrate and explain the design choices.
  2. Be fair and transparent about the benefits.
  3. Minimise data collection and further processing.
  4. Protect users and give them meaningful control.
  5. Embed the principle of necessity and proportionality.
  6. Maintain lawfulness, risk assessments, and information rights.
  7. Consider the use of special category data.

The ICO is very clear that the industry must change. There is no appetite to approve solutions that fundamentally adopt the same flawed ways of working. There is also a clear acknowledgment that some solutions are potentially anti-competitive so a partnership with the CMA will continue. You have been warned!

Personal Data Breaches: Can ‘over-reporting’ be curtailed?

November 2021

The Information Commissioner’s Office has said organisations are over-reporting data breaches. One proposal discussed in the UK Government’s consultation on data reform aims to tackle this issue by raising the threshold for when organisations need to report a personal data breach.

Is this a good idea or not?

The number of reported breaches jumped dramatically after GDPR came into effect back in 2018, quadrupling the figures. Pre-GDPR, the ICO would receive around 3,000 notifications a year. Post-GDPR, it rose to more than 3,000 a quarter (2018/19).

You might argue this wasn’t surprising and no bad thing.

GDPR tightened rules around breach reporting, with increased potential penalties for non-compliance. The rise in reporting might suggest companies were taking heed of the legislation and holding their hands up to their mistakes.

Since then the figures have come down to around 2,300 a quarter (July – September 2021).

These are still sizeable figures; the ICO is clearly overwhelmed and has specifically highlighted that some organisations are reporting breaches when they don’t need to.

It’s worth noting most reported breaches aren’t investigated (one would hope because they aren’t serious enough); just 20% result in an investigation. Even then, not all investigations lead to enforcement action.

The UK is not alone; the European Data Protection Board (EDPB) says many supervisory authorities across Europe have experienced over-reporting too.

With this in mind, does the law need changing… or does the problem lie with our reporting habits?

Current data breach reporting obligations

At present, organisations must report a personal data breach unless it is ‘unlikely’ to result in a ‘risk’ to the rights and freedoms of natural persons.

The key to assessing whether or not to report to the ICO lies in the supplementary guidance published by the UK regulator and, at a European level, by the European Data Protection Board (previously the Article 29 Working Party).

In broad terms, the ICO tells us we need to assess the potential adverse consequences of a breach for individuals, basing this on how serious these are and how likely they are to happen.

There is also helpful guidance specifically aimed at small businesses, which includes examples of incidents that would need to be reported and ones which wouldn’t.

The ICO points us towards EDPB guidance, which expands on how to assess the risks and the consequences we should consider, such as discrimination, identity theft or fraud, financial loss or reputational damage.

Proposal to revise the data breach reporting threshold

A reading of the UK data reform consultation reveals the Government considers the current threshold too low, and proposes raising it.

It also suggests current over-reporting is likely to be driven by organisations fearing the financial and/or reputation repercussions should they be found to have failed to comply with the obligation to report breaches.

This ‘better safe than sorry’ approach, the Government believes, is partly responsible for the significant spike in reporting since GDPR was introduced.

The idea, then, is to change the law so organisations must report a breach ‘unless the risk to individuals is not material’ – so organisations would need to consider materiality when deciding whether to report or not.

The ICO would be encouraged to provide new guidance on what would constitute ‘non-material’ risk, along with examples of what kinds of incident would be reportable and which wouldn’t.

Will this make a difference?

Many organisations are likely to welcome the threshold for reporting being higher. In our recent survey it was one of the most popular reform proposals.

Such a move could potentially both save organisations time, energy and costs, as well as easing the burden on the ICO.

However, in practice, organisations will still be required to assess what might be ‘non-material’ and will still be under the time pressure of having to notify a reportable breach within 72 hours of becoming aware of it.
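The 72-hour clock itself is simple arithmetic, but it is worth being precise that it runs from the point of *awareness*, not from the incident itself. A trivial Python sketch (the function name is illustrative, not from any official tooling):

```python
from datetime import datetime, timedelta, timezone

def ico_notification_deadline(became_aware: datetime) -> datetime:
    # UK GDPR Article 33: a notifiable breach must be reported to the
    # ICO without undue delay and, where feasible, within 72 hours of
    # the organisation becoming aware of it.
    return became_aware + timedelta(hours=72)

aware = datetime(2021, 11, 1, 9, 30, tzinfo=timezone.utc)
print(ico_notification_deadline(aware))  # → 2021-11-04 09:30:00+00:00
```

Three days is not long to investigate, assess materiality and draft a notification, which is precisely the time pressure described above.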

Is there a danger one type of assessment will just be replaced with another, and businesses will still ‘err on the side of caution’, reporting anyway because they’re under the clock?

Whatever form the assessment takes, organisations will still need to be able to justify any decision not to report.

This also doesn’t necessarily address the issue of organisations reporting because they fear the consequences of failing to comply with the obligation to report breaches. There will still be an obligation to report, and within the same timescale.

I wonder if part of the problem is one of culture and perception. Does there need to be more assurance given to organisations? If they’ve acted in good faith, but are still deemed to have got it wrong, how will that impact on penalties for non-reporting?

There’s a difference between honest mistakes by organisations trying their best, and those who ignore the rules to save time and money.

How the courts are handling data breach claims…

A recent case provides some useful insights into how UK courts deal with claims relating to data breaches. Especially ones where, on the face of it, any risk to individuals seems negligible.

In the High Court case of Rolfe & Ors v Veal Wasbrough Vizards, the defendants were lawyers representing a private school. The case centred on an email regarding outstanding fees incorrectly sent to the wrong recipient. The person who received it immediately highlighted the error and confirmed they’d deleted it.

Nonetheless, the people who should have received the email brought a claim for misuse of confidential information, breach of confidence, negligence and damages under data protection law.

In a clear case of common sense jurisprudence, the Court found no credible case that distress or damage could be proved. It found the claim to be ‘plainly exaggerated’ and the suggestion that the Claimants could have suffered distress or worry was ‘frankly an implausible suggestion’ in the case of a single breach which was quickly remedied.

This case should offer a level of comfort to organisations, should they face low-level data breach claims (possibly facilitated by legal companies chasing post-GDPR data breach claims).

It also reinforces the fact that the ICO doesn’t need to be troubled with minor incidents, which may fall under the definition of a personal data breach, but are highly unlikely to have adverse consequences.

As the saying goes, de minimis non curat lex – ‘the law does not concern itself with trifles’.

ICO says most public sector messages are not direct marketing

August 2021

One of the unwelcome side effects of the pandemic has been the proliferation of bogus emails and texts trying to illegally elicit personal data from us.

I speak with my elderly mother almost daily, repeating the same lines: ‘don’t click on the link’, ‘don’t respond if someone is asking you to enter your details’, ‘hang up’, ‘delete it’, ‘you haven’t ordered a package, please ignore it’.

However, we’ve also all received other communications which I feel have been largely helpful. Messages such as pandemic update emails from our local councils, notifications about vaccines from our GPs, and text messages about the NHS app.

But would some of these be regarded as direct marketing messages? Did some contravene the rules under PECR (the Privacy and Electronic Communications Regulations)?

Possibly, perhaps in some cases definitely (under existing guidance). But does it matter? Surely, there’s an argument to say some communications may not be strictly necessary but are informative and useful, and don’t unduly impact on our privacy.

This is clearly an area the ICO felt needed addressing. The Regulator has issued new guidance, which appears to alter the long-standing interpretation of direct marketing.

What does the new guidance say?

The ICO says public sector organisations can send ‘promotional’ messages which would not be classed as direct marketing, if they are necessary for a public task or function.

This is significant. ‘Promotional’ messages have always been considered as ‘direct marketing’ before, regardless of whether they are sent by commercial companies, not-for-profits or the public sector.

It also means, in the eyes of the Regulator, such public sector ‘promotional’ emails, SMS messages and telephone calls do not fall within the scope of the UK’s Privacy and Electronic Communications Regulations (PECR).

In a blog announcing the new guidance the ICO states:

“Any sector or type of organisation is capable of engaging in direct marketing. However the majority of messages that public authorities send to individuals are unlikely to constitute direct marketing.”

Anthony Luhman, ICO Director, goes on to say:

“Our new guidance will help you understand how to send promotional messages in compliance with the law. Done properly the public should have trust and confidence in promotional messaging from the public sector.”

As said, until now any ‘promotional’ message was considered direct marketing. So this new guidance raises some questions:

  • Has the long-standing interpretation of the definition of direct marketing been changed?
  • Is this a sensible new interpretation?
  • Will this open the floodgates to us being spammed by public authorities?

What is the definition of ‘direct marketing’?

The definition is broad. Under section 122(5) of the DPA 2018 the term ‘direct marketing’ means “the communication (by whatever means) of advertising or marketing material which is directed to particular individuals”.

A definition which also applies for PECR.

What exactly is meant by ‘advertising or marketing material’ is not clarified in the DPA 2018 or PECR, but the long-standing interpretation of this has been that it is not limited to commercial marketing and includes any material which promotes ‘aims and ideals’.

This interpretation is clear in the ICO’s Direct Marketing Guidance and, more recently, in the draft Direct Marketing Code, published in January 2020, which says of direct marketing:

“It is interpreted widely and covers any advertising or marketing material, not just commercial marketing. For example it includes the promotion of aims and ideals as well as advertising goods or services. This wide interpretation acknowledges that unwanted, and in some cases nuisance, direct marketing is not always limited to commercial marketing.”

When is a promotional public sector message not direct marketing?

In a nutshell, the new guidance states:

  • If you’re a public authority and your promotional messages are necessary for your public task or function, these messages are not direct marketing
  • If your messages by telephone, text or SMS are not direct marketing, you don’t need to comply with PECR. (But you still need to comply with UK GDPR).

The ICO is now drawing a distinction between promotional messages necessary to fulfil a public task or function, as opposed to messages from public authorities promoting services which a user pays for (such as leisure facilities) or fundraising activities. The latter would still be considered direct marketing.

The new guidance provides the following interpretation:

“In many cases public sector promotions to individuals are unlikely to count as direct marketing. This is because promotional messages that are necessary for your task or functions do not constitute direct marketing. We do not consider public functions specified by law to count as an organisation’s aims or ideals.”

This is in marked contrast to the wording of the draft Direct Marketing Code which says:

“If, as a public body, you use marketing or advertising methods to promote your interests, you must comply with the direct marketing rules.”

What types of messages are direct marketing and which aren’t?

The following examples are given of the types of promotional content a public authority might communicate which would NOT constitute direct marketing:

  • new public services
  • online portals
  • helplines
  • guidance resources

The ICO says promotional messages likely to be classed as direct marketing include:

  • fundraising; or
  • advertising services offered on a quasi-commercial basis or for which there is a charge (unless these are service messages as part of the service to the individual)

How do you decide if messages are necessary for public task or function?

The ICO says it accepts all public authorities will have what it describes as ‘incidental powers’ to promote their services and engage with the public.

It therefore says it is not necessary for a public authority to identify an ‘explicit statutory function’ to engage in promotional activity which is deemed ‘necessary’ for a task or function.

However, the ICO does stipulate you can’t simply declare a direct marketing message is no longer direct marketing because public task has been stated as the lawful basis.

Nor can you simply decree a promotional message is ‘in the public interest’; this won’t automatically mean it isn’t direct marketing.

What the Regulator expects is for public authorities to identify a relevant task or function for the communication they wish to send.

There’s a risk here the ICO has not been clear enough. This could cause confusion and I suspect plenty of deliberation over which messages are or are not direct marketing.


It’s made clear that even if you determine certain promotional messages are not direct marketing, this doesn’t mean you can ignore other basic data protection principles.

You still need to make sure people know what you are doing with their personal data, and this must be within their reasonable expectations.

In other words, public authorities must make it clear to people they intend to send promotional messages which are necessary for a public task or function, which may mean updating their privacy notices.

Right to object

People have an absolute right to object to direct marketing, but they also have a general right under data protection law to object to processing, including when organisations are relying on the lawful basis of public task. It’s a right people should be made aware of.

The guidance makes it clear – if someone objects to a promotional message from a public authority, it will only be possible to continue sending messages if ‘compelling legitimate grounds’ to do so can be demonstrated.

The ICO makes the point it would be difficult to justify continuing to send unwanted promotional messages if this goes against someone’s wishes.

My advice would be to include a clear ability to opt out on any promotional message; that is, any message which isn’t an essential service message.

(Albeit, this could cause some configuration issues for public authorities who don’t have sophisticated systems which can distinguish between different types of messages and opt-outs).

Lawful basis for promotional non-marketing messages

The ICO points to two lawful bases under UK GDPR for sending promotional messages necessary for a public task or function: public task or consent.

The guidance suggests that just because you can rely on public task doesn’t mean you shouldn’t consider consent, which may be more appropriate for public trust reasons.

The ICO accepts that public authorities may be reluctant to rely on consent, due to a potential imbalance of power, but says it may be appropriate if the individual has a genuine free choice to give or refuse consent to promotional messages.

A change in interpretation

This new guidance certainly seems to represent a marked change in the ICO’s previous interpretation of direct marketing.

It’s interesting to note the following pertinent examples which are present in the draft Direct Marketing Code (which I suspect may be altered in the final version).


Scenario A
A GP sends the following text message to a patient: ‘Our records show you are due for x screening, please call the surgery on 12345678 to make an appointment.’
As this is neutrally worded and relates to the patient’s care it is not a direct marketing message but rather a service message.

Scenario B
A GP sends the following text message to a patient: ‘Our flu clinic is now open. If you would like a flu vaccination please call the surgery on 12345678 to make an appointment.’

This is more likely to be considered to be direct marketing because it does not relate to the patient’s specific care but rather to a general service that is available.

It seems to me that, under the new guidance, Scenario B could be classed as a promotional message, but NOT direct marketing.

(Personally, I would never have complained about Scenario B; it’s a helpful, informative message and hardly in the realms of untargeted nuisance spam.)

The draft Code goes on to confirm the following would be direct marketing:

  • a GP sending text messages to patients inviting them to a healthy eating event;
  • a regulator sending out emails promoting its annual report launch;
  • a local authority sending out an e-newsletter update on the work they are doing; and
  • a government body sending personally addressed post promoting a health and safety campaign they are running.

The specific examples from the draft Code were used by people to question whether some of the messages they received during the pandemic contravened PECR.

Would these types of communications now no longer be direct marketing?

It would certainly seem like they aren’t if you go by the clear message from the ICO that ‘the majority of messages that public authorities send to individuals are unlikely to constitute direct marketing.’

Will the above examples disappear from the final Direct Marketing Code?

In summary

This new guidance is likely to be welcomed by some who have been frustrated, or indeed bewildered, that their communications could be considered direct marketing.

However, it could also muddy the waters. It leaves the public sector needing to clearly define different types of communications and make sure relevant teams are adequately briefed to understand the difference.

As I see it, there are three types of communication:

a) Service messages – essential messages relating to the provision of a service
b) Promotional messages for public task or function (which are highly likely to need an opt-out)
c) Direct marketing messages (must have an opt-out to honour the individual’s absolute right to object).
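For public authorities configuring their messaging systems, the three-way split above could be modelled explicitly, so that opt-out handling follows from the message type rather than ad-hoc decisions. Here is a minimal Python sketch; the type names and the opt-out rule are my own illustration, not taken from the guidance:

```python
from enum import Enum, auto

class MessageType(Enum):
    SERVICE = auto()      # essential messages relating to the provision of a service
    PROMOTIONAL = auto()  # promotional messages necessary for a public task or function
    MARKETING = auto()    # direct marketing messages

def requires_opt_out(message_type: MessageType) -> bool:
    """Only essential service messages can safely omit an opt-out;
    promotional and marketing messages should both carry one."""
    return message_type is not MessageType.SERVICE

print(requires_opt_out(MessageType.SERVICE))      # False
print(requires_opt_out(MessageType.PROMOTIONAL))  # True
```

A system configured this way would also go some way towards the distinction problem mentioned above, since each outgoing message must be classified before it is sent.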

I just wonder whether the term ‘promotional messages’ could have been avoided in this guidance. I am not sure I have a satisfactory alternative, but perhaps something like ‘information messages’ – i.e. messages that are not essential service messages but provide helpful information.

I also wonder whether there could have been a carve out for important health-related messages, rather than applying this new interpretation to any ‘promotional’ message from any public authority.

Let’s hope the public sector now pays due care and attention to transparency, provides an opt-out to all but essential messages, and doesn’t abuse this new-found power to engage with us beyond what is actually necessary.


Need advice on complying with the direct marketing rules? Do your people need refresher training? Our experienced team can help you navigate GDPR, PECR and regulatory guidance. CONTACT US.


Artificial Intelligence – helping businesses address the privacy risks

August 2021

The use of artificial intelligence (AI) is increasing at great pace, driving valuable new benefits across all areas of business and society. We see its applications expanding across many areas of our daily lives, from social media through to self-driving and self-parking cars, and medical applications.

However, as with any new technology, there can be challenges too. How can we be sure we are protecting people from risk and potential harm when processing their personal data within AI systems?

As with any other use of personal data, businesses need to ensure they comply with core data protection principles when designing, developing or productionising AI systems which use personal data.

You may recall in April 2021, the European Commission published a proposal for a new regulation harmonising the rules governing artificial intelligence.

The regulation of AI is a tricky balancing act. On the one hand there’s the desire not to hinder research and development, or the adoption of new technologies which bring increasing societal benefits – but those exciting opportunities must be weighed against the need to protect individuals from any inherent risks.

So how can we strike the right balance?

AI privacy ‘toolkit’

The ICO have published an improved ‘beta’ version of their AI toolkit, which aims to help organisations using AI to better understand and assess data protection risks.

It’s targeted at two main audiences: those with a compliance focus, such as DPOs, general counsel, risk managers and senior management; alongside technology specialists, such as AI/ML developers, data scientists, software developers and engineers, and cybersecurity and IT risk managers.

So what is the toolkit?

It’s an Excel spreadsheet which maps key stages of the AI lifecycle against the data protection principles, highlighting relevant risks and giving practical steps you can take to assess, manage and mitigate risks.

It also provides suggestions on technical and organisational measures which could be adopted to tackle any risks. The toolkit focuses on four key stages of the AI lifecycle:

  • Business requirements and design
  • Data acquisition and preparation
  • Training and testing
  • Deployment and monitoring

The ICO have quite rightly recognised that the development of AI systems is not always a linear journey from A to B to C. One stage does not necessarily flow straight into another.

Therefore it will often be best to take a holistic approach and recognise you won’t have all the information available for assessment at ‘product definition’ stage. The engagement for a DPO (or other privacy champion) will need to stretch across all stages of the AI lifecycle.

What kinds of risk are highlighted?

Quite a few actually, including:

  • Failure to adequately handle the rights of individuals
  • Failure to choose an appropriate lawful basis for the different stages of development
  • Issues with training data which could lead to negative impacts on individuals – such as discrimination, financial loss or other significant economic or social disadvantages
  • Lack of transparency regarding the processes, services and decisions made using AI
  • Unauthorised / unlawful processing, accidental loss, destruction or damage to personal data
  • Excessive collection or use of personal data
  • Lack of accountability or governance over the use of AI and the outcomes it gives
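A team working through the toolkit might record which of these risks bite at which lifecycle stage in a simple structure, to seed their own risk register. The grouping below is my own illustrative reading of the risks listed above, not the ICO’s official mapping:

```python
# Illustrative only: pairing the toolkit's four lifecycle stages with
# example risks from the list above. The assignments are assumptions.
AI_LIFECYCLE_RISKS = {
    "Business requirements and design": [
        "Lack of accountability or governance over the use of AI",
        "Failure to choose an appropriate lawful basis",
    ],
    "Data acquisition and preparation": [
        "Excessive collection or use of personal data",
        "Issues with training data leading to discrimination",
    ],
    "Training and testing": [
        "Unauthorised or unlawful processing, accidental loss or damage",
    ],
    "Deployment and monitoring": [
        "Lack of transparency over decisions made using AI",
        "Failure to adequately handle the rights of individuals",
    ],
}

# A DPO's review checklist could then be generated per stage:
for stage, risks in AI_LIFECYCLE_RISKS.items():
    print(f"{stage}: {len(risks)} risk(s) to assess")
```

Because the lifecycle isn’t linear (as the ICO notes below), the same structure can be revisited at every stage rather than worked through once.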

AI has become a real focus area for the ICO of late. The toolkit follows on the heels of their Guidance on AI and Data Protection, and their co-badged guidance with The Alan Turing Institute on Explaining Decisions Made With AI. This is all connected with their commitment to enable good data protection practice in AI.

In summary

The use of AI is exciting and presents many opportunities and potential benefits, but it’s clearly not without its risks. There’s more and more guidance emerging to help organisations adopt or expand their use of AI. The clear message from the Regulator is that this activity must be handled carefully and data protection must be considered from the outset.

The ICO is keen to work with businesses to make sure its guidance is useful for organisations, so it can continue to support the increasing use of AI.


Is this all going in the right direction? We’d be delighted to hear your thoughts. Alternatively if you’d like data protection advice when designing and developing with AI, we can help. CONTACT US.


Will the new Information Commissioner be able to fend off the critics?

July 2021

Another Commonwealth candidate – this time from New Zealand – has emerged as favourite to replace Elizabeth Denham when her tenure at the helm of the ICO ends this Autumn.

This marks a trend for Anglosphere figures winning top public appointments in the UK, including former Governor of the Bank of England, Mark Carney. Carney, like Denham, is Canadian.

John Edwards, currently New Zealand’s Privacy Commissioner, has reportedly been recommended to replace Denham, subject to approval from the Prime Minister. Edwards has so far declined to comment on his potential appointment.

Already dubbed ‘Facebook-hating’ by ‘The Times’, Edwards has been a vocal critic of social media companies. He gave Facebook a fierce dressing-down after the Christchurch mosque massacre in 2019, which was livestreamed on the platform.

News that Edwards is tipped to be the next Commissioner seems to have been released following harsh criticism of Number 10’s handling of the appointment.

The Department for Digital, Culture, Media and Sport (DCMS) Committee was meant to start hearings to appoint a new commissioner on 8th July, but these were postponed twice.

Just last week, the DCMS Committee chairman, Julian Knight, commented: “We understand that despite processes running well, delays centre on Number 10. This mishandling calls into question decision-making at the top of Government.”

Denham’s replacement needs to be in place and ready to start work in November, so the clock’s ticking.

Data Sheriff, or Data Stooge?

The appointment process has been criticised from the start. The original job description, posted in February, didn’t even mention candidates should have experience in regulating data protection.

The advert also indicated the new Commissioner would need to play a ‘key role’ in supporting the rollout of the Government’s controversial National Data Strategy.

This led to fears the Government was seeking a malleable stooge rather than an honest broker. I think it’s fair to say these fears would appear to be unfounded if John Edwards is appointed.

Still, is it possible the bumpy appointment process damaged perceptions of the role? Is this why the Government has looked further afield yet again? Is there no senior talent in the UK who wants to take the job on?

ICO under fire

Regulators in any field occasionally find themselves in the position of upsetting everybody, especially in a high-value, high-impact area of business like data.

Denham has not been without her critics. The ICO stands accused of lacking teeth when it comes to dealing with data behemoths like Facebook and Google.

A quick glance at LinkedIn reveals no shortage of criticism from professionals, some suggesting the ICO, like other bodies (yes, we’re looking at you, HMRC), tends to go after ‘low-hanging fruit’. Are they more comfortable issuing fines for breaches of the marketing rules than for breaches of GDPR?

Conversely, the ICO also faces criticism that it has failed to deliver on its bread-and-butter work. Should the regulator focus only on the big picture, or should it focus on data protection compliance at company level?

It’s a difficult balance to get right.

Part of the problem may be the ICO’s reactive nature; it only appears to investigate when there’s a breach or a significant complaint. Should this change?

Earlier this year the ICO announced it was resuming its ad tech investigations (paused during the pandemic). Work is said to include ‘a series of audits focusing on data management platforms’. Does this represent a more proactive stance or not? We await the outcome.

Other DPAs in the EU would certainly appear to take a more proactive approach, for example back in 2019 the Dutch DPA carried out an audit of approximately 175 websites in various sectors to check their compliance with the requirements for tracking cookies.

Meanwhile, the ICO has been credited with publishing a new Children’s Code, which comes into force in September. But how will this be enforced?

After all the hype of GDPR, businesses may have settled back into feeling no one will ever come after them.

To be fair to the ICO, the task was huge, even before the impact of a global pandemic. It’s common knowledge the backlog caseworkers face is substantial.

There are also claims the ICO is underfunded and under-resourced. However, a Deloitte report on Data Protection Authorities in 2019 showed the UK Regulator to be better funded than its EU counterparts.

It’s likely to also be significant that of the ICO’s 680 or so staff, only a tiny fraction are in the investigations team.

The challenge ahead for the next Commissioner

Data protection divergence?

Along with walking the tightrope of balancing resources, tackling the big issues whilst not ignoring the bread-and-butter, the new Commissioner will have to tiptoe through the minefield that is Brexit and alignment with the EU.

We’ve already heard more than murmurings about a desire, in some areas of Government, to ditch GDPR and create a more innovation-friendly data protection environment. Cut the EU red tape!

My money is on the Government trying to loosen GDPR’s regulatory grip on some areas of technology deemed high-value and high-profit as we start leaving the EU’s orbit.

With all this in mind, how influential will the new Commissioner be?

Covid Passports and NHS App

The pandemic has led to a huge surge in the collection and use of health data, another sensitive conundrum for legislators to tackle.

The current Commissioner has warned the Government in an interview with the Telegraph that the ICO will be alert to any mission creep with the NHS covid app.

She said: “We will be watching the evolution of the app very carefully. My modus operandi has always been how can we help government get this right and build in privacy to these innovations. At the end of the day, if there is a contravention of the law with the app or overreach in its use then we will take action.”

She stressed the ICO will focus on how the app is to be used next, and how it will be decommissioned when no longer necessary.

The ICO is currently advising the Government on the domestic use of vaccine passports. Denham is clear ministers must make sure any measures, for example using passports for nightclub-goers, are time-limited and not allowed to evolve into a more permanent post-pandemic regime.
Will the new Commissioner take a similar view, given the New Zealand government’s zero-Covid strategy?

If Mr. Edwards does take the job, we wish him the best of luck. He’ll need it, too, with a serious in-tray of problems to solve.


Data protection team over-stretched? Ease the strain with our no-nonsense advice and support via our flexible Privacy Manager Service. Find out how our experienced team can help you. CONTACT US.

Getting to grips with Accountability

Accountability is a key principle underpinning GDPR and has become the foundation of successful data protection and privacy programmes.
It can, though, be difficult to know where to start and how to keep up the momentum.

Luckily the ICO has developed what I think is a great tool, and it’s just been updated to make it even more user-friendly.

The Accountability Framework can really help DPOs and privacy teams. It takes less than an hour to complete – which sounds to me like an hour well spent!

When working with our clients I often find they benefit from help both to recognise their data compliance gaps and then to scope out practical solutions. Any help from the ICO to support businesses down this road should be encouraged.

The Framework focuses on helping you to assess the effectiveness of the measures you have in place to protect personal data, understand where your weaknesses lie and gain clarity on the areas you need to improve.

It’s aimed at senior management, DPOs and those with responsibility for records management and information security.

Ten core areas of accountability

The Framework identifies ten important areas organisations are accountable for.

1. Leadership and oversight
2. Policies and procedures
3. Training and awareness
4. Individuals’ rights
5. Transparency
6. Records of processing and lawful basis
7. Contracts and data sharing
8. Risks and data protection impact assessments
9. Records management and security
10. Breach response and monitoring.

Self-assessment tool and tracker

A vital part of the Framework is the self-assessment tool. It enables you to assess your level of compliance in each of the 10 core areas above.
For each area the Framework lays out the ICO’s expectations and asks you to rate how your organisation performs against key measures.

At the end you receive a report which grades your organisation’s performance on each area and helps you to:

  • understand your current compliance levels
  • identify gaps in your privacy programme
  • confirm the next steps you should take to improve accountability
  • communicate what support is needed from senior management to enhance compliance

If you want to go further, you can use the accountability tracker (provided in Excel) to record more detail and create an action plan so you can track your progress over time.

You may also find this useful when you provide management information, e.g. to your Board and/or to other stakeholders.
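If a spreadsheet doesn’t suit your workflow, the same gap-tracking idea can be sketched in a few lines of Python. The area names come from the Framework’s ten core areas; the scores and threshold below are invented purely for illustration:

```python
# Hypothetical scores (0-4) against the ICO's expectations for each of the
# ten core areas of accountability. These values are made up for the example.
scores = {
    "Leadership and oversight": 3,
    "Policies and procedures": 2,
    "Training and awareness": 1,
    "Individuals' rights": 4,
    "Transparency": 3,
    "Records of processing and lawful basis": 2,
    "Contracts and data sharing": 1,
    "Risks and data protection impact assessments": 2,
    "Records management and security": 3,
    "Breach response and monitoring": 4,
}

THRESHOLD = 2  # areas at or below this score need an action plan

# Surface the weakest areas first, ready to present to senior management.
gaps = sorted((score, area) for area, score in scores.items() if score <= THRESHOLD)
for score, area in gaps:
    print(f"Gap: {area} (score {score})")
```

Sorting by score puts the lowest-rated areas at the top, which mirrors how you might prioritise the action plan the tracker asks you to build.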

Recent improvements to the Framework

After listening to feedback, the ICO has made changes to:

  • improve the Framework’s layout. For example, the 10 core topic areas have changed since the original version, making it easier to navigate
  • adjust the Accountability Tracker, so it complements people’s existing working practices

An example: training and awareness

The Framework provides practical ways in which you can meet the legal requirements. ‘Training and awareness’ is a great example.

The ICO expects organisations to provide appropriate data protection and information governance training for staff, including induction for new starters prior to accessing personal data and within one month of their start date.

The training must be relevant, accurate and up to date. Refresher training should be provided at regular intervals.

Specialised roles or functions with key data protection responsibilities should receive additional training and professional development, beyond the basic level.

Organisations should be able to demonstrate that staff understand the training, for example, through assessments or surveys.

In addition, you should regularly raise organisational awareness of data protection, information governance and your data policies and procedures in meetings or staff forums and make it easy for staff to access the relevant material.

What next?

The ICO tells us the next steps for the Framework include adding real life case studies which aim to illustrate the innovative ways organisations can demonstrate their accountability.

They also plan to run online workshops to look at how they can adapt and improve the self-assessment tool to better meet business needs. You can register your interest here.

Help for small businesses too

The ICO reminds us that if you work for a smaller organisation you will most likely benefit from their existing resources, available on their SME hub.

For example, you should take a look at their assessment for small business owners and sole traders, and you may want to try the data protection self-assessment toolkit.


We can help if you’d like to improve your business’s data protection programme and demonstrate accountability – from delivering practical and engaging training through to helping with Data Subject Access Requests, impact assessments or protecting against a data breach. GET IN TOUCH