AI Risk, Governance and Regulation

June 2025

The Artificial Intelligence landscape is beginning to remind me of a place Indiana Jones might search for hidden treasure. The rewards are near-magical, but the path is littered with traps. In the digital temple of ‘The New AI’, though, he’s not going to fall into a pit of snakes or be squished by a huge stone ball. No, Indy is more likely to face other traps. Leaking sensitive information. Litigation. Loss of adventuring advantage to competing explorers. A new, looming regulatory environment, one even governments have yet to pin down.

And the huge stone ball? That will be when the power of the Lost AI goes awry, feeding us with incorrect information, biased outcomes and AI hallucinations.

Yes, regulation is important in such a fast-moving international arena. So is nimble decision-making, as even the European Commission considers pausing its AI Act. Nobody wants to be left behind. Yet, as China and the US vie for AI supremacy, are countries like the UK sitting on the fence?

AI has an equal number of devotees and sceptics, very broadly divided along generational lines. Gen Z and Gen X are not as enamoured with AI as Millennials (those born between 1981 and 1996). A 2025 McKinsey report found Millennials to be the most active AI users. My Gen Z son says of AI, ‘I’m not asking a toaster a question.’ He also thinks AI’s insatiable thirst for energy will make it unsustainable in the longer term.

Perhaps he has a point, but I think every industry will somehow be impacted, disrupted and – perhaps – subsumed by AI. And as ever, with transformational new technologies, mistakes will be made as organisations balance risk versus advantage.

How, in this ‘Temple of the New AI,’ do organisations find treasure… without falling into a horrible trap?

How to govern your organisation’s use of AI

While compliance with regulations will be a key factor for many organisations, protecting the business and brand reputation may be an even bigger concern. The key will be making sure AI is used in an efficient, ethical and responsible way.

The most obvious solution is to approach AI risk and governance with a clear framework covering accountability, policies, ongoing monitoring, security, training and so on. Organisations already using AI may have embedded robust governance. For others, here are some pointers to consider:

Strategy and risk appetite

Senior leadership needs to establish the organisation’s approach to AI: its strategy and risk appetite. Consider the benefits alongside the potential risks associated with AI and implement measures to mitigate them.

AI inventory

Create an inventory to record what AI systems are already in use across the business, the purposes they are used for, and why.
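There’s no single prescribed format for an inventory. As a minimal sketch (the field names below are my own illustrative assumptions, not a standard), each record might capture the system’s name, supplier, purpose, accountable owner, the data involved and an initial risk tier – here in Python:

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI inventory (illustrative fields, not a standard)."""
    name: str                      # e.g. "CV-screening tool"
    vendor: str                    # supplier, or "in-house"
    business_purpose: str          # why the system is used
    owner: str                     # accountable person or department
    data_categories: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"  # e.g. low / medium / high

inventory = [
    AISystemRecord(
        name="CV-screening tool",
        vendor="ExampleVendor Ltd",  # hypothetical supplier
        business_purpose="Shortlisting job applicants",
        owner="HR Director",
        data_categories=["applicant CVs", "contact details"],
        risk_tier="high",  # decisions affect individuals
    ),
]

A spreadsheet works just as well; the point is that purpose, ownership and risk are recorded per system, not scattered across teams.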

Stakeholders, accountability & responsibilities

Identify which key individuals and/or departments are likely to play a role in governing how AI is developed, customised and/or used in your organisation. Put some clear guard rails in place. Determine who is responsible and accountable for each AI system. Establish clear roles and responsibilities for AI initiatives to make sure there’s accountability for all aspects of AI governance.

Policies and guidelines

Develop appropriate policies and procedures, or update existing policies so people understand internal standards, permitted usage and so on.

Training and AI literacy

Provide appropriate training. Consider if this needs to be role specific, and factor in ongoing training in this rapidly evolving AI world. Remember, the EU AI Act includes a requirement for providers and deployers of AI systems to make sure their staff have sufficient levels of AI literacy.

If you don’t know where to start, Use AI Securely provides a sound, free introductory course.

AI risk assessments

Develop and implement a clear process for identifying potential vulnerabilities and risks associated with each AI system.

For many organisations that are not developing AI systems themselves, this will mean a robust method for assessing the risks associated with third-party AI tools, and how you intend to use those tools. Embedding an appropriate due diligence process when looking to adopt (and perhaps also customise) third-party AI SaaS solutions is crucial.

Clearly not all AI systems or tools will pose the same level of risk, so a risk-based methodology that enables you to prioritise risk will also prove invaluable.
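One common way to prioritise is a simple likelihood-times-impact score per tool. This is a sketch of that generic approach, not a prescribed methodology; the scales and thresholds are illustrative assumptions:

def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) scale."""
    return likelihood * impact

def priority(score: int) -> str:
    if score >= 15:
        return "high"    # assess before any further use
    if score >= 8:
        return "medium"  # schedule a risk assessment
    return "low"         # periodic review is sufficient

tools = {
    "internal meeting summariser": (2, 2),
    "CV-screening tool": (4, 5),  # decisions affecting individuals
}
for name, (likelihood, impact) in tools.items():
    score = risk_score(likelihood, impact)
    print(f"{name}: score {score}, priority {priority(score)}")

However crude, a ranking like this tells you which assessments to do first.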

Information security

Appropriate security measures are of critical importance. Vulnerabilities in AI models can be exploited; input data can be manipulated; malicious attacks can target training datasets; unauthorised parties may access sensitive, personal and/or confidential data; and data can be leaked via third-party AI solutions.

We also need to be mindful of how online criminals exploit AI to create ever more sophisticated malware and to automate phishing attacks. On this point, the UK Government has published a voluntary AI cyber security code of practice.

Transparency and explainability

Are you being open and up front about your use of AI? Organisations need to be transparent about how AI is being used, especially when it impacts individuals or makes decisions that affect them. A clear example is AI tools being used for recruitment – is it clear to job seekers you’re using AI? Are they being fairly treated? See: Using AI Tools in Recruitment

Alongside this there’s the crucial ‘explainability’ piece – the ability to understand and interpret the decision-making processes of artificial intelligence systems.

Audits and monitoring

Implement a method for ongoing monitoring of the AI systems and/or AI tools you are using.

Legal and regulatory compliance

Keep up to date with the latest developments, and make sure you comply with relevant laws and regulations in the jurisdictions where you operate.

My colleague Simon and I recently completed the IAPP AI Governance Professional training, led by Oliver Patel. I’d highly recommend his Substack which is packed with tips and detailed information on how to approach AI Governance.

Current regulatory landscape

European Union

The EU AI Act entered into force in August 2024 and is coming into effect in stages. Some people fear this comprehensive and strict approach will hold back innovation and leave Europe languishing behind the rest of the world. It’s interesting the European Commission is considering pausing its entry into application, which DLA Piper has written about here.

On 2nd February this year, rules came into effect covering AI literacy requirements, the definition of an AI system, and a limited number of prohibited AI use cases which the EU determines pose an unacceptable risk.

Like GDPR, the AI Act has extra-territorial scope, meaning it applies to organisations based outside the EU (as well as inside) where they place AI products on the market or put them into service in the EU, and/or where outputs produced by AI applications are used by people within the EU. We’ve already seen how EU regulation has led to organisations like Meta and Google excluding the EU from use of their new AI products for fear of enforcement under the Act.

The European Commission has published guidelines alongside the prohibited practices coming into effect. See: Guidelines on Prohibited Practices & Guidelines on Definition of AI System

UK

For the time being it looks unlikely the UK will adopt a comprehensive EU-style regulation. The favoured approach is a ‘principles-based framework’ for sector-specific regulators to interpret and apply. Specific legislation for those developing the most powerful AI models looks the most likely direction of travel.

The Information Commissioner’s Office published a new AI and biometrics strategy on 5th June with a focus on promoting compliance with data protection law, preventing harm but also enabling innovation. Further ICO activity will include:

Developing a statutory code of practice for organisations developing or deploying AI.
Reviewing the use of automated decision making (ADM) systems for recruitment purposes
Conducting audits and producing guidance on the police’s use of facial recognition technology (FRT)
Setting clear expectations to protect people’s personal information when used to train generative AI foundation models
Scrutinising emerging AI risks and trends.

The soon-to-be-enacted Data (Use and Access) Act will, to a degree, relax the current strict rules on automated decision-making which produces legal or similarly significant effects. The ICO for its part is committed to producing updated guidance on ADM and profiling by Autumn 2025. See: DUA Act: 15 key changes ahead

Other jurisdictions are also implementing or developing a regulatory approach to AI, and it’s worth checking the IAPP Global AI Regulation Tracker.

AI is here. It’s transformative and far-reaching. To take the fullest advantage of AI’s possibilities, keeping abreast of developments along with agile and effective AI governance will be key.

Are you collecting more data than you need?

Five good reasons to apply the data minimisation principle

How often when completing an online form, or downloading a new app, do you think, “why do they need this information?”

I often do. I get frustrated when I can’t fathom out why certain fields are mandatory, like phone number or date of birth. Okay, so I work in data protection. I’m highly tuned to being affronted by this stuff, but I doubt I’m alone.

Sometimes we’re forced to grit our teeth and soldier on (standing in the rain, desperately trying to download yet another parking app, forced to hand over our vital details).

But in other situations we can choose not to engage with companies because they ask for too much of our personal information, or immediately delete an app for the same reason. Alternatively, we may be tempted to provide bogus details, where we can’t see any reasonable purpose for the request (or suspect a phone number will purely be used to badger us).

Faced with yet another data-hungry form this week, I began thinking (again) about the benefits of minimising the personal information collected.

Yes, it’s a core data protection principle under GDPR / UK GDPR: organisations are legally required to collect only personal data which is adequate, relevant and limited to what’s necessary for the purpose(s) it’s being used for. But it’s also a sound approach for other reasons…

Here are five more reasons for streamlining data collection…

1. Build trust

If people think you’re collecting more information than necessary, they may be sceptical, not trust you, and decide to disengage. People are more likely to put their trust in organisations who collect data responsibly.

2. Reduce data breach risks

Minimising personal data mitigates the severity of any impact if you suffer a data breach. This could not only reduce the risk for those affected but lessen the negative impact on your organisation. It could even be the difference between a reportable breach and one that’s unlikely to pose a risk. A data breach of purely names and email addresses won’t routinely be as serious as a breach which also includes telephone numbers, dates of birth, postal addresses etc.

3. Improve accuracy

Data minimisation can improve the quality of your data, reducing the risk of holding outdated and inaccurate information. This in turn helps to meet another data protection principle: personal data must be accurate and kept up to date.

4. Prevent other uses

If you collect more personal details than you need, you’re leaving the door open to employees (perhaps unwittingly) using it for other, unintended or unauthorised purposes, or a purpose you haven’t been transparent about, which may lead to complaints or regulatory action. And yes, this helps to meet another principle: purpose limitation.

5. Save time and complexity of privacy rights requests

Minimising the data held can make the process of handling privacy rights requests more efficient. For example, there’s less data to sift through when responding to a DSAR, or less data to erase. It also saves awkward questions like, “why do you have this information?”

These points all apply more broadly than simply to information collected via online forms or apps. The principle of data minimisation applies to all the personal data an organisation collects, uses and stores. But as a starter for ten, why not streamline those data collection forms? They’re a window into your attitude to people’s information, and the first thing potential customers see.

Individual Privacy Rights: Quick Guide

We all have privacy rights, we should be told about our rights and organisations need to be ready to fulfil them. Some rights are more commonly exercised than others. Some organisations routinely receive multiple requests, while others only get a handful. But even if your organisation has never received a privacy rights request, it pays to be prepared.

The eight privacy rights are:

  1. The right to be informed
  2. The right of access
  3. The right to rectification
  4. The right to erasure
  5. The right to object
  6. The right to restrict processing
  7. The right to data portability
  8. Rights in relation to automated decision making and profiling.

It’s the responsibility of organisations acting as controllers to fulfil privacy rights requests, with their processors assisting, as necessary. (Controller or processor?)

Here are some key actions organisations need to take (by no means an exhaustive list):

Notify

Tell people about their rights and how to exercise them; routinely achieved via a privacy notice.

Training & awareness

Make sure employees understand what rights people have and, importantly, what to do if they receive a request. Overlooked or missed requests could result in unwelcome complaints.

Specialist skills

Make sure those staff responsible for fulfilling requests have appropriate skills and knowledge.

Privacy by Design

Build systems and processes with privacy rights in mind. Make sure legacy systems are also fit for purpose. Can data be easily retrieved, amended or deleted?

Procedures

Implement robust procedures for handling requests, bearing in mind each right has different requirements and nuances to consider.

Log

Keep a log of all requests received and their status.
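The log doesn’t need to be elaborate. As a minimal sketch (the field names are my own illustrative assumptions; adapt them to your own procedures):

from dataclasses import dataclass
from datetime import date

@dataclass
class RightsRequest:
    """One row in a privacy rights request log (illustrative fields)."""
    received: date
    request_type: str      # e.g. "access", "erasure", "objection"
    reference: str         # internal reference, e.g. "DSAR-2025-014"
    due: date              # statutory deadline
    status: str = "open"   # open / extended / fulfilled / refused
    notes: str = ""        # judgement calls, exemptions applied, etc.

log = [
    RightsRequest(
        received=date(2025, 6, 2),
        request_type="access",
        reference="DSAR-2025-014",
        due=date(2025, 7, 2),
    ),
]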

Complaints

Tell people of their statutory right to lodge a complaint with a data protection authority (e.g. the UK’s Information Commissioner’s Office).

Summary of individual privacy rights

(Please take any use of the term GDPR to mean the EU version and UK GDPR)

1. The right to be informed

This right is closely aligned with transparency requirements; organisations must be open and upfront about how they’re using people’s personal data. GDPR sets out specific information which must be provided to inform people. Whenever personal data is collected people must be told the purposes it will be used for, who it will be shared with, how long you are likely to keep it and so on.

This is why privacy notices are so important. They should cover legally required privacy information, and be at hand when people provide you with their personal data.

The right to be informed also applies when personal details are acquired indirectly from another source, not directly from the individual themselves. For more detail, see the ICO Right to be informed guidance.

Tip: In most circumstances this right will apply, but it isn’t an absolute right. It doesn’t need to be fulfilled where doing so would prove a ‘disproportionate effort’, or in circumstances where it conflicts with another statutory obligation, for example an obligation of secrecy.

2. The right of access

Commonly referred to as a Data Subject Access Request – DSAR/SAR. This gives people the right to receive a copy of their personal data, plus other supplementary information. A third party (such as a relative or solicitor) can make a request on behalf of another person.

Requests must be fulfilled at the latest within one calendar month. The time period for responding can be extended by up to a further two months for particularly complex cases.
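On timing: the ICO treats ‘one calendar month’ as running to the corresponding date of the following month, or the last day of that month where no corresponding date exists (so a request received on 31 January is due by 28 February). A rough sketch of that calculation, weekend and bank holiday adjustments aside, using the third-party python-dateutil library (whose relativedelta clamps to month ends):

from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def dsar_deadline(received: date, extended: bool = False) -> date:
    """One calendar month from receipt; up to two further months
    if the extension for complex cases is applied."""
    return received + relativedelta(months=3 if extended else 1)

print(dsar_deadline(date(2025, 1, 31)))                # 2025-02-28
print(dsar_deadline(date(2025, 6, 5), extended=True))  # 2025-09-05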

Some DSARs are relatively straightforward, while others can be tricky and nuanced to fulfil, with careful judgement calls to make.

You can only refuse to provide information if an exemption or restriction applies, or if you judge a request to be manifestly unfounded or excessive. For more information about how to prepare and fulfil requests, see our DSAR Guide.

Tip: It’s not a right to documentation! Just because someone is referenced in an email or document, doesn’t mean the whole email chain or document is their personal data.

3. The right to rectification

If someone realises the information you hold about them is inaccurate or incomplete they can request it’s corrected or completed. Organisations have up to one calendar month to respond.

Tip: This isn’t an absolute right, and in certain circumstances you can refuse a request if you dispute the accuracy of what the individual is claiming.

4. The right to erasure

As the name suggests, people have the right to request their personal data is erased from your systems and physical records if you no longer have a compelling lawful reason to keep it. This applies to ALL systems, back-ups and even data held in the cloud. It will apply if the personal data is no longer necessary or a person withdraws their consent.

It’s also sometimes referred to as the ‘Right to be Forgotten’, in an online context.

As with some other rights, you must respond within one calendar month. Even if you lawfully refuse to comply with a request (either in part, or in full), you must still respond to the individual and explain why you can’t delete their data.

In some cases this right can be relatively straightforward to fulfil if you have limited records for an individual and no reason to keep them, but equally important is making sure you don’t inadvertently destroy personal data you should have held on to. This is where having a clear data retention schedule can be really helpful, so you can easily identify where you have lawful justification for not erasing personal data.

Tip: See our 10 tips for managing erasure requests.

5. Right to object

People have the absolute right to object to their personal details being used for direct marketing. Such objections must be honoured in every case. When personal data is used in other ways, people have the right to object to how their information is being processed, but you don’t have to fulfil this right if you can demonstrate compelling legitimate grounds to continue the processing.

Again, you have one calendar month to respond to an objection, and must inform people if you are denying their request, along with your justification.

6. The right to restrict processing

In our experience this right, which gives people the ability to restrict your processing of their personal data, is less commonly exercised. If you receive such a request, you can store the person’s data but not use it. Routinely this would be for a limited time period.

This right can be closely associated with other rights such as the right to object or a rectification request. For example, someone might exercise this right if they’re disputing the accuracy of information you hold about them, or objecting to you using their data for a particular purpose. Also see the ICO Right to Restrict Processing Guidance

7. The right to data portability

This right allows people to easily reuse the personal data you hold about them for other purposes, including requesting it’s transferred to another organisation. (In many sectors data portability requests are rare).

This right only applies when your lawful basis for processing the individual’s data is either consent or performance of a contract, and where your processing is automated. The right doesn’t apply if the processing is necessary for a task carried out in the public interest or in the exercise of official authority. Also see the ICO’s Data Portability Guidance.

Tip: It’s worth noting the right to portability applies to data relating to an individual’s behaviour, and could include location data, website history and more.

8. Rights related to automated decision-making including profiling.

People have the right not to be subjected to solely automated decision-making (including profiling) which has a legal or similarly significant effect. For a decision to be solely automated there must be no meaningful human involvement in the process.

Article 22 of GDPR sets out that solely automated decision-making is only permitted when it’s necessary for entering into or performing a contract, is authorised by applicable law, or is based on the individual’s explicit consent.

Furthermore, if you’re using special category personal data you can only carry out processing described in Article 22(1) if you have the individual’s explicit consent or the processing is necessary for reasons of substantial public interest. It’s worth noting this is subject to change under the UK Data (Use & Access) Bill.

Organisations are obliged to give people information about solely automated decisions (with legal or similarly significant effect), and individuals have a right to request human intervention or challenge a decision made about them.

Tip: Be aware the increased use of AI tools, especially in recruitment processes, could be leading to more solely automated decisions which could have a legal or similarly significant effect.

Although there are eight privacy rights, some organisations might never receive requests such as restriction or data portability. So while it’s important to be aware of them all, realistically most organisations will focus on making sure they have robust procedures for handling the types of requests they’re most likely to receive. But just remember, even if you are yet to receive a DSAR, when you do you’ll be pleased you planned for it.

GDPR RoPA simplification

Will EU proposals to change Records of Processing Activities requirements have an impact in practice?

As GDPR passes its 7th birthday, there’s been a flutter of excited commentary about European plans to make changes to the ground-breaking data protection law. In particular, potential amendments aimed at easing the compliance burden on small to medium-sized businesses.

So far, it’s fair to say the proposed changes from the European Commission are far from earth-shattering (albeit there could be more in the pipeline). A key proposal relates to Article 30, Records of Processing Activities. The obligation to keep a RoPA would no longer apply to organisations with fewer than 750 employees, provided their processing activities are unlikely to pose a ‘high risk’ to the rights and freedoms of individuals.

The proposal also clarifies that processing special category data for purposes related to employment, social security and social protection would not, on its own, trigger the requirement to maintain Article 30 records.

For comparison, the existing exception only applies to organisations with fewer than 250 employees, unless the processing:

Is likely to result in a risk to the rights and freedoms of data subjects,
Is not occasional, or
Includes special category data or personal data relating to criminal convictions and offences.

What impact might this RoPA change have?

As many organisations process special category data (even if just for their employees), and processing activities are often routine, not occasional, the current exception for smaller companies is limited in scope. The proposed wider exemption would clearly apply to far more organisations.

I can absolutely see why the Commission has homed in on RoPA requirements, as in my experience many organisations struggle to maintain an up-to-date RoPA, or don’t have one at all. But how helpful could this change actually be?

In practice, organisations subject to GDPR will still need to assess whether their processing activities involve ‘high risk’ to individuals. To do this they will need to weigh up their purpose(s) for processing, their lawful basis, how long they keep personal data, who it is shared with, whether any international data transfers are involved, what security measures are in place and so on.

It seems a bit of a catch-22 – a RoPA is a great way of capturing this vital information and clearly ascertaining where risk might occur. Alongside this, organisations will still need to meet transparency requirements and the right to be informed. And, yes you guessed it, an accurate RoPA is a very helpful ‘checklist’ for making sure a privacy notice is complete.

We’ve written more about the benefits of a RoPA here.

Importantly, if this proposed change goes ahead, it won’t apply to organisations which fall under the scope of UK GDPR (unless the UK Government decides to adopt a similar change).

Notably, fairly significant changes to UK GDPR’s accountability requirements were on the cards under the previous Conservative Government’s data reform bill. However, seen as too controversial, these were swiftly dropped after the election in the new Labour Government’s Data (Use and Access) Bill (DUA).

It’s possible the UK could regret not being more ambitious in the DUA Bill; there’s an obvious irony given oft-heard criticisms of EU overregulation – here’s a case where the EU’s easing of certain requirements could leave UK organisations with more onerous rules.

GDPR: Consent and why records are crucial

The ICO has fined a telemarketing firm £90k for their inability to demonstrate valid and specific consent was collected from the people they’d contacted. Data was collected directly, via the telemarketer’s website and via a third-party survey company.

Crucially, the firm couldn’t produce evidence of consent. This led me to think about other organisations; you may have gone to great efforts to make sure the consent you collect meets the GDPR standard, but are you keeping adequate records? Occasionally, the old legal adage applies – ‘If it isn’t written down, it didn’t happen.’

If your consent is subject to regulatory scrutiny, proof is highly likely to be requested. A customer might ask for evidence, and could escalate a complaint if you’re unable to produce it.

So, what records do we need to keep?

Here’s a refresher on the consent rules and how to retain adequate evidence. For simplicity’s sake when I refer to GDPR in this article I mean both GDPRs – the EU and UK flavours.

Consent is ONE of SIX lawful bases for processing

Consent is just one of six lawful bases. GDPR requires organisations to select an appropriate lawful basis for each purpose for processing personal data. They’re all equally valid; no single basis is better than another. You should choose the most appropriate basis for each activity. Often consent might not be appropriate, but sometimes consent is required by law for certain activities.

Just be mindful; don’t rely on consent if another lawful basis would be more appropriate. But also be careful not to try and shoe-horn your activities into another lawful basis (such as legitimate interests), when consent really would be the best approach, or is legally required.

What constitutes valid consent

GDPR defines consent as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her”.

Let’s break this down…

Freely given consent

People must be given a genuine choice
People should be able to refuse to give their consent without detriment
Consent should be easy to withdraw
Consent shouldn’t be bundled into T&Cs, unless necessary for the service

It’s also sometimes important to weigh up any ‘imbalance of power’ over the individual whose consent you seek. For example, consent may not be freely given if the individual feels they don’t really have a choice. Consent can therefore be tricky in employer-employee relationships, if staff might feel a degree of pressure, or feel they will be penalised or treated differently if they refuse.

Specific and informed consent

It must be clear who people are giving their consent to. The organisation relying on the consent must be clearly identified. If you want to rely on consent collected for you by a third party, your organisation must be named at the time consent is collected.
Consent must specifically cover all of the purposes for which it’s being collected. Separate consent should be collected, wherever possible, for different activities. For example, collecting separate marketing consents for different marketing channels. This isn’t a hard and fast rule and isn’t required if it would be unduly disruptive, or the activities are clearly interdependent.
It must be clear people can withdraw their consent at any time (and the ICO advises you include details of how to do so).

Remember, there’s specific information you’ll always need to provide when you collect people’s personal details. There are distinct transparency requirements and people have the right to be informed. You may choose to take a layered approach, and it’s advisable to always have a clear link to a Privacy Notice (aka Privacy Policy), or details of how to access this.

Consent by an unambiguous indication and clear affirmative action

Consent must be given by a deliberate and specific action to opt in or agree. For example: ticking an opt-in box, clicking ‘submit’, signing a statement, or verbal confirmation. Failing to opt out is not consent. Pre-ticked boxes are not consent.

For more information see ICO consent guidance, which covers how to collect consent, how to manage requests to withdraw, and more.

Evidence of consent

GDPR states: “Where processing is based on consent, the controller shall be able to demonstrate that the data subject has consented to processing of his or her personal data.”

This means organisations must have an audit trail to meet their accountability obligations. This is what the telemarketing firm failed to grasp. In practice, it means keeping records of the following (a simple record structure is sketched after this list):

Who consented e.g. their name or other identifier.
When they consented e.g. an online time stamped record, a copy of a dated document or a note of the time and date verbal consent was given.
What they were told at the time e.g. a copy of the consent statement used at the time, along with any separate privacy notice or other privacy information used at the time.
How consent was given e.g. a copy of the data capture form or a note of a verbal conversation.
Any withdrawal of consent, and when.
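Pulling those points together, a minimal record structure might look like this (a sketch only; the field names are my own assumptions, not a prescribed format):

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    """Audit-trail entry for a single consent (illustrative fields)."""
    subject_id: str         # who: name or other identifier
    given_at: datetime      # when consent was captured
    statement_version: str  # what they were told, e.g. "marketing v3 (2025-03-01)"
    method: str             # how: e.g. "web form opt-in box", "verbal (noted)"
    withdrawn_at: Optional[datetime] = None  # populated on withdrawal

record = ConsentRecord(
    subject_id="customer-10482",
    given_at=datetime(2025, 5, 12, 14, 30),
    statement_version="marketing-consent v3 (2025-03-01)",
    method="web form (newsletter opt-in box)",
)

Whether this lives in a CRM, a database or a spreadsheet matters less than being able to retrieve it when a regulator or customer asks.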

This is why, when you’re updating consent statements or privacy notice(s), we recommend keeping copies of older versions and the dates they were operative. This doesn’t need to extend to keeping copies of every web form, but records held on your CRM or other relevant system need to be accurate. The ICO guidance on keeping records of consent is a useful resource.

Consent isn’t easy

Collecting valid consent can feel like a minefield. It means carefully ticking off requirements and keeping evidence. This isn’t hard once you’ve established a routine and got into the habit of thinking ‘that needs keeping hold of.’ Getting this right means you’ll breathe a sigh of relief if you’re ever subjected to scrutiny.

For more detail on when consent is legally required under UK ePrivacy law for marketing activities see our guides to the email marketing rules and telemarketing rules.

International Data Transfers: When a Transfer Risk Assessment is required

A recent €530 million fine, issued to TikTok by the Irish Data Protection Commission (DPC) for failing to meet international data transfer rules, demonstrates why cross-border transfers of personal data must be effectively managed.

While in this case an EU fine, UK organisations are not immune to data transfer requirements, nor the potential fallout of non-compliance. Organisations need to be mindful that people risk losing their protection under UK data protection laws if their personal data is transferred outside the UK.

Unless one of the following four conditions can be met, organisations must make sure appropriate safeguards are in place to protect ‘restricted transfers’ overseas:

You have the specific consent of individuals for the international transfer
The transfer is absolutely necessary to perform a contract with the individual
You can rely on an ‘Article 49 derogation’ – where the specific transfer is necessary for important reasons in the public interest, for litigation or for a public register
There’s an approved Code of Conduct between members, e.g. members of a trade association

Often these conditions won’t be met, and appropriate safeguards will be necessary. In certain circumstances, there’s also a requirement to conduct a Transfer Risk Assessment (TRA). In the EU this is called a Transfer Impact Assessment (TIA) and this requirement was overlooked by TikTok.

A restricted transfer is where an organisation shares personal data with another organisation (i.e. a separate controller) or with a vendor/service provider/supplier (i.e. a processor), and the processing will take place in another country. This includes overseas data sharing between companies which are part of the same group, for example one based in the UK and one in the USA. When data is anonymised, so that it’s no longer ‘personal’ data, it’s not classed as a restricted transfer.

Both controllers and processors also need to consider any further transfers in the supply chain to ‘sub-processors’ located in other countries.

Crucially, we need to recognise a ‘transfer’ will take place if there’s ‘access to’ personal data. For example:

A UK-based controller permitting a supplier based in India to access the personal data of its customers.
A UK-based processor permitting one of its suppliers (a ‘sub-processor’) based in France to access the personal data of its client (the controller).
An EU-based controller sharing personal data with a separate controller based in San Francisco, USA.

For more detail on what constitutes a restricted transfer see our International Data Transfers Guide or the ICO Guidance.

Do we need to make a restricted transfer?

Before making a restricted transfer, organisations should consider whether they can achieve their requirements without sharing ‘personal’ data. If you share data in an anonymised form, so it’s never possible to identify individuals, it is no longer personal data, so the restrictions do not apply.

Why did TikTok get fined?

The DPC inquiry found the social media platform had infringed GDPR on the following three key points:

Equivalent protection: There was a failure to verify, guarantee and demonstrate that personal data of EEA users, remotely accessed by staff in China, was afforded a level of protection essentially equivalent to that guaranteed within the EU.
Transfer Impact/Risk Assessment: The necessary assessments were not undertaken to address potential access by Chinese authorities to EEA personal data under Chinese anti-terrorism, counter espionage and other laws, which were considered to materially diverge from EU standards.
Transparency: TikTok’s 2021 Privacy Policy (aka Privacy Notice) did not meet necessary transparency requirements to inform EEA users that personal data was stored in servers in the United States and Singapore and was remotely accessible by entities in a number of other countries including China, Malaysia and the Philippines. An updated 2022 Privacy Policy rectified this particular infringement.

When is a Transfer Risk Assessment (TRA) required?

A TRA is not always required; it depends on the appropriate safeguard mechanism an organisation intends to rely on for a restricted transfer.

Adequacy decision (Article 45): No TRA required

Adequacy status is awarded to specific countries judged to have a similar level of data protection standards as those in the UK. An adequacy decision essentially allows for the free flow of personal data between the UK and the other country. The UK Government refers to these as ‘data bridges’. When you rely on adequacy, a TRA is not required.
Currently there is reciprocal adequacy between the UK and the EEA. You can check which other countries have adequacy in the ICO data transfer guidance.

Other safeguard mechanisms (Article 46): TRA required

The requirement to conduct a risk assessment came into effect following the 2020 EU Schrems II ruling, and will apply, for example, if you intend to rely on the following safeguard mechanisms:

ICO’s International Data Transfer Agreement (IDTA)
EU Standard Contractual Clauses (SCCs) with the UK Addendum
Binding Corporate Rules (BCRs)

What’s the purpose of a Transfer Risk Assessment?

A TRA aims to help organisations to consider if the relevant protections for people under UK data protection law will be undermined when their personal data is transferred overseas. The ICO explains there are two broad types of risks to be considered:

Risks to people’s rights arising in the destination country from third parties not bound by the Article 46 transfer mechanism – in particular government and public bodies – accessing the information.
Risks to people’s rights arising from difficulties enforcing the Article 46 transfer mechanism.

It’s worth bearing in mind if a processor is making a restricted transfer, for example to a sub-processor, it’s their responsibility to conduct the TRA. A controller should still carry out reasonable and proportionate checks to make sure these transfers are compliant with UK GDPR.

When onboarding a new processor, some controllers may request to see copies of their processors’ TRAs to sub-processors.

More information is available in the ICO TRA Guidance.

How to conduct a TRA for a transfer from the UK

The ICO sets out three distinct options for conducting the risk assessment.

Option 1: ICO TRA tool

This is a specific risk-assessment tool. It enables you to evaluate any increased risk to people’s privacy and other human rights as a result of the transfer, compared with the position if the data remained in the UK.

In our view, the ICO has gone to considerable efforts to make this (Word document) tool as straightforward as possible. It helpfully provides a list of common categories of personal information with an initial risk score. You don’t have to use this specific template and can record your answers to the six key questions in other ways.

However, to the uninitiated the TRA tool can be tricky to complete. If the circumstances of a specific transfer require a more detailed investigation it will involve a level of research into the legal system, respect for rule of law and the human rights record in the destination country.

Option 2: EDPB approach

This assessment compares the laws and practices of the UK with those of the destination country (where the ‘data importer’ is located). In particular, it means looking at the safeguards in place in relation to third-party access to the information, particularly by governments. The safeguards don’t need to be identical but need to be sufficiently similar to those in the UK.

Option 3: Reliance on published UK Government analysis in making adequacy regulations

As mentioned above, the UK Government can make adequacy decisions (known as ‘data bridges’). In making these decisions there are specific considerations the Government must take account of when assessing another country or territory. This includes an assessment of risks similar to the assessment which would be undertaken when using options 1 and 2. Therefore, if there’s relevant published UK Government analysis, which judges standards of data protection to be satisfactory, this can be relied upon. Notably, in 2023 the Department for Science, Innovation and Technology (DSIT) published analysis for the United States. See: DSIT Analysis

Transfers from the UK to the United States

It’s worth taking a look specifically at transfers from the UK to the US. These are a common type of restricted transfer, especially for UK-based organisations considering the services of US-based technology/SaaS providers.

Adequacy: the EU-US Data Privacy Framework, plus US-UK ‘data bridge’ extension

There’s an adequacy decision which UK organisations may be able to rely on, meaning a TRA is not required. However, unlike other adequacy decisions for specific countries (such as Japan, Israel and New Zealand), the ability to rely on the adequacy decision for the United States depends on whether the specific US company you are transferring data to has self-certified to the Data Privacy Framework and the UK extension to this framework. You can check if an organisation is certified here.

To give some commonly used examples, at the time of writing, Google LLC, Microsoft, Salesforce and Mailchimp are signed up to the Framework and UK extension (‘data bridge’).

Other safeguard measures and TRA

If an organisation isn’t listed as a signatory to the Data Privacy Framework and UK extension, it’s likely you’ll need to rely on the ICO’s IDTA, the EU SCCs with the UK Addendum, or BCRs (for intra-group transfers). And options 1, 2 and 3 outlined earlier for conducting a TRA will be in play.

I’d encourage you to read the ICO’s guidance on transfers to the US, which sets out the potential to streamline the TRA process by relying on UK Government analysis (e.g. Option 3). The ICO states: “a significant part of the analysis relates to broader issues not specific to the US data bridge but analyses the application of relevant US laws and practices more generally. It is equally relevant to personal information transferred using an Article 46 transfer mechanism.”

The ICO’s guidance sets out in more detail how you can rely on this analysis, as an alternative to using the TRA Tool or the EDPB approach.

To conclude, international data transfer rules are not simple! They can often feel overly complex, with tricky compliance hurdles. Nonetheless, it’s both legally and ethically the right thing to do to make sure people don’t lose the rights they are entitled to under UK data protection law.

In practice, a risk-based approach is frequently adopted, applying more rigour to more risky transfers. For example, a transfer of a list of employees’ work email addresses is unlikely to pose as much risk as transferring more sensitive personal information. As ever, the devil is in the detail.

Rising cyber threats but data breaches aren’t always obvious

The UK Government and National Cyber Security Centre have issued warnings about significant and growing cyber threats, with the expectation of increased ransomware attacks, state-sponsored cyber activity and sophisticated cybercrime. Do take heed: the retail sector has already seen a number of damaging attacks.

Sometimes, it’s obvious a data breach has taken place. However, this isn’t always the case, especially when cyber criminals take steps to cover their tracks. A recent example illustrates the consequences for organisations who fail to fully appreciate the significance of a malicious attack.

The ICO has issued a £60k fine to law firm DPP, following a 2022 cyber-attack. The attack led to highly sensitive and confidential personal information being published on the dark web. The ICO investigation discovered lapses in IT security practices, leaving information vulnerable to unauthorised access. Hackers were able to exploit a user account which did not have Multi-Factor Authentication (MFA), enabling them to move laterally across the firm’s systems.

Let’s be clear: MFA is now a must-have on all relevant data systems.

Announcing the fine, the ICO said: “DPP only became aware of the attack when the National Crime Agency contacted the firm to advise information relating to their clients had been posted on the dark web. DPP did not consider that the loss of access to personal information constituted a personal data breach, so did not report the incident to us until 43 days after they became aware of it.”

A personal data breach is defined as ‘a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data.’ That’s a broad scope.

The ICO enforcement notice accepts actions taken by the attackers made DPP’s response to the incident difficult. Unfortunately, DPP’s initial assessment indicated no personal data had been exfiltrated and didn’t consider loss of access to personal data to be a breach – therefore the firm didn’t report it.

You can check out the full enforcement notice, but bear in mind it’s reported DPP disputes some of the ICO’s conclusions and may appeal.

Any organisation suffering a cyber-attack has my sympathy. Attacks are becoming more frequent, sophisticated and harder to track. They can severely disrupt day-to-day operations. Ascertaining the cause and consequences of an attack can be difficult. Indeed, in some cases the consequences might never be clearly established. And when an attack becomes public knowledge the organisation needs to work decisively, not just to get operations back up and running and mitigate any harms to those affected, but also to manage PR.

As I write, we’re witnessing M&S battle a significant ransomware attack, which has left store shelves empty. Cyber criminals have also reportedly told the BBC their attack on the Co-op is more serious than the company had previously admitted.

Organisations are legally required to report personal data breaches to the ICO (or another relevant Data Protection Authority) within 72 hours of becoming aware, unless the breach is unlikely to pose a risk to individuals. When it comes to ransomware attacks, it may be best to assume that (more likely than not) personal information is affected. The ICO states in a research paper: ‘If you become a victim of ransomware, you should assume the information has been exfiltrated (extracted).’

In other words, it would be wise to submit an initial data breach report. It’s understood you won’t know all the facts immediately and you may need to bring in digital forensics expertise. In this situation, you can submit an initial report and update the Regulator when more facts become known. The risk can subsequently be upgraded or downgraded as you continue your investigations. We’ve written more about how to assess the risks posed by a data breach here.

It’s important, even for small-to-medium sized businesses, to have sufficient knowledge about what constitutes a personal data breach, and the threats we all face. Here’s a refresher of some common ways a personal data breach can occur.

Cyber security incidents

We often hear about ransomware attacks where hackers gain unauthorised access to databases, exfiltrating or altering personal information and making a demand for payment. There are also other forms of malicious attack, such as:

Brute force – this is where hackers use algorithms to ‘guess’ username and password credentials, testing multiple combinations to try to gain access to user accounts. It’s understood this is how hackers initially got into DPP Law’s systems. Clearly, these attacks are more successful when passwords are easy to guess and when MFA is not in place.

Denial of Service (DoS) – this works by overloading a computer network or website and can result in a degrading of performance, or render the system completely inaccessible. DoS attacks may result in full or partial loss of access (availability) to personal data records. And as we said above, that’s classed as a data breach.

Supply chain attacks – these attacks target vulnerabilities in third-party services your organisation is using. In 2023 the BBC, British Airways and Boots were among many organisations impacted by the well-publicised MOVEit supply chain breach. More recently the ICO issued a £3 million fine to an IT software company which provided services to many UK organisations including the NHS.

Phishing – this is when criminals use scam emails to trick people into clicking on a malicious link. Phishing attacks can trick people into sharing sensitive information, such as payment card details or login credentials. As well as email, phishing can be spread via text messages or over the phone.

I’d urge you to read the ICO’s Learning from the Mistakes report, which provides detailed information on the types of cyber-attacks organisations can suffer and ways to mitigate the risk.

Loss or theft of devices or hard copy documents

This is pretty self-explanatory; a smartphone, laptop or other device containing personal data is lost or stolen. When devices are not encrypted this can lead to the exposure of potentially sensitive personal information. Alternatively, a data breach can occur when physical documents are lost or stolen.

Disclosure of personal information

This type of incident can occur in a number of different ways, for example:

An email sent to the wrong recipient(s).

Accidentally using the CC field in emails to multiple recipients, thereby revealing their email addresses to all recipients. In some cases this can just be embarrassing, but in others, like the Central YMCA breach, much more serious.

Information posted to the wrong person, such as a hospital sending medical records to the wrong recipient.

Publishing confidential information on a public website.

Sharing personal data with unauthorised third parties.

Unauthorised Disclosure

This type of incident may occur due to a malicious attack such as ransomware, or it may be an insider breach, as illustrated by these cases:

In 2023 two former Tesla employees leaked confidential and personal information relating to employees and customers.

Back in 2014 a Morrisons employee leaked his colleagues’ payroll details in what was seen as an act of revenge after being given a verbal warning. The case resulted in years of legal wrangling over whether Morrisons was liable for the actions of a rogue employee.

This type of incident also includes ‘employee snooping.’ For example, a member of staff with access to a customer database browses the personal data of others without a legitimate business purpose. Or a police officer or council official looks up and discloses information without authority.

Improper disposal of records

Insecure disposal of electronic or paper records might lead to a data breach. For example, if a company disposes of old paper files containing customer details without shredding them, and a third party finds them.

The above is by no means an exhaustive list, but provides those less experienced in data breaches with a steer on what risks to be aware of.

Not all security incidents will be personal data breaches; they could involve commercially sensitive information, but no personal data. While these don’t need to be reported under data protection law, they still have the potential to cause considerable fallout.

Privacy violations

In other circumstances there may be a violation of data protection law which is not a data breach. As an example, I’ve been asked before whether an email marketing campaign accidentally sent to customers who’ve unsubscribed needs to be reported as a breach. While a clear violation of the right to object to direct marketing, this doesn’t represent a breach of security: there’s been no destruction, loss, alteration, unauthorised disclosure of, or access to personal data. The individuals’ personal data remains secure. Efforts therefore need to focus on minimising the risk of complaints escalating, and making sure it never happens again.

To conclude, the DPP Law case is instructive; it’s not a big company, employing fewer than 250 people, but it handles highly sensitive information relating to its clients. The attack sends a clear message: any business can fall victim to cyber-attacks and personal data breaches. The more sensitive the data your organisation handles, the more damaging a breach could be. Not only must cyber security be treated as a priority, but so must robust data breach procedures to guide your team through any potential attack.

ICO fines software company £3 million after cyber-attack

First UK processor fine is a stark reminder of supply chain risks

The Information Commissioner’s Office has fined Advanced Software Group Ltd (Advanced) £3.07 million following a cyber-attack in 2022 which put the personal information of nearly 80,000 people at risk. This marks the first fine issued under UK GDPR to a processor.

Advanced, which provides IT and software services to organisations including the NHS, was found to have failed to implement appropriate technical and organisational measures to protect its systems.

In the ransomware attack, hackers managed to access certain systems of Advanced’s health and care subsidiary. This was done via a customer account which notably did not have Multi-Factor Authentication (MFA). The attack caused massive disruption to critical NHS services, and healthcare staff were left unable to access patient records. Advanced was found to have insufficient measures in place, including:

Gaps in deployment of Multi-Factor Authentication
A lack of mature vulnerability management scanning mechanisms
Inadequate security patch management

A provisional fine of £6.09 million was reduced to £3.07 million after Advanced’s proactive engagement with the National Cyber Security Centre, the National Crime Agency and the NHS. Advanced has agreed to pay the fine without appeal. You can read the ICO enforcement notice here.

Key learnings from this case

This action serves as a timely reminder for both controller organisations and service providers to make sure robust measures are in place to protect personal data and ensure systems are secure throughout the supply chain.

Supplier due diligence

While this fine has been imposed on a processor, organisations which engage other parties to provide services have a duty to make sure they work with suppliers who can demonstrate robust standards in data protection and information security.

In our experience, controllers need to make sure they’re asking the right questions before they onboard any new supplier who’d be processing personal data on their behalf – whether this be cloud computing providers, SaaS solutions or other technology providers. To give a simple illustration:

Do they have a DPO or another individual in the business who oversees data protection compliance?
Do they have an Information Security Officer, or other related role?
Can they provide evidence of data protection and info sec policies and procedures?
Have they experienced a data breach before?
What information security measures do they have in place?
Are security measures regularly tested, and how?

Suppliers for their part need to be prepared to meet clients’ due diligence requests, including being able to provide detailed information on data location(s) and the security measures and controls in place to protect client data.

We’d stress a proportionate, risk-based approach should be taken to this: the more sensitive the data, the more robust the checks should be.

Seven quick information security tips

1. Restrict access to your data and services and use Multi Factor Authentication where possible
2. Choose secure settings for your network, devices and software
3. Protect yourself from viruses and other malware
4. Keep your devices and software up to date
5. Keep logs and monitor them
6. Restrict or prevent use of USB / memory drives
7. Back up your data

The ICO has published ransomware and compliance guidance which provides information on how to best protect systems.

Controller-processor contracts

Once satisfied with a prospective supplier’s approach to data protection and information security it’s then vital to make sure contractual terms cover core requirements under UK GDPR. Often covered in a Data Processing Agreement/Addendum, these shouldn’t be overlooked. We’ve written about supplier agreements here.

It’s worth noting liability clauses in such agreements are facing increasing scrutiny, reflecting the increased cost of non-compliance and the fall-out from data breaches. Irina Beschieriu, Deals Counsel for Atos IT Solutions has written an interesting article on this for IAPP and says; “General limitations of liability clauses are no longer considered sufficient to address the specific risks associated with data privacy. Instead, we have seen the rise of dedicated provisions meticulously crafted to address data privacy liabilities specifically. Negotiations surrounding these provisions are now more intense, more detailed, and carry higher stakes than ever before.” See: The growing burden of data privacy liability in tech contracts

While ICO fines are not commonplace, we’d urge both controllers and processors to take heed of this action. Announcing the enforcement action, Information Commissioner John Edwards said: “With cyber incidents increasing across all sectors, my decision today is a stark reminder that organisations risk becoming the next target without robust security measures in place. I urge all organisations to ensure that every external connection is secured with MFA today to protect the public and their personal information - there is no excuse for leaving any part of your system vulnerable.”