DPIAs: how to get organisational buy-in

March 2025

Data Protection Impact Assessments (DPIAs) can get a bad rap. Project managers, team leaders and others may not understand them, complain they’re too onerous to complete or say they ‘slow things down’. The result – data protection risks may not be identified or mitigated. Assessments may get overlooked, conducted in a less than thorough way, or get started but remain incomplete.

To banish the negative vibes we need to shout about the benefits of DPIAs. Make sure relevant teams know what they are, when and how to conduct them, and most importantly make sure the process is clearly explained and straightforward to follow.

When used well in the right situations, they can be one of the most useful tools in your organisation’s data protection toolkit. It can’t be stressed enough – DPIAs help to identify, assess and tackle risks before they see the light of day. They help you protect the rights and interests of your customers and employees, protect your business reputation, meet your GDPR accountability obligations and demonstrate how you comply with data protection laws.

Let’s take a look at how we breathe new life into the DPIA process. But first a quick recap on what the law requires…

When DPIAs are mandatory

Sometimes there’s no choice and a DPIA is a ‘must do’. Under GDPR/UK GDPR it is mandatory to conduct a DPIA when processing is likely to result in a ‘high risk’ to those whose personal data is involved. The law gives us three examples:

Large scale use of special category data
Systematic and extensive profiling with significant effect
Public monitoring on a large scale

The above examples are far from exhaustive, so thankfully the UK’s Information Commissioner’s Office (ICO) and other European Data Protection Authorities have published their own lists of processing ‘likely to result in high risk’. For example, the ICO sets out the following:

1. Using innovative technologies or the novel application of existing technologies (including AI).

2. Decisions which could lead to denial of service – processing which makes decisions about an individual’s access to a product, service, opportunity or benefit, based to any extent on automated decision-making (including profiling) or involving special category data.

3. Large-scale profiling of individuals.

4. Any processing of biometric data, where this is used for identification purposes.

5. Any processing of genetic data (unless by an individual GP or health professional for the provision of health care directly to the person concerned).

6. Combining, comparing or matching personal data gathered from multiple sources.

7. Any invisible processing – this is where personal data is not collected directly from individuals, and they are not aware of how it’s being used (i.e. the effort of providing privacy information to individuals would be disproportionate).

8. Tracking individuals’ geolocation or behaviour.

9. Targeting children or other vulnerable individuals.

10. Risk of physical harm – where a personal data breach could jeopardise the physical health or safety of individuals.

For more detail please see the ICO DPIA Guidance.

How to assess ‘high risk’

DPIAs aren’t required for every new or changed activity, and insisting teams undertake them too often can turn them into a needless box-ticking exercise and feed a general air of malaise.

Judgement calls need to be made to assess ‘high risk’ and ‘large scale’, and you’ll need a method for evaluating where the threshold falls. This will differ depending on sector, the nature of data handled, organisational risk appetite and so on. Regulated sectors, such as financial services and telecoms, have more to think about and may adopt a cautious approach. Also, bear in mind a DPIA can be a helpful risk assessment exercise even when a project doesn’t fall under the mandatory requirements.

Adopt a screening process

In my experience, embedding a straightforward screening questionnaire is a great way to sift through change projects and decide which need a more detailed assessment and which don’t. You can either ask teams to complete the questionnaire, or set aside 30 minutes to lead them through the screening. Then the DPO or data protection leader can make the call. A screening process may include questions such as:

What does the project/activity hope to achieve?
What personal information is involved?
Does this include more sensitive data (like financial details) or special category data?
Where did we source the data from?
Does the activity involve children’s data or others who would be considered vulnerable?
Will data be shared with other organisations?
Could what we’re doing be considered innovative or cutting edge?
Are we using personal details for a new purpose?

This is not an exhaustive list – there are other pertinent questions to ask – but try not to make it too long.
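
If your screening questionnaire lives in a workflow tool or intranet form, the triage logic behind it can be very simple. Here’s a purely illustrative sketch in Python – the question names and the ‘any flag means full assessment’ rule are assumptions for this example, not ICO requirements:

```python
# Illustrative DPIA screening triage - a sketch, not a compliance tool.
# The questions and the "any high-risk flag => full DPIA" rule are
# assumptions for this example; tailor both to your own organisation.

SCREENING_QUESTIONS = {
    "sensitive_or_special_category_data": "Does this include sensitive or special category data?",
    "childrens_or_vulnerable_data": "Does the activity involve children's data or other vulnerable people?",
    "shared_with_other_organisations": "Will data be shared with other organisations?",
    "innovative_technology": "Could what we're doing be considered innovative or cutting edge?",
    "new_purpose": "Are we using personal details for a new purpose?",
}

def needs_full_dpia(answers: dict[str, bool]) -> bool:
    """Flag a project for a full DPIA if any screening answer is 'yes'.

    The DPO or data protection lead still makes the final call;
    this just routes the project to them with the flags attached.
    """
    return any(answers.get(q, False) for q in SCREENING_QUESTIONS)

project_answers = {
    "sensitive_or_special_category_data": False,
    "childrens_or_vulnerable_data": True,   # e.g. an app aimed at families
    "shared_with_other_organisations": False,
    "innovative_technology": False,
    "new_purpose": True,
}
print(needs_full_dpia(project_answers))  # True -> escalate for a full DPIA
```

The point isn’t the code itself; it’s that the decision rule is explicit, so the DPO sees consistent flags rather than free-text guesses.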

Engage with your teams

First rule of DPIA Club is… we MUST talk about it!

Build relationships with the people who ‘do new stuff’ with your data: the people who run development projects and the key stakeholders, such as heads of the main functions which process personal data across your business – Marketing, Operations, HR and so on. If you have a Procurement team, then target them too.

Ask what projects they have on the horizon which could affect the way personal data is used. The aim is to make them aware of DPIA requirements and ask them to give you an early ‘heads up’ if they are looking to onboard a new service provider or use data for an innovative new project.

Let them know tech projects and system migrations almost always involve some form of personal data processing. They should be mindful of the potential for this to lead to privacy risks.

If they think about data protection from the outset it will save valuable time and money in the long run, and spare unwelcome hiccups along the way. Give them examples of how things have gone wrong or could go wrong.

You could raise awareness across the business using your intranet, email reminders, posters, drop-in clinics … whatever it takes to get the message across. ‘Training’ sessions with key stakeholders can also really help to enhance their risk assessment skills.

Use a good DPIA template

In my opinion too many businesses use complex, jargon-filled DPIA templates which many people find hard to understand. They ask questions in ‘GDPR-talk’ which people struggle to grasp and answer, and they often don’t really help people to identify what privacy risks actually look like.

Take a look at your DPIA template with fresh eyes. If you don’t like it, use a better one, or adapt it to fit your business’s ways of working.

Be prepared for Agile working

Many development projects use Agile methodology, breaking projects into smaller, manageable cycles called sprints. These allow teams to adapt quickly to change and deliver incremental gains. This means adapting your assessment approach too. You won’t get all the answers you need at the start, so stay close to the project as it evolves and be ready to revisit your DPIA in line with scheduled sprints.

I hope this has given you some ideas for how to engage your colleagues and freshen up the DPIA process. Dispelling the myth DPIAs are a waste of time, too complex or too onerous is a fight worth winning.

Online services face scrutiny over use of children’s data

March 2025

The importance of compliance with the UK Children’s Code

Social media platforms. Content streaming services. Online gaming. Apps. Any online services popular with children carry an inherent privacy risk.

Along with growing concerns over protecting them from harmful content, there’s an increasing focus on children’s privacy. Beyond the companies you’d expect to be impacted, it’s worth remembering these issues can affect a growing number of other organisations.

We know children are being exposed to inappropriate and harmful content. Some content is illegal, like child sexual abuse images or content promoting terrorism, but other material can still cause harm – such as content promoting eating disorders, content which is inappropriate for the age of children viewing it, or content which is overly influential.

The Information Commissioner’s Office (ICO) recently launched an investigation into TikTok, amid concerns at how the platform uses children’s data – specifically around how their data is used to deliver content into their feeds. The regulator is also investigating the image sharing website Imgur and the social media platform Reddit, in relation to their use of children’s data and their age verification practices.

These investigations are part of wider interventions into how social media and video sharing platforms use information gathered about children. The ICO says it’s determined to continue its drive to make sure companies change their approach to children’s online privacy, in line with the Children’s Code, which came into force in 2021.

This all serves as a timely reminder of the need to comply with this Code.

What is the Children’s Code?

The Children’s Code (aka ‘Age-Appropriate Design Code’) is a statutory code of practice aimed at protecting children’s privacy online. It sets out how to approach age-appropriate design and gives fifteen standards organisations are expected to meet. These are not necessarily technical standards, but more principles and required privacy features.

Who does the Children’s Code apply to?

A wide range of online services are within scope, including apps, search engines, online games, online marketplaces, connected toys and devices, news and educational sites, online messaging services and much more.

If children are ‘likely’ to access your online service(s), even if they are not your target audience, the code applies. For example, free services, small businesses, not-for-profits and educational sites are all in scope. Companies need to ask themselves – is a child likely to use our product or service online?

What ages does it apply to?

The code adopts the definition of a child under the UN Convention on the Rights of the Child (UNCRC) which is anyone under the age of 18. This means it applies to online services likely to be accessed by older children aged 16 and 17, not just young children. (This shouldn’t be confused with the age of consent for a child, which for online services is 13 in the UK).

Who does the Children’s Code not apply to?

Some public authority services are out of scope. For example, an online public service which is not provided on a commercial basis, or a police force with an online service which processes personal data for law enforcement purposes. Preventative or counselling services are also not in scope, such as websites or apps which specifically provide online counselling or other preventative services to children. However, more general health, fitness and wellbeing apps are in scope.

How do you assess ‘likely to be accessed’ by a child?

Crucially, the Code covers services which may not be specifically ‘aimed or targeted’ at children, but are ‘likely’ to be accessed by them. Each provider will need to assess this, and the Code provides some questions to help you:

Is the possibility of children using your service more probable than not?
Is the nature and content of the service appealing to children even if not intended for them? (Remember this includes older 16-17 year olds)
Do you have measures in place to prevent children gaining access to an adult only service?

This assessment may not always be clear cut, but it’s worth noting the Code states:

If you decide that your service is not likely to be accessed by children and that you are therefore not going to implement the code then you should document and support your reasons for your decision. You may wish to refer to market research, current evidence on user behaviour, the user base of similar or existing services and service types and testing of access restriction measures.

If you initially judge that the service is not likely to be accessed by children, but evidence later emerges that a significant number of children are in fact accessing your service, you will need to conform to the standards in this code or review your access restrictions if you do not think it is appropriate for children to use your service.

This means there’s a clear expectation online services may need to have evidence if they decide they do not need to conform with the Code.

The 15 standards of the Children’s Code

The Code is extremely detailed. Here’s a summary of the salient points:

1. Best interests of the child – consider the needs of children using your service and how best to support those needs. The best interests of the child should be a primary consideration when designing and developing an online service.

2. Data Protection Impact Assessments (DPIAs) – these should be carried out to identify and minimise any data protection risks to children.

3. Age-appropriate application – assess the age range of your audience. Remember, the needs of children of different ages should be central to design and development. Make sure children are given an appropriate level of protection over how their information is used.

4. Transparency – UK GDPR specifically states privacy information must be easy for children to find and easy for them to understand. The Code says you should consider bite-sized ‘just in time’ notices when collecting children’s data.

5. Detrimental use of data – don’t use children’s information in a way which would be detrimental to children’s physical or mental health and well-being.

6. Policies and community standards – if children provide you with their personal information when they join or access your service, you must uphold your own published T&Cs, policies and community standards. Children should be able to expect the service to operate in the way you say it will and for you to do what you say you are doing.

7. Default settings – privacy settings for children must be set to ‘high’ by default, unless you can demonstrate a compelling reason for taking a different stance. It’s not enough to allow children to activate high privacy settings, you need to provide these by default.

8. Data minimisation – only collect and keep the minimum amount of data about children that’s necessary for you to provide your service.

9. Data sharing – do not share children’s data unless you can demonstrate a compelling reason to do so. The word ‘compelling’ is significant here; it sets the bar for sharing children’s data very high.

10. Geolocation – any geolocation privacy settings should be switched off by default. Children should have to actively change the default setting to allow their geolocation to be used (unless you can demonstrate a compelling reason for this to be switched on).

11. Parental controls – make it clear to children if parental controls are in place and if children are being tracked or monitored by their parents.

12. Profiling – options which use profiling should be switched off by default (again, unless you can demonstrate a compelling reason for these to be switched on). This doesn’t mean profiling is banned, but if it is conducted, measures should be in place to protect children from harmful effects. If profiling is part of the service you provide, you need to be sure this is completely necessary.

13. Nudge techniques – don’t use techniques which lead or encourage children to activate options which mean they give you more personal information, or turn off privacy protections.

14. Connected toys and devices – you need to conform with the code if you sell toys or devices (such as talking teddy bears, fitness bands or home-hub interactive speakers) which collect personal data and transmit via a network connection. If your electronic toys or devices do not connect to the internet and only store data in the device itself, the code does not apply.

15. Online tools – it must be easy for children to exercise their privacy rights and report concerns. The right to erasure is particularly relevant when it comes to children. Mechanisms used to help children exercise their rights should be age appropriate, prominent and easy to use.

The ICO investigations into TikTok, Imgur and Reddit should be seen as a direction of travel for us all. Going forward, they signal how the regulator intends to treat compliance around children’s online privacy, welfare and safeguarding.

If your service uses children’s data even tangentially, it’s worth re-examining the Code and considering how it might impact your business.

Right to erasure in the spotlight: how to manage requests

March 2025

10 tips to tackle erasure requests

The European Data Protection Board (EDPB) has announced a year-long focus on the right to erasure. Data Protection Authorities across Europe are taking part in this Coordinated Enforcement Framework initiative and will be contacting a number of organisations from different sectors, either launching formal investigations or undertaking fact-finding exercises.

The EDPB chose to focus on the right to erasure as it’s one of the most frequently exercised GDPR rights and one DPAs frequently receive complaints about. In my work I find organisations are not always handling these requests appropriately and often don’t have clear and comprehensive procedures in place.

While this is not a UK specific initiative, this right can raise a number of questions wherever your organisation is located. When can we refuse? What data should we erase? And, on a technical level, how do we make sure everything that needs to be erased is actually destroyed, especially when the data is held on multiple systems?

It can raise complex challenges. Add to this the tight timeframe to action individual requests and the dreaded bulk requests from third parties, and it can turn into a bit of a minefield. We’ve got some tips to help you navigate the mines. But first, a little refresher on what the right to erasure means.

What is the right to erasure?

As the name suggests, a person has the right to request their personal data is erased from your systems if you no longer have a compelling lawful reason to keep it. This applies to ALL systems, back-ups and even data held in the cloud.

You may hear it referred to as the ‘Right to be Forgotten’. This stems from a decision in 2014 by the Court of Justice of the EU which recognised the right of EU citizens to request the removal of links to personal information on search engines. GDPR took this ruling a step further and enshrined a broader right into EU law, taking it beyond the context of publicly available personal information. Under the UK GDPR the right remains the same as its EU counterpart.

Crucially, the right to erasure is not an absolute right. Organisations may have a clear justification for denying a request either in part or in full.

When does the right to erasure apply?

You need to fulfil a person’s request for erasure in the following circumstances:

It’s no longer necessary for your organisation to retain the personal data for the purposes for which it was collected;
They gave you their consent to use their personal data for a specific purpose(s) and they have now withdrawn their consent;
You’re relying on legitimate interests as your lawful basis to handle their data, they object to this, and you have no compelling and overriding legitimate interest to continue to hold it;
You’re fulfilling a legal ruling or legal obligation to erase the data;
You’re processing a child’s data to provide information services (i.e. online services) and an appropriate party is making the request, be this a parent or guardian, or the child themselves if they are of a competent age to decide for themselves; or
You’re handling their data unlawfully.

The last point, a general ‘catch-all’, is a tricky one to balance, as there may be many reasons why personal data could be processed unlawfully. For example, the handling of personal data might be considered unlawful if it’s inaccurate, or even if necessary information about your processing activities was not provided in a privacy notice.

When can an erasure request be refused?

The law specifically tells us the right to erasure will not apply when you’re holding personal data for the following reasons:

to exercise the right of freedom of expression and information;
to comply with a legal obligation;
for the establishment or defence of legal claims;
to perform a task carried out in the public interest or when exercising an organisation’s official authority;
for public interest in the area of public health;
for archiving purposes in the public interest, scientific or historical research or statistical purposes (where erasure would make this impossible or seriously impair your objectives).

Under UK GDPR and the Data Protection Act 2018 there are two specific circumstances where the right to erasure doesn’t apply to special category data. Further information about these exemptions can be found in the ICO erasure guidance.

It’s also important to consider whether you have a contract in place with the individual which necessitates the continued processing of their data. There may also be grounds for refusing a request where you can justify that it’s manifestly unfounded or excessive.

There are many variables at play and each request needs to be assessed on a case-by-case basis. This is where the devil really is in the detail. In more complex cases you’ll need to consider the potential fallout should you delete personal data and subsequently discover you really needed to keep it. If you have a robust justification for needing to keep personal data, then you should keep it and document the reason(s) for your decision. This highlights the requirement for accurate record keeping, not only for erasure requests but for all privacy rights requests.

If you refuse to comply with a request (either in part or in full), you must explain why and tell the individual they have the right to raise a complaint with the UK’s Information Commissioner’s Office, or other relevant Data Protection Authority.

10-point checklist for handling erasure requests

1. Awareness

An individual can request their personal data is erased either in writing or verbally. They might make this request to anyone in your organisation. So, everyone in your organisation needs to know how to recognise this type of request, what to do if they receive one, and who to direct it to. Awareness campaigns, training and easy-to-understand policies and guides all play their part in getting the message across to all staff.

2. Identity verification

You clearly don’t want to delete someone’s details unless you are absolutely sure they are who they say they are. Sometimes this will be obvious, but in other circumstances you’ll need to ask for verification of identity. However, if the deletion has no negative impact on the individual – for example, they are only on your marketing list – asking for proof of identity is likely to be a disproportionate step.

When asking for proof of identity only ask for the minimum amount of information necessary to confirm identity. Don’t accumulate additional personal information such as copies of passports or driving licences, unless it’s truly justified, and remember to destroy these too!

If a request is received via another organisation, make sure the third party genuinely has the authority to act on behalf of the individual in question. The responsibility lies with the third party to provide any necessary evidence to prove this.

3. Technical measures

Your customers might think deleting their data is as simple as clicking a button. If only it were that easy!

It can be difficult to locate, identify, assess and properly destroy data – especially if it’s held on many different systems. You might hold records in emails, on backed-up systems, in the cloud… all must be deleted.

Make sure your systems, applications and databases allow easy identification and deletion of individuals. You may also need to assess the implications of deletion; it can impact on how different software works.

This is where the concept of Data Protection by Design really supports businesses. If from the outset of any new project or onboarding of new technology systems you factor in how to successfully manage all individual privacy rights, it will make life much easier in the long run.

It’s worth reiterating – the right to erasure extends to deleting data from backups. However, the ICO recognises the inherent difficulties here and says, “the key issue is to put the backup data ‘beyond use’, even if it cannot be immediately overwritten.”

4. Timeline

You don’t have long to comply with erasure requests, so keeping track of time is crucial. The request must be actioned ‘without undue delay,’ and in any case within one calendar month of receiving it. You may be able to extend this by up to two months if it’s particularly complex. If you need to extend, make sure you tell the individual before the first month is up, giving them clear reasons for the delay.
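
If you log requests in a tracking system, it can help to compute the deadline explicitly rather than counting by hand – ‘one calendar month’ isn’t a fixed number of days. Here’s a minimal sketch, assuming the python-dateutil package and that your log records the date each request was received:

```python
# Sketch of erasure-request deadline tracking.
# Assumes the python-dateutil package; "one calendar month" and the
# optional two-month extension follow the GDPR/UK GDPR timescales
# described above.
from datetime import date
from dateutil.relativedelta import relativedelta

def erasure_deadline(received: date, extended: bool = False) -> date:
    """Return the response deadline for an erasure request.

    One calendar month from receipt by default; up to two further
    months if the request is complex and the individual was told
    about the extension before the first month expired.
    """
    months = 3 if extended else 1
    return received + relativedelta(months=months)

# relativedelta clamps to the last day of a shorter month:
print(erasure_deadline(date(2025, 3, 31)))                 # 2025-04-30
print(erasure_deadline(date(2025, 3, 31), extended=True))  # 2025-06-30
```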

5. Who else holds their data?

The right to erasure doesn’t just apply to the records your organisation holds. You’re also expected to inform both your suppliers (processors) and any other controllers you have shared the data with.

Having a clear understanding of all your suppliers and other organisations you share personal data with, such as in your Record of Processing Activities, means you can efficiently contact them and inform them of erasure requests. You don’t have to do this if it would prove impossible or involves disproportionate effort, but you may need to be able to justify this is genuinely the case.

6. Public domain data

The right to erasure also applies to personal data which has been made public in an online environment (‘the right to be forgotten’). So take note if you publish personal data, or pass it on for others to publish.

You need to be ready to take reasonable steps to inform other organisations who are handling the personal data; asking them to erase links to, copies of, or replication of the data. What’s ‘reasonable’ is another judgement call, and the expectation scales with size; the bigger your organisation and the more resources you have, the more you’ll be expected to do.

7. Children’s data erasure rights

Children have special protection under data protection law, and the right to erasure is particularly relevant when a child has given their consent (or their parent/guardian has) and at a later stage (even when they’re an adult) they want their personal information removed, especially if it’s available on the internet. Baking in the ability to delete children’s information from the start is crucial.

8. Exemptions

It’s helpful to have a clear checklist of the exemptions which might apply and be relevant for your organisation. They don’t all apply in the same way, so be sure to examine each exemption on a case-by-case basis. The ICO exemptions guide is a good starting point, and it’s likely you’ll also need to reference the Data Protection Act 2018.

9. Maintain an erasure log

How do we delete someone, but also prove we have done it? Feels paradoxical, doesn’t it? However, organisations are required to keep a log of erasure requests, actions taken and justifications for these, to demonstrate compliance. The key is recording only the minimum amount of information necessary to meet this obligation, and keeping it secure. I know some organisations that have taken the step of pseudonymising this log for extra protection.
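
One way to square that circle is to log a keyed hash of the identifier rather than the identifier itself. The sketch below is purely illustrative – the HMAC scheme, field names and storage choices are assumptions for this example, not a prescribed method:

```python
# Sketch of a pseudonymised erasure log entry.
# HMAC with a separately held secret key means the log can prove a
# request was actioned without itself storing the erased person's
# identifier. The fields and scheme are illustrative assumptions.
import hashlib, hmac, json
from datetime import datetime, timezone

LOG_KEY = b"keep-this-secret-outside-the-log"  # e.g. in a secrets manager

def log_erasure(email: str, action: str, justification: str) -> str:
    pseudonym = hmac.new(LOG_KEY, email.lower().encode(), hashlib.sha256).hexdigest()
    entry = {
        "subject": pseudonym,            # no raw identifier in the log
        "action": action,                # e.g. "erased", "refused-in-part"
        "justification": justification,
        "actioned_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)

print(log_erasure("jane@example.com", "erased", "consent withdrawn"))
```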

10. Minimisation and retention

The right to erasure (and indeed other privacy rights, such as DSARs) can be less complex if you try to stick to two of the core data protection principles: data minimisation and storage limitation (data retention). Collecting ‘just enough’ data in the first place, using it in specified ways and only keeping it for as long as you need it, means there’s less data to trawl through when an erasure request comes in.

It sounds simple, and is less easy in practice, but it’s worth the effort. For useful tips, tools and templates see our Data Retention Guide.

It’s really worth developing a clear procedure for those responsible for handling erasure requests, covering the key considerations and how to actually fulfil requests in practice. With the right elements in place you’ll be in a much better position to handle the right to erasure effectively, within the statutory timescale and with less risk of mistakes.

Why Data Processing Agreements with suppliers matter

February 2025

Controller to processor contractual terms

Supply-chain data breaches have become all too common. If one of your suppliers (service providers) suffers a breach or other privacy violation, the data protection clauses in contractual terms could suddenly become very important.

GDPR (and its UK GDPR spin-off) sets out strict contractual requirements for when an organisation utilises the services of another organisation which will handle personal data on its behalf – including technology providers. These contractual terms are designed to protect all parties: the individuals whose data is processed, the (controller) organisation and their supplier (processor).

Establishing the relationship

First, it’s important to clearly establish what the relationship is between your organisation and the third party. Are both parties acting as separate or joint controllers? Or is the supplier acting as your processor?

This is not always easy to determine. The lines can become blurred when a third party in some situations is acting as a processor, but at other times as a controller – either using the data for their own purposes, or for jointly shared purposes (i.e. joint controllers). Controller or processor; what are we? 

Who’s accountable?

Prior to the GDPR being implemented in 2018, accountability rested almost entirely with the controller, who was liable if things went wrong. Post GDPR, the controller and processor can each be held accountable – jointly or solely – and liable for compensating any damages, material or non-material, suffered by individuals.

What obligations does each party have?

While the majority of data protection compliance obligations rest with controllers, processors also have their own accountabilities. These include, but are not limited to, being accountable for any sub-contractors (aka ‘sub-processors’) they appoint which process the controller’s data, for any international data transfers, and for keeping adequate records of their processing activities.

Processors are also required to assist the controller with their obligations – for example, in the event of a data breach, when handling privacy rights requests and when conducting Data Protection Impact Assessments.

What Controller-Processor contracts need to cover

The requirements an organisation needs to meet when utilising the services of processors are set out in Article 28, GDPR. This applies when suppliers are:

processing personal data on behalf of the organisation (e.g. processing the organisation’s employee records, customer data or another category of personal data);
acting solely under the instructions of your organisation; and
NOT using the personal data for their own business purposes.

Data protection legislation makes it very clear this arrangement must be covered by a binding agreement and there are specific provisions which must be included.

The specific data protection provisions are often set out separately to other provisions, in a Data Processing Agreement (DPA) or Addendum. Alternatively, they may be included within the main agreement. So if there’s no adequate data protection section in the main agreement, look for a DPA or Addendum! These should include the following aspects.

1. Technical and Organisational Measures (TOMs)

A processor needs to provide sufficient guarantees to implement appropriate technical and organisational measures to meet the requirements of GDPR and ensure the protection of individuals’ rights. Good practice would be to include a summary of key information security measures within the contractual terms.

It’s advisable as part of the due diligence process, prior to entering into a contract, to conduct data protection and information security assessments, where relevant, to gain suitable oversight and assurances from suppliers that personal data will be properly protected. In practice it’s unlikely to be feasible to carry out such due diligence on all suppliers, so you may wish to focus efforts on those which handle sensitive data or types of processing which involve higher levels of risk.

2. Appointment of sub-processors

A processor must not engage other processors (often referred to as ‘sub-processors’) to conduct processing of the controller’s personal data without the authorisation of the controller. The processor also needs to inform the controller of any intended material changes concerning additional/replacement sub processors. The controller must be given the opportunity to object to such changes.

A contract should therefore provide details of any sub-processors which will be used to handle the controller’s data. Updates should be provided when this list changes. In our experience, well-established suppliers will often just provide a link to view their sub-processors and may put the onus on the organisation they’re contracting with to check for updates. Something to watch out for.

Processors need to be aware they are accountable for the actions of their sub-processors. It is specifically stated that the same obligations as set out in the contract between the controller and processor shall be imposed on another (sub) processor by way of contract or other legal act.

Where a sub-processor fails to fulfil its data protection obligations, the processor up the chain may become accountable and liable to the controller for the performance of the sub-processor’s obligations. Processors are therefore responsible for conducting due diligence of any sub-processors they use.

This point really illustrates how important it is to clearly establish controller > processor > sub-processor supply chains and make sure the nature of the relationship between parties is clear in contractual terms.

3. Further contractual requirements

A contract (or other legal act) between a controller and processor must set out the following:

Types of personal data the processor will be processing on behalf of the controller (e.g. name, email address, telephone number, bank account number, date of birth – including any special category data)
Categories of data subject (e.g. employees, patients, customers, students, minors)
Nature and purpose of processing activities
Duration of processing (i.e. the term of the contract)

Terms must also include:

a) Rights and duties of each party, e.g. the processor’s commitment not to use the controller’s personal data for any purpose other than providing the agreed service(s).

b) Instructions from the controller – The agreement should include details of what the processor is permitted to do with the personal data it is provided. In practice, this may be set out by the supplier (processor) and provided separately from the main agreement – particularly if there are multiple workstreams or project-based activities. But the controller should check and agree that these accurately represent their instructions, so the scope of data processing is clear.

c) International data transfers – If relevant, the agreement should include details and provisions for any transfer of personal data to a third country, whether this be to the processor itself or any sub-processors used. See our International Data Transfers Guide.

d) Duty of confidentiality – There must be a confidentiality clause which commits the processor to ensuring the people authorised to access the controller’s personal data have committed themselves to a duty of confidentiality or are under a statutory obligation of confidentiality.

e) Data subjects’ rights – The processor must commit to assist the controller, where applicable, with the fulfilment of data subjects’ rights, such as Subject Access Requests, the right to erasure, the right to object to processing, etc.

f) Assistance with controller’s compliance – the agreement should set out that, as and when required, the processor will assist the controller with:

Security of processing
Conducting Data Protection Impact Assessments (DPIA), should this be required
Prompt notification of any personal data breaches affecting controller data. Often the terms will stipulate the processor must inform the controller about any data breach affecting the controller’s personal data ‘without undue delay’ or will have a specific timeframe, for example, within 24/48 hours. This could become vital so that the controller can meet its reporting deadline of 72 hours.

g) Return or destruction of the data – the agreement should stipulate what happens to the controller’s personal data at the end of the contract term. The law states that, at the choice of the controller, all personal data must be returned or destroyed.

h) Audits and inspections – the agreement should set out that the processor agrees to make available all information necessary to demonstrate compliance with Article 28 and will allow for, and contribute towards audits, including inspections, by the controller or an authorised auditor.

What might be the consequences of getting contracts wrong?

It’s crucial contracts relating to data processing include all the appropriate terms to make sure that individuals are properly protected, and also to make sure the accountabilities and liabilities of each party are clearly agreed should a data breach or other violation of data protection law occur. As GDPR states:

‘Any person who has suffered material or non-material damage as a result of an infringement of this Regulation shall have the right to receive compensation from the controller or processor for the damage suffered.

Any controller involved in processing shall be liable for the damage caused by processing which infringes this Regulation. A processor shall be liable for the damage caused by processing only where it has not complied with obligations of this Regulation specifically directed to processors or where it has acted outside or contrary to lawful instructions of the controller.’ (Article 82)

What can we do when contractual terms are non-negotiable?

Size matters! Established suppliers such as big tech providers will often have their own standard Data Processing Agreements and offer little or no room for negotiation. It’s sometimes a case of ‘take it or leave it’. In this situation, organisations will need to take a balanced approach, weighing up the necessity of utilising the services against the contractual terms provided, and decide whether these are sufficient. Conversely, big established controllers will often be in a position to dictate their terms to smaller suppliers.

However, it really is wise to check Data Processing Agreements, or data protection clauses within a main contract. It’s worth clicking on the link to check the sub-processors a supplier uses. All of this will inform your decisions. If you don’t check, you may be unaware of some underlying risks.

Training your people in data protection: where to begin?

February 2025

In most organisations, your people are your greatest asset. However, from a compliance perspective, without proper training, knowledge and understanding, employees might make mistakes with potentially serious consequences. They could miss key legal requirements – whether this be compliance with Health & Safety, Employment, Data Protection or any other relevant laws.

Organisations need to give people relevant training and guidance, but when it comes to data protection, establishing the most effective approach can be tricky.

Legal requirements

There’s a requirement under GDPR for organisations to ‘implement appropriate technical and organisational measures’. The ‘organisational measures’ part is where employee awareness and training comes in, but the law doesn’t say how organisations should do this. Alongside this, organisations are required to meet GDPR accountability requirements.

Regulatory expectations

The UK’s Information Commissioner’s Office’s Accountability Framework provides helpful pointers on what a ‘good’ data protection training and awareness programme would look like. To summarise, the following are key components:

Appropriate training in data protection and information security for all employees
Refresher training on a regular basis
Data protection and information security in induction programme for new starters
More specialist training for specific employees where relevant
Ongoing exercises to keep raising awareness

One size doesn’t fit all

Each organisation needs to work out its own approach. Some organisations (and indeed some teams within organisations) will handle much more sensitive data than others. Some employees may have very limited contact with personal data in their roles. While some may need to know how to conduct a Data Protection Impact Assessment, others may need to understand the nuances of fulfilling Data Subject Access Requests. Some teams may need to understand more about supplier due diligence, controller to processor contracts and the rules for international data transfers, and so on.

How the core data protection principles and lawful bases are applied in practice will vary enormously for different business functions – from Marketing to Operations, from HR to a Contact Centre.

For example, marketers usually need to understand more about consent and legitimate interests, the right to opt-out, what the law says about profiling, and so on. They also need to be very familiar with marketing rules under different legislation, such as PECR.

HR teams, meanwhile, need to understand how data protection laws apply to recruitment and the many different tasks which take place for employment purposes, such as appraisals and development, health, sickness and absence data, diversity, employee communications, payroll… and so on.

What does good training look like?

‘All’ staff training

Often, to cover baseline training for all employees, organisations will look to use an outsourced provider’s data protection training module(s). Alternatively, they may customise an external solution or develop their own training content internally.

In my experience, the quality of outsourced training modules can vary enormously, so it pays to do your homework and find an effective solution which suits your organisation well.

Just be mindful; outsourced generic online training which is not customised is unlikely to be enough on its own. For example, it won’t tell your people how to internally report a suspected data breach, or who to forward privacy rights requests to. It won’t cover your own internal standards or policies. Additional internal materials will be needed – be these policies, procedures, guides, factsheets, short videos and so on.

It’s worth repeating; the law doesn’t tell us how we embed necessary knowledge and understanding. If you have certain roles where people’s handling of personal data is very limited, you may decide making them sit through an online training module really isn’t necessary. You could choose different methods to instil simple, relevant and important ‘dos and don’ts’.

More specialist training

Training is likely to be most effective if it’s bespoke or tailored to the needs of specific functions or teams and provides useful examples, such as user journeys or case studies, aligned to the different data protection requirements people need to consider for their own role.

However, this could become time consuming and costly, so a balance needs to be struck between the benefits and the time involved. It can help to think about where the biggest risks lie in your business, so you can focus your efforts on the key teams which have greater exposure to, and influence over, data risk.

Does training need to focus on the Sales & Marketing team, the HR team, customer-facing teams, development team, anyone else?

Data Subject Access Requests (DSARs) and other data rights are usually handled by nominated people, who are highly likely to need more specialist knowledge in how to handle them. But if your organisation has never had any privacy rights requests, this is unlikely to be a priority area.

Organisational culture

Ideally you’d want training to align with your organisation’s culture. Training doesn’t have to be provided in a specific format and there’s nothing to say you can’t be creative. Some organisations use gamification, bite-sized videos, ‘win a prize’ quizzes and so on. Try and include humour if you can; a joke just might make a key message hit home.

To sum up, making sure people have the right skills and knowledge for your business is one of the best ways to reduce the chance of data protection risks being overlooked. Prevention is usually better than cure!

Data Breaches: Assessing the level of risk

February 2025

The alarm goes off inside your organisation; you’re certain, or have a reasonable degree of certainty, a personal data breach has occurred. You’ve either contained the breach, or are in the process of doing so. You’ve established all the facts or are still gathering them.

Great stuff. You’re starting to manage the risk. Alongside this, there are two pressing issues to address under GDPR (and UK GDPR):

1. Do you need to report the breach to a Data Protection Authority?
(e.g. the UK’s Information Commissioner’s Office – ICO).
Reporting is required within 72 hours of becoming aware of a breach, and must be done unless the breach is unlikely to represent a ‘risk’ to individuals.

2. Do you need to notify affected individuals?
This is required without undue delay if the breach represents a ‘high risk’.

Data Protection Authorities don’t need to hear about every incident where there’s minimal risk to individuals. In fact, the ICO made it clear after GDPR was implemented that they had seen a degree of over-reporting. But there’s a balance to be struck; you don’t want to fail to report a data breach when you should have.

Each incident needs to be considered on a case-by-case basis, taking account of all relevant factors. No two incidents are likely to be the same (unless you failed to address something crucial the first time around!).

The key is balancing the severity of the potential impact on those affected with the likelihood of this occurring. For example, the impact could be quite severe, but highly unlikely to materialise, or conversely the impact could be relatively low, but highly likely.

What do data breach harms look like?

There could be a number of negative consequences for people affected, so you need to consider the harms and/or damage the breach might cause.

For example, it could result in any of the following: financial loss, identity theft, fraud, emotional distress, loss of confidentiality, discrimination, humiliation and reputational damage. Other harms could include material or physical damage, loss of control of personal data, social disadvantage or limitation of rights.

How to assess the potential harm from a data breach

In assessing the types of harm the breach may cause, it can be useful to answer questions such as:

Can individuals be identified easily?
Are people at increased risk of identity theft or fraud?
Could people suffer financially?
Could people’s reputation be damaged?
Is there a breach of confidentiality?
Are people at risk of physical harm?
Does the breach involve information relating to children or vulnerable adults?
Does the combination of data involved pose more of a risk?

The above is by no means an exhaustive list. The importance of certain questions will vary, depending on the nature of the incident, the personal data and individuals affected and indeed the nature of your organisation.

It’s good practice to use a risk matrix, with a scoring system of likelihood against severity, so you can evaluate the severity and likelihood of the harm identified. This helps answer the key questions of a) should we report to a Data Protection Authority? and b) should we notify affected individuals? Not only does a scoring system provide internal reassurance that a clear methodology is being used, it’s also useful evidence of your assessment should it ever be required.
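
To illustrate how a matrix can translate scores into those two decisions, here’s a minimal sketch. The 1-5 scales, thresholds and bands are assumptions you’d calibrate to your own methodology and risk appetite, not regulatory values:

```python
# Sketch of a breach risk matrix: severity x likelihood, scored 1-5 each.
# The thresholds below are illustrative assumptions, not ICO figures;
# calibrate them to your own methodology and document the reasoning.

def breach_risk(severity: int, likelihood: int) -> str:
    """Return a risk band from 1-5 severity and likelihood scores."""
    score = severity * likelihood  # 1..25
    if score >= 15:
        return "high risk"     # report to DPA AND notify individuals
    if score >= 6:
        return "risk"          # report to DPA within 72 hours
    return "low risk"          # document internally; no report required

# Severe impact (fraud possible) but the data was strongly encrypted,
# so the harm is unlikely to materialise:
print(breach_risk(severity=5, likelihood=1))  # low risk
# Modest impact but near-certain to occur:
print(breach_risk(severity=2, likelihood=5))  # risk
```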

The European Data Protection Board (EDPB) Guidelines on Personal Data Breach Notification (in section IV) provide helpful pointers on how to assess risk and high risk.

If your breach involves special category data or financial details, the risks may be more obvious and the decision to report the breach may be more clear-cut.

Assessments may need to be fluid, including regular ‘check-ins’ with colleagues as your understanding of the situation evolves and answers to your questions become known.

While your response to a data breach needs to be swift and effective, often you won’t know all the facts and will be unable to fully evaluate the risk posed within 72 hours. The first report to a Data Protection Authority can be just an initial report, followed up with more information as it becomes available. In some cases the risk rating of a breach might be downgraded or upgraded.

The key to success is having a robust data incident procedure, to help your data incident response team manage what can be multiple moving parts as effectively as possible. A procedure which includes a clear method of assessing the risk. Like many ‘emergencies’ in life, from a punctured tyre to a cut finger, being well prepared will prove invaluable.

AI: Risk and Regulation

February 2025

Artificial Intelligence is an epoch-defining opportunity. The biggest game-changer since the Internet. Governments, businesses and other organisations fear losing out, unless they embrace innovation and change. Yet the benefits of AI carry with them ethical challenges. There’s the risk of considerable harms to individuals, wider society and the environment. Organisations also have to navigate risks; reputational, commercial and regulatory.

To regulate, or not to regulate AI?

The AI regulatory landscape’s far from settled – in fact, it’s a new frontier. On the one hand, the first phase of the EU AI Act comes into effect – the world’s first comprehensive AI regulation. On the other, President Trump has ‘ripped up’ Joe Biden’s AI Executive Order of 2023. The new US administration wants to remove barriers it claims stifle innovation. All previous policies, directives, regulations and orders in relation to AI are under review. The focus is on making sure America is a global leader in AI technology.

In the UK, an EU-style regulation looks unlikely. For the time being, a ‘principles-based framework’ is favoured, for sector-specific regulators to interpret and apply. Specific legislation for those developing the most powerful AI models looks the most likely direction of travel.

John Edwards, the UK Information Commissioner penned a letter to the Prime Minister (in response to a request from Government to key regulators, to set out how they’ll support economic growth) in which he says; “regulatory uncertainty risks being a barrier to businesses investing and adopting transformative technology”. The Commissioner says his office will “develop rules for those developing and using AI products, to make it easier to innovate and invest responsibly”. Interestingly, the Commissioner supports the idea of a statutory Code of Practice for AI, saying this would give regulatory certainty to businesses wanting to invest in AI in the UK.

AI regulation has supporters and critics in equal measure. The EU’s strict approach has led to fears Europe will languish behind the rest of the world. Others argue it’s crucial to enforce an ethical and responsible approach to AI – in the absence of regulation, the argument goes, AI could be more malevolent than benign.

The divisions were crystal clear at a high-level AI Summit in Paris on 11 February, as the US and UK refused to sign President Macron’s declaration calling for open and ethical AI.

Could the UK find a sweet spot, positioning its approach between its cautious European neighbours on one side and the ‘Wild West’ on the other?

EU AI Act – first phase now applicable

The AI Act was implemented in August 2024 and is coming into effect in stages. On 2 February, rules relating to AI literacy requirements, the definition of an AI system and a limited number of prohibited AI use cases (which the EU determines pose an unacceptable risk) came into effect.

Like GDPR, the AI Act has extra-territorial scope, meaning it applies to organisations based outside the EU, where they place AI products on the market or put them into service in the EU, and/or where outputs produced by AI applications are used by people within the EU. We’ve already seen how EU regulation has led to organisations like Meta and Google excluding the EU from use of their new AI products.

In brief the prohibited uses under the AI Act are:

Facial recognition – the use of AI systems which create or expand facial recognition databases through the untargeted scraping of images from the internet or CCTV footage.

Social scoring – AI systems which evaluate and score people on their behaviour or characteristics, where this might lead to detrimental or unfavourable treatment in an unrelated context, or where it could lead to detrimental or unfavourable treatment which is unjustified or disproportionate.

Predictive criminal risk assessments based on profiling.

Subliminal manipulation or other deceptive techniques which distort people’s behaviour and cause them to take decisions they wouldn’t have otherwise taken which are likely to cause significant harm.

Exploitation of vulnerabilities – such as someone’s age, disability or social/economic disadvantage.

Inferring emotions in the workplace or educational setting.

Biometric categorisation which infers special category data.

Real-time remote biometric identification for law enforcement purposes.

The European Commission has published guidelines alongside these prohibited practices coming into effect: Guidelines on Prohibited Practices and Guidelines on the Definition of an AI System.

EU AI Act – what comes next?

The rules are complex and organisations which fall within the scope of the AI Act will need to comply with tiered requirements dependent on risk, which at a very top level involve:

For AI systems classified as high-risk there will be core requirements, such as mandatory Fundamental Rights Impact Assessments (FRIA), registration on a public EU database, data governance and transparency requirements, human oversight and more.
General-purpose AI (GPAI) systems, and the GPAI models they are based on, will be required to adhere to transparency requirements, including technical documentation, compliance with EU copyright law and detailed summaries about content used for training AI systems.
For Generative AI applications, people will have to be informed when they are interacting with AI, for example a Chatbot.

It’s worth bearing in mind an AI system could, for example, be both high-risk and GPAI.

Managing AI use

While compliance will be a key factor for many organisations, protecting the organisation’s reputation may be an even bigger concern. So, how do we ensure AI is used in an efficient, ethical and responsible way?

Organisations already utilising AI are likely to have embedded robust governance, enabling smart investment and innovation to take place within a clear framework to mitigate potential pitfalls. For others, here are some points to consider:

Senior leadership oversight
Establish your organisation’s approach to AI; your strategy and risk-appetite.

Key stakeholders
Identify key individuals and/or departments likely to play a role in governing how AI is developed, customised and/or used.

Roles and responsibilities
Determine who is responsible and accountable for each AI system.

Knowledge of AI use
Understand and record what AI systems are already in use across the business, and why.

Policies and procedures
Develop appropriate policies and procedures, or update existing policies so people understand internal standards and relevant regulatory requirements.

Training, awareness and AI literacy
Provide appropriate training and consider whether this should be role specific. Remember, a requirement for providers and deployers of AI systems to make sure their staff have sufficient levels of AI literacy is already in effect under the EU AI Act.

Risk assessments
Develop a clear process for assessing and mitigating potential AI risks. While a Data Protection Impact Assessment (DPIA) may be required, this is unlikely to be sufficient on its own.

Supplier management
Embed appropriate due diligence processes when looking to adopt (and indeed customise) third-party AI SaaS solutions.
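
To make the ‘knowledge of AI use’ and ‘risk assessments’ steps above more concrete, here’s a minimal Python sketch of a register entry and a first-pass screening helper. The field names, flags and example values are assumptions for illustration, not a prescribed format:

```python
# Illustrative sketch only: a minimal AI system register entry plus a
# first-pass screening helper. Field names and screening questions are
# assumptions; tailor them to your own governance framework.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                    # e.g. "CV screening assistant"
    purpose: str                 # why the business uses it
    owner: str                   # who is responsible and accountable
    supplier: str                # third-party provider, if any ("" if none)
    personal_data_used: bool
    special_category_data: bool
    automated_decisions: bool    # decisions with significant effect

def screening_flags(record: AISystemRecord) -> list[str]:
    """Flag follow-up actions for an AI system register entry."""
    flags = []
    if record.personal_data_used:
        flags.append("Assess whether a DPIA is required")
    if record.special_category_data:
        flags.append("Identify an Article 9 condition")
    if record.automated_decisions:
        flags.append("Review automated decision-making safeguards")
    if record.supplier:
        flags.append("Run supplier due diligence")
    return flags

entry = AISystemRecord(
    name="CV screening assistant", purpose="Shortlisting applicants",
    owner="Head of People", supplier="ExampleVendor Ltd",
    personal_data_used=True, special_category_data=False,
    automated_decisions=True,
)
print(screening_flags(entry))
```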

AI security risks

Appropriate security measures are of critical importance. Vulnerabilities in AI models can be exploited, input data can be manipulated, malicious attacks can target training datasets, and unauthorised parties may access sensitive, personal and/or confidential data. Data can also be leaked via third-party AI solutions. We need to be mindful, too, of how online criminals exploit AI to create ever more sophisticated and advanced malware and to automate phishing attacks. On this point, the UK Government has recently published a voluntary AI cyber security code of practice.

AI is here. It’s genuinely transformative and far-reaching; organisations unable or unwilling to embrace change – and properly manage the risks – will be left behind. To take the fullest advantage of AI’s possibilities, agile and effective governance is key.

Data protection and employment records

February 2025

How to manage personal data relating to employees

Data protection compliance efforts are often focused on commercial or public-facing aspects of an organisation’s activities. Making sure core data protection principles and requirements are met when collecting and handling the data of customers, members, supporters, students, patients, and so on. However, the personal data held relating to employees and job applicants doesn’t always get the same level of attention.

Handling employees’ personal information is an essential part of running a business, and organisations need to be aware and mindful of their obligations under the UK GDPR and Data Protection Act 2018. As well as, of course, obligations under employment law, health and safety law, and any other relevant legislation or sector specific standards.

A personal data breach could affect employee records. Employees can raise complaints about an organisation’s employment activities, and employees (or former employees) can submit Data Subject Access Requests, which can sometimes be complex to respond to. All of these can expose gaps in compliance with data protection laws. In some organisations, employee records may represent the highest privacy risk.

Employee records are likely to include special category data and more sensitive information such as:

DE&I information (such as information relating to race, ethnicity, religion, gender, age, sexual orientation, etc)
disabilities and/or medical conditions
health and safety records
absence and sickness records
performance reviews and development plans
disciplinary and grievance records
occupational health referrals
financial information required for payroll

Alongside the core HR records, employees may be present on other records, such as CCTV footage and any tracking of computer or internet use, all of which need careful consideration from a data protection standpoint. Also see monitoring employees.

In my experience, while the security of employee records may often be taken into consideration, other core data protection principles might sometimes be overlooked, such as:

Lawfulness

It’s necessary to have a lawful basis for each processing activity. Many activities will be necessary to comply with a legal obligation or to perform the contract of employment with the individual. However, the contract may not cover every activity for which an organisation uses employee data. Clearly determine where legal obligation or contract is the appropriate basis for a given activity, and confirm where you may instead need to rely on other lawful bases, such as legitimate interests or consent.

Special category data

To handle medical information, trade union membership, diversity, equity and inclusion (DE&I) data, or any other special category data, it’s necessary to determine a lawful basis plus a separate condition for processing under Article 9. Also see Handling special category data.

Data minimisation

The principle of data minimisation requires employers to limit the personal information they hold about employees to what is necessary for their activities, and not to hold additional information ‘just in case’ it might be needed.

Data retention

Employees’ data should not be kept for longer than necessary. There are statutory retention requirements for employment records in the UK (and many other jurisdictions) which set out how long certain records must be kept, but these laws may not cover every purpose you have for employment data. Once retention periods are set, they need to be implemented in practice: regularly review the data you hold for specific purposes and securely destroy records you no longer need. These may be electronic records on IT systems or perhaps physical HR records languishing in boxes in a storeroom! You may wish to refer to our Data Retention Guidance.
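
To make the review step concrete, here’s a minimal Python sketch of a retention check. The record types and retention periods below are illustrative assumptions; the statutory periods that actually apply to your records may differ:

```python
# Illustrative sketch only: flag employee records whose retention period
# has expired so they can be reviewed for secure destruction. The periods
# below are examples, not legal advice; check the statutory requirements
# that apply to your records.
from datetime import date, timedelta

RETENTION_YEARS = {              # example values only
    "payroll": 6,
    "sickness_absence": 3,
    "unsuccessful_application": 1,
}

def due_for_destruction(record_type: str, end_date: date,
                        today: date | None = None) -> bool:
    """True if the record's retention period has expired."""
    today = today or date.today()
    expiry = end_date + timedelta(days=365 * RETENTION_YEARS[record_type])
    return expiry < today

# e.g. payroll record for an employee who left in April 2017:
print(due_for_destruction("payroll", date(2017, 4, 30)))
```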

Transparency

Employees are entitled to know the ways in which their employer uses their personal data, the lawful bases, the retention periods and so on. The requirements for privacy notices apply to employees just as they do to external audiences. The necessary privacy information may be provided in an Employee Privacy Notice or via an Employee Handbook.

Risk assessments

Data Protection Impact Assessments are mandatory in certain circumstances, and in other cases they can still be helpful to conduct. Organisations mustn’t overlook DPIA requirements in relation to employee activities, for example any monitoring of employees which might be considered intrusive, or the use of biometric data for identification purposes.
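
As a simple illustration, a first-pass screening check for employee processing might look like the sketch below. The indicator questions are simplified assumptions and don’t replace the ICO’s full screening criteria:

```python
# Illustrative sketch only: a first-pass DPIA screening check for
# employee processing. The questions are simplified assumptions based
# on common high-risk indicators.

def employee_dpia_likely_required(intrusive_monitoring: bool,
                                  biometric_identification: bool,
                                  large_scale_special_category: bool) -> bool:
    """True if any common high-risk indicator applies."""
    return any([intrusive_monitoring,
                biometric_identification,
                large_scale_special_category])

# e.g. introducing fingerprint entry to the office:
print(employee_dpia_likely_required(False, True, False))  # True
```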

Record keeping

Appropriate measures need to be in place to make sure employee records are handled lawfully, fairly and transparently, and in line with the other core data protection principles. It’s difficult to do this without mapping employee data and maintaining clear records of the purposes you use it for, the lawful bases, special category conditions and so on, i.e. your Record of Processing Activities (RoPA). The absence of adequate records will make creating a comprehensive privacy notice rather challenging.
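
As a rough illustration of what one RoPA entry might capture for an employee processing activity, here’s a minimal sketch. The field names and example values are assumptions, not a prescribed format:

```python
# Illustrative sketch only: a minimal RoPA-style entry for one employee
# processing activity. Field names and values are assumptions; align
# your records with the Article 30 requirements.
ropa_entry = {
    "activity": "Sickness absence management",
    "data_subjects": "Employees",
    "data_categories": ["absence dates", "fit notes",
                        "occupational health reports"],
    "lawful_basis": "Legal obligation / employment contract",
    "special_category_condition": "Employment law (Article 9(2)(b))",
    "retention": "Per retention schedule",
    "recipients": ["Payroll provider", "Occupational health service"],
}
print(ropa_entry["activity"], "->", ropa_entry["lawful_basis"])
```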

Training

Whilst we’re on the topic of employees, let’s also give a mention to training. All employees handling personal data should receive appropriate information security and data protection training. It’s likely those in HR / People teams handling employee data on a daily basis will benefit from specialist training beyond the generic online training modules aimed at all staff.

To help you navigate data protection obligations, the ICO has published new guidance on handling employee records, which provides more detail on legal requirements and regulatory expectations.

Finally, don’t forget data protection compliance efforts need to extend beyond employees to job applicants, contractors, volunteers and others who perform work-related duties for the organisation.