Is your marketing profiling lawful, fair and transparent?

October 2022

ICO fines catalogue retailer £1.35 million for ‘invisible processing’

Many companies want to know their customers better. This is not a bad thing. Information gathered about people is regularly used for a variety of activities, including improving products and services, personalising experiences and making sure marketing campaigns are better targeted.

However, the significant fine dished out to catalogue retailer Easylife highlights why companies need to be transparent about what they do, have a robust lawful basis, be careful about making assumptions about people and take special care with special category data.

It also shows how profiling is not limited to the realms of online tracking and the adtech ecosystem; it can be a much simpler activity.

What did the catalogue retailer do?

Easylife had what were termed ‘trigger products’ in its Health Club catalogue. If a customer purchased a certain product, it triggered a marketing call to the individual to try and sell other related products. This was done using a third-party call centre.

Using previous transactions to tailor future marketing is not an unusual marketing tactic, often referred to as ‘NBA – Next Best Action’. The key point in this case is that Easylife inferred customers were likely to have certain health conditions based on their purchase of trigger products.

For example, if a customer bought a product which could be associated with arthritis, this triggered a telemarketing call to try and sell other products popular with arthritis sufferers – such as glucosamine and bio-magnetic joint patches.

Data relating to medical conditions, whether provided by the individual or inferred from other data, is classified as special category data under data protection law and handling this type of data requires special conditions to be met.

The ICO’s ruling

To summarise the ICO’s enforcement notice, Easylife was found to have failed to:

  • have a valid lawful basis for processing
  • meet the need to have an additional condition for processing special category data
  • be transparent about its profiling of customers

It was found to have conducted ‘invisible processing’ of 145,000 customers.

There were no complaints raised about this activity; it only came to light due to a separate ICO investigation into contraventions of the telemarketing rules. The ICO says it wasn’t surprised no one had complained, as people just wouldn’t have been aware this profiling was happening, due to the lack of transparency.

It just goes to show ICO fines don’t always arise as a result of individuals raising complaints.

Key findings

Easylife argued it was just processing transactional data. The ICO ruled that when this transactional data was used to influence its telemarketing decisions, it constituted profiling.

The ICO said that while data on customer purchases constituted personal data, once it was used to make inferences about health conditions it became the processing of special category data. The ICO said this was regardless of the statistical confidence Easylife had in the profiling it had conducted.

Easylife claimed it was relying on the lawful basis of Legitimate Interests. However, the Legitimate Interests Assessment (LIA) the company provided to the ICO during its investigation actually related to a previous activity, in which health related data wasn’t used.

When processing special category data organisations need to make sure they not only have a lawful basis, but also comply with Article 9 of UK GDPR.

The ICO advised that the appropriate basis for handling this special category data was the explicit consent of customers. In other words, legitimate interests was not an appropriate basis to use.

Easylife was found to have no lawful basis, nor a condition under Article 9.

It was ruled there was a lack of transparency; customers hadn’t been informed profiling was taking place. Easylife’s privacy notice was found to have a ‘small section’ which stated how personal data would be used. This included the following:

  • Keep you informed about the status of your orders and provide updates or information about associated products or additional products, services, or promotions that might be of interest to you.
  • Improve and develop the products or services we offer by analysing your information.

This was ruled inadequate and Easylife was found to have failed to give enough information about the purposes for processing and the lawful bases for processing.

The ICO’s enforcement notice points out it would have expected a Data Protection Impact Assessment to have been conducted for the profiling of special category data. This had not been done.

The Data Processing Agreement between Easylife and its processor, the third-party call centre, was also scrutinised. While it covered key requirements such as confidentiality, security, sub-contracting and termination, it failed to indicate the types of personal data being handled.

Commenting on the fine, John Edwards, UK Information Commissioner, said:

“Easylife was making assumptions about people’s medical condition based on their purchase history without their knowledge, and then peddled them a health product – that is not allowed.

The invisible use of people’s data meant that people could not understand how their data was being used and, ultimately, were not able to exercise their privacy and data protection rights. The lack of transparency, combined with the intrusive nature of the profiling, has resulted in a serious breach of people’s information rights.”

Alongside the £1.35 million fine, Easylife’s been fined a further £130,000 under PECR for making intrusive telemarketing calls to individuals registered on the Telephone Preference Service. Currently the maximum fine for contravening the marketing rules under PECR is £500,000, much lower than potential fines under DPA 2018/UK GDPR.

Update March 2023: The ICO announced a reduction in the GDPR fine from £1.35 million to £250,000.

6 key takeaways

1. If you are profiling your customers, try to make sure this is based on facts. Making the type of assumptions Easylife was making will always carry risks.

2. Be sure to be transparent about your activities. This doesn’t mean you have to use the precise term ‘profiling’ in your privacy notice, but the ways in which you use personal information should be clear.

3. Make sure you clearly state the lawful bases you rely upon in your privacy notice. It can be helpful, and clearer for readers, to link lawful bases to specific business activities.

4. If you’re processing special category data, collected directly or inferred from other data, make sure you can meet a condition under Article 9. For marketing activities the only option is explicit consent.

5. If you’re conducting profiling using special category data, carry out a DPIA.

6. Always remember the marketing rules under PECR for whatever marketing channel you’re using. For telemarketing, if you don’t have the consent of individuals, be sure to screen lists against the TPS.

Are bias and discrimination in AI a problem?

September 2022

Artificial Intelligence - good governance will need to catch up with the technology

The AI landscape

We hear about the deployment and use of AI in many settings. The types and frequency of use are only going to increase. Major uses include:

  • Cybersecurity analysis to identify anomalies in IT structures
  • Automating repetitive maintenance tasks and guiding technical support teams
  • Ad tech to profile and segment audiences for advertising targeting and optimise advertising buying and placement
  • Reviewing job applications to identify the best-qualified candidates in HR
  • Helping research scientists find patterns in health data to identify new cures for cancer
  • Predicting equipment failure in manufacturing
  • Detecting fraud in banking by analysing irregular patterns in transactions.
  • TV and movie recommendations for Netflix users
  • Inventory optimisation and demand forecasting in retail & transportation
  • Programming cars to self-drive

Overall, the different forms of AI will serve to improve our lives, but from a privacy point of view there is a danger that the governance around AI projects is lagging behind the evolving technology solutions.

In that context, tucked away in its three-year plan, published in July, the ICO highlighted that AI-driven discrimination might become more of a concern. In particular, the ICO is planning to investigate concerns about the use of algorithms to sift recruitment applications.

Why recruitment applications?

AI is used widely in the recruitment industry. A Gartner report suggested that all recruitment agencies used it for some of their candidate sifting. The CEO of the US recruitment website ZipRecruiter is quoted as saying that three-quarters of submitted CVs are read by algorithms. There is plenty of scope for data misuse, hence the ICO’s interest.

The Amazon recruitment tool – an example of bias/discrimination

The ICO is justified in its concerns around recruitment AI. Famously, Amazon developed its own tool to sift through applications for developer roles. The model was based on 10 years of recruitment data for an employee pool that was largely male. As a result, the model discriminated against women and reinforced the gender imbalance by filtering out applications from women.

What is AI?

AI can be defined as: 

“using a non-human system to learn from experience and imitate human intelligent behaviour”

The reality is that most “AI” applications are machine learning. That is, models are trained to calculate outcomes using data collected in the past. Pure AI is technology designed to simulate human behaviour. For simplicity, let’s call machine learning AI.

Decisions made using AI are either fully automated or with a “human in the loop”. The latter can safeguard individuals against biased outcomes by providing a sense check of outcomes. 

In the context of data protection, it is becoming increasingly important that those impacted by AI decisions should be able to hold someone to account.

You might hear that all the information is in a “black box” and that how the algorithm works cannot be explained. This excuse isn’t good enough – it should be possible to explain how a model has been trained and risk assess that activity. 

How is AI used? 

AI can be used to make decisions:

1. A prediction – e.g. you will be good at a job

2. A recommendation – e.g. you will like this news article

3. A classification – e.g. this email is spam (a toy sketch of this follows below).
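
To make the ‘learning from past data’ point concrete, here’s a minimal, purely illustrative sketch of the classification case in Python using scikit-learn. The example emails and labels are invented; a real model would be trained on far more data.

```python
# Toy illustration: a model "learns" from past, labelled examples,
# then makes a decision (a classification) about a new, unseen email.
# The training emails and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting agenda for Monday", "please review the attached report",
]
labels = ["spam", "spam", "not spam", "not spam"]  # outcomes observed in the past

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(past_emails, labels)  # train on historical data

print(model.predict(["claim your free prize"])[0])  # most likely "spam"
```

The same pattern underpins predictions and recommendations: the model only knows what the historical data shows it, which is exactly where bias can creep in.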

The benefits of AI

AI is generally a force for good:

1. It can automate a process and save time

2. It can optimise the efficiency of a process or function (often seen in factories or processing plants)

3. It can enhance the ability of individuals – often by speeding up processes

Where do data protection and AI intersect?

An explanation of AI-assisted decisions is required where:

1. The decision is made without any human involvement, and

2. It produces legal or similarly significant effects on an individual – e.g. not getting a job.

Individuals should expect an explanation from those accountable for an AI system. Anyone developing AI models using personal data should ensure that appropriate technical and organisational measures are in place to integrate safeguards into processing. 

What data is in scope?

  • Personal data used to train a model
  • Personal data used to test a model
  • On deployment, personal data used or created to make decisions about individuals

If no personal data is included in a model, AI is not in scope for data protection. 

How to approach an AI project?

Any new AI processing with personal data would normally require a Data Protection Impact Assessment (DPIA). The DPIA is useful because it provides a vehicle for documenting the processing, identifying the privacy risks and identifying the measures or controls required to protect individuals. It is also an excellent means of socialising the understanding of AI processing across an organisation.

Introducing a clear governance framework around any AI projects will increase project visibility and reduce the risks of bias and discrimination. 

Where does bias/discrimination creep in?

The Equality Act 2010 prohibits behaviour that discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:

  • Age
  • Disability
  • Gender reassignment
  • Marriage and civil partnership
  • Pregnancy and maternity
  • Race
  • Religion or belief
  • Sex
  • Sexual orientation. 

When using an AI system, you need to ensure, and be able to show, that your decision-making process does not result in discrimination.

Our Top 10 Tips

  1. Ask how the algorithm has been trained – the “black box” excuse isn’t good enough
  2. Review the training inputs to identify possible bias with the use of historic data
  3. Test the outcomes of the model – this seems obvious, but it’s not done regularly enough (see the sketch after this list)
  4. Consider the extent to which the past will predict the future when training a model – recruitment models will have an inherent bias if only based on past successes
  5. Consider how to compensate for bias built into the training – a possible form of positive discrimination
  6. Have a person review the outcomes of the model if it is challenged, and give that person the authority to challenge them
  7. Incorporate your AI projects into your data protection governance structure
  8. Ensure that you’ve done a full DPIA identifying risks and mitigations
  9. Ensure that you’ve documented the processes and decisions to incorporate into your overall accountability framework
  10. Consider how you will address individual rights – can you easily identify where personal data has been used or has it been fully anonymised? 
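
As an illustration of tips 2 and 3, here’s a minimal sketch, using invented figures, of one common way to test a model’s outcomes for bias: compare selection rates across a protected characteristic and flag large disparities. The ‘four-fifths’ rule used here is a rough benchmark borrowed from recruitment practice, not a legal threshold.

```python
# Hypothetical model outcomes: 1 = shortlisted by the model, 0 = rejected.
# The figures are invented purely to illustrate the check.
outcomes = {
    "male":   [1, 1, 0, 1, 1, 0, 1, 1],   # 6 of 8 shortlisted
    "female": [1, 0, 0, 0, 1, 0, 0, 1],   # 3 of 8 shortlisted
}

rates = {group: sum(results) / len(results) for group, results in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best                       # compare to the most favoured group
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, {ratio:.2f} of best rate -> {flag}")
```

In practice you would run this kind of check across each protected characteristic you hold (or can lawfully use for testing purposes) and investigate any group flagged for review.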

In summary

AI is complex and fast-changing. Arguably, the governance around the use of personal data is having to catch up with the technology. Even where these models seem mysterious and difficult to understand, a lack of explanation for how they work is not acceptable.

In the future, clearer good governance processes will have to be developed to understand the risks and to find ways of mitigating them, so that data subjects are not disadvantaged.

Data Subject Access Requests – 10 Quick Tips

September 2022

Handling DSARs efficiently and effectively

DSARs can be challenging to handle and complete on time, especially when you get one from a disgruntled ex-employee with a grievance.

While it’s clearly important for people to be able to request and receive a copy of their personal data, I fully appreciate how tricky they can be to fulfil. Prior to joining the DPN more than seven years ago, I used to handle them myself and now I spend a fair bit of time helping clients with the requests they receive. Without further ado, here are my quick tips.

Ten Quick DSAR Tips

1. Staff Awareness

A request can come into any part of the business. Requests can be made in writing, verbally or even via social media. We’re told that however they come in, they’re valid. Customer-facing staff and others need to know how to recognise them and what action to take. And remember, not all requests for information will be a DSAR.

2. It’s not a right to documentation!

People have the right to request a copy of their personal data, but they don’t have the right to receive reams of documents which might contain just their name or email address, or in part relate to them. You can extract relevant personal data from documents and emails, as long as the context is made clear.

3. Always acknowledge DSARs

Quickly acknowledge any request. It can also be helpful to explain a little more about what they can expect to receive. This can save issues further down the line if the individual doesn’t get what they expected to. Always be personable and polite, even if they aren’t!

4. Diarise response date

Be sure to set the date by which the DSAR must be fulfilled. This is one calendar month from the date you received it. You can start the clock once you’ve received any necessary confirmation of the requester’s identity. You can pause the clock if you need to seek further clarification.
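
As a rough illustration of diarising that deadline, here’s a minimal sketch using Python and the python-dateutil library. It follows the ‘corresponding date in the following month’ approach (falling back to the last day of the month where no corresponding date exists); pausing the clock for clarification would still need to be tracked separately, so treat this as a sketch rather than a compliance tool.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def dsar_deadline(received: date) -> date:
    """One calendar month from receipt; relativedelta falls back to the
    last day of the month when there is no corresponding date."""
    return received + relativedelta(months=1)

print(dsar_deadline(date(2022, 9, 14)))  # 2022-10-14
print(dsar_deadline(date(2023, 1, 31)))  # 2023-02-28 (no 31 February)
```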

5. Talk to the requester

Don’t always sit behind the comfort of an email. A telephone call may be a novel suggestion, but in my experience actually speaking to the person (if they are happy to take your call) can make a huge difference.

6. Be wary of requests from third-party portals

Increasingly, organisations are receiving DSARs and other privacy rights requests via third-party portals which offer to submit the requests on behalf of individuals. Sometimes multiple requests can be received at once. You have a responsibility to check these requests are genuine, be sure the individual is who they say they are, and confirm the third party has the authority to act on their behalf.

I’ve written more about this here: Managing Erasure Requests or DSARs via Third-Party Portals

7. Collaboration

One person, or indeed the data protection team, can’t fulfil these requests on their own. Make sure others who’ll need to support in gathering relevant information understand their responsibilities, and in particular the need to prioritise any actions. The clock keeps ticking and a calendar month can race away.

8. Share the knowledge

What happens if the person who routinely handles requests is off sick? Or the person from the IT team who knows how to gather the data is on holiday? Make sure other people are familiar with the process, and have a clear written procedure others can pick up if necessary.

9. Don’t forget the exemptions

There’s information you can legitimately withhold. The exemptions are there for a reason – to cover information you’ve good reasons for not disclosing. This might be information relating to other individuals, details subject to legal privilege or commercially sensitive information. Sometimes you’ll be obliged to rely on an exemption, other times you may choose to rely on one or not. Be sure to tell people if you’ve used one (or more) and why.

The ICO’s Right of Access Guidance covers the exemptions and links through to relevant sections in the Data Protection Act 2018.

10. Respond securely

The last thing you want is to cause a potential data breach when responding to a DSAR! It can be helpful to liaise with the individual about how you send the data to make sure this will work for them. While secure sending is crucial, you shouldn’t make it difficult for them to access the data.

Hmm, should I have done more than 10 tips? Be proportionate when asking for proof of id, consider the privacy of others… and I could go on. Check out our DSAR Guide for more information.

Often DSARs are straightforward, but sometimes they’re a minefield. Having a clear procedure can go a long way to making sure things run as smoothly as possible.

Access controls: Protecting your systems and data

August 2022

Is your data properly protected?

Do existing staff or former employees have access to personal data they shouldn’t have access to?  Keeping your business’ IT estate and personal data safe and secure is vital.  One of the key ways to achieve this is by having robust access controls.

Failure to make sure you have appropriate measures and controls to protect your network and the personal data on it could lead to a data breach, which could have very serious consequences for your customers and staff, and for the business’s reputation and finances.

A former staff advisor for an NHS Foundation Trust was recently found guilty of accessing patient records without a valid reason. The ability to access and either deliberately or accidentally misuse data is a common risk for all organisations.

Add to this the increased post-Covid risk of more employees and contractors working remotely, and it’s clear we need to take control of who has access to what.

High-level check list of areas to consider

1. Apply the ‘Principle of Least Privilege’

There’s a useful security principle, known as ‘the principle of least privilege’ (PoLP).  This sets a rule that employees should have only the minimum access rights needed to perform their job functions.

Think of it in the same way as the ‘minimisation’ principle within GDPR.  You grant the minimum access necessary for each user to meet the specific set of tasks their role requires, with the specific datasets they need.

By adopting this principle, you can reduce the risk of employees accumulating more access rights than they need over time. You’ll need to periodically check to make sure they still need the existing access rights they have. For example, when someone changes role, their access needs may also change.

If your access controls haven’t been reviewed for a long time, adopting PoLP can give you a great starting point to tighten up security.

2. Identity and Access Management

IAM is a broad term for the policy, processes and technology you use to administer employee access to your IT resources.

IAM technology can join it all up – a single place where your business users can be authenticated when they sign into the network and be granted specific access to the selected IT resources, datasets and functions they need for their role.  One IAM example you may have heard of is Microsoft’s Active Directory.

3. Role-based access

Your business might have several departments and various levels of responsibility within them.  Most employees won’t need access to all areas.

Many businesses adopt a framework in which employees can be identified by their job role and level, so they can be given access rights which meet the needs of the type of job they do.
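
To make the idea concrete, here’s a minimal, hypothetical sketch of role-based access in Python: a simple mapping of job roles to the minimum permissions each needs, with a single check function. The roles and permission names are invented; in practice this mapping would live in your IAM tooling rather than in application code.

```python
# Hypothetical role -> minimum permissions mapping (least privilege).
ROLE_PERMISSIONS = {
    "marketing_exec":   {"read:campaign_data"},
    "customer_support": {"read:customer_records", "update:customer_records"},
    "hr_admin":         {"read:employee_records", "update:employee_records"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the permission is in the role's minimum set."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("marketing_exec", "read:customer_records"))   # False
print(is_allowed("customer_support", "read:customer_records")) # True
```

A periodic review then becomes a question of checking that each role’s permission set is still the minimum required, and that each person is still mapped to the right role.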

4. Security layers

Striking the right balance between usability and security is not easy.   It’s important to consider the sensitivity of different data and the risks if that data was breached.  You can take a proportionate approach to setting your security controls.

For example personal data, financial data, special category or other sensitive personal data, commercially sensitive data (and so on) will need a greater level of security than most other data.

Technologies can help you apply proportionate levels of security.  Implementing security technologies at the appropriate levels can give greater protection to certain systems & data which demand a high level of security (i.e. strictly-controlled access), while allowing non-confidential or non-sensitive information to be accessed quickly by a wider audience.

5. Using biometrics

How do you access your laptop or phone? Many of us use our fingerprint or facial recognition, which give a high level of security using our own biometric data. But some say, for all their convenience benefits, they are not as secure as a complex password!

But then, how many of us really use complex passwords? Perhaps you use an app to generate and store complex passwords for you.  Sadly lots of people use words, names or memorable dates within their passwords. Security is only going to be as good as your weakest link.

6. Multi-factor authentication (MFA)

Multi-factor authentication has become a business standard in many situations, to prevent fraudulent use of stolen passwords or PINs.

But do make sure it’s set up effectively. I’ve seen some examples where MFA has to be activated by the user themselves. So if they fail to activate it, there’s little point having it.  I’ve heard about data breaches happening following ineffective implementation of MFA, so do be vigilant.
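
To illustrate the point about not leaving MFA optional, here’s a minimal, hypothetical sketch in Python using the pyotp library: login is refused unless the user has enrolled a TOTP secret and supplies a valid code. The user store and function are invented for illustration; in practice enforcement would normally sit in your identity provider.

```python
import pyotp  # pip install pyotp

# Hypothetical user store: the TOTP secret is None until the user enrols in MFA.
users = {
    "alice": {"password_ok": True, "totp_secret": pyotp.random_base32()},
    "bob":   {"password_ok": True, "totp_secret": None},  # never activated MFA
}

def login(username: str, totp_code: str) -> bool:
    user = users.get(username)
    if not user or not user["password_ok"]:
        return False
    if not user["totp_secret"]:
        # Refuse access rather than silently skipping the second factor.
        print(f"{username}: MFA not enrolled - access denied")
        return False
    return pyotp.TOTP(user["totp_secret"]).verify(totp_code)

print(login("bob", "123456"))                                           # False
print(login("alice", pyotp.TOTP(users["alice"]["totp_secret"]).now()))  # True
```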

There’s an array of measures which can be adopted. This is just a taster, which I hope you found useful – stay safe and secure!

Google Analytics: GA4 vs Universal Analytics – What will change?

July 2022

Will GA4 improve compliance?

If you use Google Analytics, you’ll have started to see messages warning that the Universal Analytics tools will be retired in 2023 and that now is the time to migrate across to Google Analytics 4.

 What is Google Analytics 4 (GA4)? 

GA4 is a new property that helps analyse the performance of your website and app traffic, and will replace Universal Analytics. It was first released in October 2020, although it’s only now that the campaign to migrate across has started in earnest.

 Key components include: 

  • Event-based tracking: Universal Analytics is session-based, while GA4 is event-based. In other words, the ability to track events like button clicks, video plays and more is built in with GA4, while this requires advanced setups in UA. This comes from the premise that page views aren’t the sole important metric (see the sketch after this list).
  • Cross-device tracking: UA was built around desktop web traffic, while GA4 gives businesses visibility into customer journeys across all of their websites and apps.
  • Machine learning: GA4 uses machine learning technology to share insights and make predictions.
  • Privacy-friendly: UA relies heavily on cookies; GA4 does not.
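
To show what ‘event-based’ looks like in practice, here’s a minimal sketch that sends a single named event to a GA4 property server-side via Google’s Measurement Protocol, using Python and the requests library. The measurement ID, API secret and client ID are placeholders – and, as discussed later in this article, sending identifiers like this still needs an appropriate lawful basis and consent where required.

```python
import requests  # pip install requests

# Placeholders - replace with your own GA4 measurement ID and API secret.
MEASUREMENT_ID = "G-XXXXXXX"
API_SECRET = "your_api_secret"

payload = {
    "client_id": "555.1234567890",           # pseudonymous identifier
    "events": [{
        "name": "video_play",                # GA4 records named events...
        "params": {"video_title": "demo"},   # ...with optional parameters
    }],
}

resp = requests.post(
    "https://www.google-analytics.com/mp/collect",
    params={"measurement_id": MEASUREMENT_ID, "api_secret": API_SECRET},
    json=payload,
    timeout=10,
)
print(resp.status_code)  # a 2xx response means Google accepted the payload
```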

Crucially, on July 1, 2023, standard Universal Analytics properties (the previous version of Google Analytics) will no longer process data. You’ll be able to see your Universal Analytics reports for a period of time after July 1, 2023. This means that to have a continuous history of activity, it makes sense to move across to the new GA4 platform sooner rather than later.

What privacy improvements have been made?

GA4 came with a set of new privacy-focused features for ticking GDPR boxes including: 

  • Data deletion mechanism. Users can now request to surgically remove certain data from the Analytics servers via a new interface.
  • Shorter data retention period. You can now shorten the default retention period to 2 months (instead of 14 months) or add a custom limit.
  • IP anonymisation. GA4 doesn’t log or store IP addresses by default. It allocates an anonymous and unique user ID to each record instead.
  • First-party cookies. Google uses first-party cookies, which means they’ll still be supported by browsers.
  • More data sampling. Google is doing more data sampling, using AI models to investigate deeper, more granular insights – an approach which is more privacy-friendly.
  • Consent mode. The behaviour of Google tags is managed based on user consent choices.
  • Collecting PII. Google does not allow the collection of PII in GA4 – this is considered a violation of Google’s terms of service.
  • Data sharing with other Google products. Any linking to Google advertising products requires explicit opt-in consent and a prominent section in the privacy notice.

Is Google now compliant?

Possibly, in limited circumstances. If Google anonymises the data by allocating a user ID that is never cross-referenced with any other data, then we can argue the data is anonymous and therefore not subject to GDPR regulation.

In some instances, this may be the case – if you are doing simple tracking and effectively treat your digital platforms as a closed, standalone environment. In most instances, it is not!

If you are advertising and can then link the id to other data, there is the potential to identify individuals and therefore the information becomes personal data and subject to GDPR.

This means that all the usual user consent rules apply and opt-in consent is required to analyse activity.

The major difficulty for Google is that data is exported to the US, where it is deemed, by the EU, that Google does not adequately protect EU personal data from US surveillance laws.

Previously, Google relied on the Privacy Shield framework to ensure that it remained compliant. Since that was invalidated in 2020, Google has struggled to achieve compliance and has faced a number of fines.

In particular, Google Analytics does not offer a way of:

  • Ensuring data storage within the EU
  • Choosing a preferred regional storage site
  • Notifying users of the location of their data storage and any data transfers outside of the EU

What next?

Ideally, Privacy Shield 2.0 will be introduced soon! Talks have started, but they’re unlikely to be swift! The US government has been talking about making its surveillance standards “proportional” to those in place in the EU. This may not be good enough for the CJEU.

In the meantime, implement GA4 as it is more privacy-focused than Universal Analytics, and hope that the US and EU come to an agreement soon. There is still a risk in using GA4, and you might want to consider other solutions.

UK data reform plans revealed: a snapshot

June 2022

DCMS publishes response to data reform consultation

DPOs, Records of Processing Activities and DPIA requirements are all set to go under UK data reform plans, as the Government pushes ahead with its intention to require organisations to implement a Privacy Management Programme (PMP).

Plans also include changes to PECR (the UK’s Privacy and Electronic Communications Regulations) including permitting charities to use the soft opt-in and allowing analytics cookies without consent.

The Government has set out the detail of how it plans to reform the data protection landscape in its response to the Autumn consultation.

Key highlights

(This article is not intended to cover the wide-ranging detail of the plans. The full consultation response from the Government can be found here).

Accountability 

  • The Government plans to proceed with the requirement for organisations to implement Privacy Management Programmes (PMPs).
  • Organisations currently compliant with the UK GDPR will not need to significantly change their approach, unless they wish to ‘take advantage of the additional flexibility the new legislation will provide’.
  • Organisations will have to implement a PMP based on the ‘level’ of processing activities they’re engaged in and the volume and sensitivity of the personal data they handle.
  • The PMP requirement will be subject to the same sanctions as under the current regime.

Data Protection Officers

  • The requirement to designate a Data Protection Officer will be removed.
  • There will be a new requirement to appoint a senior individual responsible for data protection. It’s envisaged most of the tasks of a DPO will become ‘the ultimate responsibility of a designated senior individual to oversee as part of the privacy management programme.’

Data Protection Impact Assessments

  • Under the new PMP requirement, organisations will be required to identify and manage risks, but ‘they will be granted greater flexibility as to how to meet these requirements’.
  • There will no longer be a requirement to undertake DPIAs as prescribed by UK GDPR.  However, organisations will be required to make sure they have ‘risk assessment tools in place for the identification, assessment and mitigation of data protection risks across the organisation.’
  • Organisations will be able, if they wish, to continue to use DPIAs but can tailor them based on the nature of their processing activities.
  • Existing DPIAs will remain a valid way of achieving the new requirement.

Record of Processing Activities

  • Personal data inventories will be needed as part of an organisation’s PMP, covering what and where personal data is held, why it has been collected and how sensitive it is.
  • Organisations will not have to stick to the prescribed requirements set out under Article 30, UK GDPR.

Reporting Data Breaches

  • No changes will be introduced to alter the threshold for reporting a data breach.
  • The Government will work with the ICO to explore the feasibility of clearer guidance for organisations.

Subject Access Requests

  • The Government plans to proceed with changing the current threshold for refusing or charging a fee for Subject Access Requests from ‘manifestly unfounded or excessive’ to ‘vexatious or excessive’. It is said this will bring it in line with the Freedom of Information regime.
  • The Government does not intend to re-introduce a nominal fee for processing access requests.

Alongside changes to the current regime under UK GDPR, the Government plans include amendments to PECR. Key intended changes include:

Cookies

  • In the immediate term, the Government intends to permit cookies (and similar technologies) to be placed on a user’s device without explicit consent, ‘for a small number of other non-intrusive purposes’. It’s anticipated this will include analytics cookies which allow organisations to measure traffic to webpages and improve offerings to users.
  • It’s intended these changes will apply to connected technology, including apps on smartphones, tablets, smart TVs or other connected devices, as well as websites.
  • In the future, the Government intends to move to an ‘opt-out model of consent for cookies placed by websites’. The Government says its ambition is to improve the user experience and remove the need for ‘unnecessary’ cookie consent banners. It stresses an opt-out model would not apply to websites likely to be accessed by children (we’re assuming this means consent would be required) and its ambitions will be subject to an assessment that effective solutions are widely available for use.

Use of ‘soft opt-in’ extended

  • As flagged above, the Government intends to extend the ‘soft opt-in’ so that non-commercial organisations, such as charities, can rely on it in the same way commercial organisations already can.

PECR fines to be increased

  • The Government plans to proceed with proposals to increase fines under PECR. This will allow the ICO to levy fines of up to £17.5m or 4% of a business’s global turnover, bringing PECR fines in line with those available under the UK GDPR. Currently the maximum fine under PECR is capped at £500,000.

Political campaigning

  • The Government plans to consider further whether political communications should remain within the scope of PECR’s direct marketing rules (or be excluded).
  • It also intends to extend the soft opt-in so that ‘political parties and elected representatives can contact individuals who have previously shown an interest in the activities of the party (for example, by attending a conference or making a donation) without their explicit consent, provided they have been given an opportunity to refuse such communications at the point of providing their details’.

Human oversight of automated decision-making and profiling

  • The Government notes  the vast majority of respondents to the consultation opposed the proposal to remove Article 22.  The right to human review of automated decisions is considered a fundamental safeguard. It was confirmed this proposal will not be pursued.
  • The Government says it will be considering how to amend Article 22 to clarify the circumstances in which this must apply. It says it wants to align proposals in this area ‘with the broader approach to governing AI-powered automated decision-making’.  This will form part of an upcoming white paper on AI governance.

Legitimate Interests

  • The Government intends to create a limited list of defined processing activities where there would not be a requirement to conduct a balancing test for legitimate interests. This list will initially be limited to ‘carefully defined processing activities’.
  • This is likely to include processing activities to prevent crime, reporting safeguarding concerns or those which are necessary for important public interests reasons.
  • The Government proposes a new power to be able to update this list subject to parliamentary scrutiny.

Adequacy

A key concern is whether UK data reform will put adequacy at risk. The European Commission has granted the UK adequacy, which allows for the free flow of personal data from the EEA to the UK without the need for additional safeguards. However, in granting adequacy the EC said it would keep it under review, and if any significant changes were made it could revoke the decision.

The Government does not believe its plans risk this decision. The consultation response says: “the UK is firmly committed to maintaining high data protection standards – now and in the future”.

Response from the ICO

UK Information Commissioner John Edwards says he shares and supports the ambition of these reforms. In particular he says: “I am pleased to see the government has taken our concerns about independence on board”. You can read the ICO’s statement here. The independence of the ICO was cited by Mr Edwards as an area which could jeopardise adequacy (in recent evidence he gave to the Science and Technology Committee).

What next?

We now await the detail of the Data Reform Bill, which will be subject to parliamentary scrutiny.  So still some way to go before the intended changes come into play.

Data Retention Guide

Data retention tools, tips and templates

This comprehensive guide takes you through the key steps and considerations when approaching data retention. Whether you’re starting out or reviewing your retention policy and schedules, we hope this guide will support your work.

The guide, first published in June 2020, was developed and written by data protection specialists from a broad range of organisations and sectors. A huge thank you to all those who made it possible.

Suppliers – why your contracts and security are important

Processors and controllers are both accountable

Do you provide a service to clients and handle your clients’ personal data? If you’re acting as a processor, a recent GDPR fine serves as a helpful reminder to be sure to have all your ducks in a row.

There’s a clear warning: you shouldn’t just assume the contracts your clients ask you to sign are okay, nor can you just say you have robust security measures in place – you actually have to have them!

In this recent case a software publisher, acting as a processor for its clients, was fined €1.5 million by the French regulator (CNIL) following a data breach involving sensitive health data.

It was found data was exfiltrated by unauthorised parties from a poorly protected server. In a nutshell the key findings were:

  • Significant gaps in the processor’s security processes
  • Contractual documentation which failed to include mandatory obligations required under Article 28 of GDPR.

It’s worth noting the fine was based on both these counts. The ruling makes it clear processors should be wary of relying on their clients to make sure contractual terms are up to scratch. It’s the responsibility of both parties.

Here’s a quick recap on how suppliers can minimise their risks.

Getting the relationship clear

The most important first step is to establish the relationship between your company and another.

  • Are you handling a client’s data on their behalf, under their instruction, to provide a service to them?
  • Are you acting as controller, clearly determining how the personal data will be used for your own purpose(s)?
  • Are you both? i.e. acting as a controller in certain circumstances, but a processor for specific services you provide to clients.

See also: Are we controller or are we processor?

What are the contractual requirements?

Once you’re clear you are a processor, acting under your client’s instructions, the law states your arrangements with clients must be covered by a binding agreement. EU and UK GDPR set out specific provisions which must be written into such contracts. In brief these are as follows:

1. Types of personal data & categories of data subject

The contract needs to specify what types of personal data you’ll be handling. It should also include details of whether this data relates to your client’s employees, patients, customers, and so forth.

2. Nature, purpose, duration of processing

The contract should describe the nature of the service(s) you provide, what purpose(s) this serves and the term of the contract. The agreement should cover instructions from your client of what you are permitted to do with their data.

3. The rights and duties of each party

The obligations of both parties should be clearly defined. For example, the client’s obligation to have a lawful basis for processing, its responsibility to fulfil individual privacy rights and your commitment as a supplier to not use your client’s data for any other purpose.

4. Technical and organisational measures

As a supplier you need to provide sufficient guarantees to implement proportionate technical and organisational measures to meet requirements of UK/EU GDPR.

5. Sub-processors

If you engage other companies (‘sub processors’) to support you in delivering your services, you’ll need specific or general written authorisation from your client(s). If you make any changes to which sub-processors you use (including software providers), you’ll need to tell your client and give them the opportunity to object. Contractual terms should stipulate that you are accountable for your sub-processors.

6. International transfers

If relevant, the agreement should include details and provisions for any transfers of personal data to a third country. For example if you are based in the UK, a transfer to any other country. This would include details of any sub-processors based outside the UK. A transfer is often associated with the act of sending or transmitting personal data from one country to another. It should be noted the definition also covers cases where personal data is made ‘available’, in other words can be accessed in a third country.

7. Duty of confidentiality

There must be a confidentiality clause, which commits you to ensuring any of your staff authorised to access the client’s data are committed to a duty of confidentiality or are under a statutory obligation of confidentiality.

8. Assisting your clients

The contract should cover your commitment to assisting your clients, where necessary, with handling individual privacy rights, handling data breaches and conducting data protection impact assessments.

9. Return or destruction of data

It should be clear what happens to the client’s data when the contract ends. Does the client want you to return the data or destroy it?

10. Audits and inspections

As a processor you must agree to make available all information necessary to demonstrate your compliance and agree to audits, including inspections by your client or their authorised auditor.

Processors have obligations

This recent CNIL fine shows you can’t just sign a contract, sit back and relax.

As a processor you’re responsible for your sub-processors, data transfers, staff training and confidentiality, assisting your clients when necessary and so forth. You have to be sure to implement the technical and organisational measures you said you would to protect your clients’ data.

While some clients will ask you to jump through multiple hoops as part of their due diligence process, making you clearly demonstrate your security measures are robust, others may not be so picky. But that doesn’t release you from your responsibilities.

The law and this recent fine make it clear processors can be held liable. In the event of a breach, your contractual arrangements and internal practices could come under rigorous scrutiny.