Privacy Management Programme – what does one look like?

October 2021

The concept is nothing new, but the term Privacy Management Programme (PMP) has been flung into the spotlight by the UK Government’s plans to reform data laws.

In a nutshell, the Government plans to revise the current accountability framework, replacing existing obligations (some of which are mandatory) with a requirement to implement a PMP.

It’s argued the current legislative framework ‘may be generating a significant and disproportionate administrative burden’ because it sets out detailed requirements organisations need to satisfy in order to demonstrate compliance.

The idea is that a new ‘risk-based accountability framework’ will be introduced, requiring organisations to implement a PMP while allowing the flexibility to tailor the programme internally to suit the organisation’s specific processing activities.

What is a Privacy Management Programme?

A PMP is a structured framework which helps organisations meet their legal compliance obligations and the expectations of customers and clients, fulfil privacy rights, mitigate the risks of a data breach – and so forth.

Such a programme should recognise the value of taking an all-encompassing, holistic approach to data protection and privacy, embedding data protection principles and the concept of privacy by design and default.

Core components of a Privacy Management Programme

There are a number of PMP approaches and frameworks in existence. The UK Government has not yet elaborated on what they would expect a PMP to look like.

This top-level summary is broadly based on the IAPP’s Privacy Programme Management approach.

  • Governance

Organisations should develop and implement a suitable framework of management practices which make sure data is used properly and in line with organisational aims, laws and best practice. This should include adopting a privacy by design and by default approach, ensuring appropriate measures are in place to prevent unnecessary risks.

  • Assessments

Achieving clear oversight of the data held and processed, including any suppliers used to support business activities. Developing risk assessment tools which help to identify privacy risks and manage them effectively (e.g. Privacy Impact Assessments / Data Protection Impact Assessments).

  • Record-keeping

Mapping and maintaining an inventory of where personal data is, its purpose, how it is used and who it’s shared with.

  • Policies

Developing and implementing clear policies and procedures to guide staff and give them clear instructions about how personal data should be collected, used, stored, shared, protected and so on.

  • Training and awareness

Making sure adequate and appropriate training is conducted to give staff the knowledge and understanding they need to protect and handle data lawfully and in line with organisational expectations in their day-to-day roles. Making sure people are aware of how their organisation expects them to behave.

  • Privacy rights

Putting in place appropriate procedures to effectively and efficiently fulfil individual privacy rights requests, such as the right of access, erasure or objection.

  • Protecting personal information

Crucial to any PMP is protecting personal information. Working in conjunction with information security, organisations would be expected to take a data protection by design approach – proactive rather than reactive.

  • Data incident planning

Creating and developing data incident procedures and plans. Having appropriate methods to assess risk and potential impact, as well as understanding breach notification requirements.

  • Monitoring and auditing

Last, but by no means least, no PMP would be complete without a methodology for tracking and benchmarking the programme’s performance.
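The record-keeping component above is often implemented as a simple data inventory (a record of processing activities). Here’s a minimal sketch in Python – the field names and example entry are purely illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """One entry in a personal-data inventory (illustrative fields only)."""
    name: str                  # the activity, e.g. "Email newsletter"
    data_categories: list      # what personal data is held
    purpose: str               # why it is processed
    lawful_basis: str          # e.g. "consent", "legitimate interests"
    shared_with: list = field(default_factory=list)  # recipients / suppliers
    retention: str = "unspecified"                   # how long it is kept

inventory = [
    ProcessingRecord(
        name="Email newsletter",
        data_categories=["name", "email address"],
        purpose="Direct marketing",
        lawful_basis="consent",
        shared_with=["Email platform (processor)"],
        retention="Until consent withdrawn",
    ),
]

# A simple oversight query: which activities share data with third parties?
shared = [r.name for r in inventory if r.shared_with]
```

Even a lightweight structure like this gives you the “where is the data, why do we have it, who sees it” oversight the assessments and record-keeping components call for.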

What might change?

To many who’ve endeavoured to comply with the GDPR, all of the above will sound very familiar.

So, the Government isn’t proposing we do away with all the hard work already done. It’s planning a relaxation to some of the mandatory requirements; giving organisations more flexibility and control over how they implement certain elements of their programme.

On the one hand, this could be seen as a welcome move away from a ‘one-size-fits-all’ approach under UK GDPR, giving organisations more flexibility around how they implement their privacy programmes to achieve desired outcomes.

On the other hand, there are fears the removal of mandatory requirements will lead to a watering down of the fundamental principle of accountability (a principle significantly bolstered under GDPR).

Consent: Getting it right!

March 2021

Are you suffering from consent confusion? When must we rely on it? When is it not a good idea? And what must we do to make sure our consent is valid?

Here’s a short refresher to dispel the myths and a quick ‘consent checklist’ to make sure you are ticking all the right boxes!

For starters, one of the biggest myths surrounding GDPR (fuelled by news stories back in 2018) is that we need consent to do almost anything with people’s personal data.

Simply not true.

Consent is one lawful basis, there are others

Consent is just one of six lawful bases. They are all equal: no one basis is better than another, and you need to pick the right one for what you are doing.

Yes, sometimes consent is required by law for certain activities, but for many others a different lawful basis may be more appropriate.

But you do need to pick one. Data protection law across the EU and UK requires us to have a lawful basis for processing personal data.

(By processing we mean doing anything with people’s personal information – from collecting, storing and sharing it, right through to deleting it).

GDPR raised the bar on what constitutes valid consent

GDPR defines consent and says it must be “freely given, specific, informed and unambiguous” and must be given by a “clear affirmative action by the data subject”.

This means you need to clearly tell people what they are consenting to and they need to take an action to give their consent. And consent shouldn’t be bundled up with providing another service or with T&Cs.
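One practical way to evidence this standard is to keep a record of each consent: who gave it, for what purpose, when, and by what affirmative action. A hedged sketch in Python – the field names and the two validity checks are illustrative assumptions, not a legal template:

```python
from datetime import datetime, timezone

def record_consent(subject_id, purpose, mechanism, bundled_with_terms=False):
    """Return a consent record, rejecting anything failing basic checks.

    A pre-ticked box is not a 'clear affirmative action', and consent
    bundled with T&Cs is not 'freely given' (illustrative checks only).
    """
    if mechanism == "pre-ticked box":
        raise ValueError("Consent must be a clear affirmative action")
    if bundled_with_terms:
        raise ValueError("Consent must not be bundled with T&Cs")
    return {
        "subject_id": subject_id,
        "purpose": purpose,      # specific: one purpose per record
        "mechanism": mechanism,  # e.g. "unticked box actively ticked"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

rec = record_consent("user-123", "email marketing",
                     "unticked box actively ticked")
```

Keeping records like this also gives you the evidence trail you would need if your reliance on consent were ever challenged.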

Just to be clear, the rules for consent under UK GDPR are the same as for EU GDPR. (See UK data protection and ePrivacy law post-Brexit).

When is consent the right lawful basis?

Consent is most appropriate to use when you can offer people a clear choice and give them control over how you use their data. If you can’t do this, you should look to rely on another lawful basis.

When is consent legally required?

There are some circumstances when the law tells us we must gain consent. Let’s take a look…

1. Marketing

In specific situations you need consent to send marketing emails or SMS messages under the UK’s Privacy and Electronic Communications Regulations (PECR).

This is where things can get a bit nuanced. Consent is not always legally required for all marketing emails/SMS. There are choices you can make.

For example, there’s a specific exemption for existing customers (known as the ‘soft opt-in’) and more relaxed rules for business-to-business marketing. For more detail see Understanding email marketing rules.

There are also circumstances in which you will need consent for telemarketing calls. See the ICO’s Guide to PECR.

2. Cookies

You need consent to place cookies or other online tracking methods on people’s devices (unless those cookies are ‘strictly necessary’), and to install apps or software on people’s devices.

The ICO has confirmed such consent needs to meet the UK GDPR standard, and that cookies used for analytics, performance or marketing are NOT strictly necessary. See the ICO’s cookie guidance.
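In practice, this means non-essential cookies should only be set once a matching consent signal exists. A minimal server-side sketch in Python – the cookie names and category mapping are assumptions for illustration:

```python
# Cookies that are 'strictly necessary' need no consent; everything else does.
STRICTLY_NECESSARY = {"session", "csrf_token"}
CATEGORY = {"ga_analytics": "analytics", "ad_tracker": "marketing"}

def cookies_allowed(requested, consented_categories):
    """Return the subset of requested cookies that may lawfully be set."""
    allowed = []
    for name in requested:
        if name in STRICTLY_NECESSARY:
            allowed.append(name)  # exempt from the consent requirement
        elif CATEGORY.get(name) in consented_categories:
            allowed.append(name)  # the user opted in to this category
    return allowed

# No consent given: only strictly necessary cookies survive.
no_consent = cookies_allowed(["session", "ga_analytics", "ad_tracker"], set())

# The user consents to analytics only: marketing cookies are still blocked.
analytics_only = cookies_allowed(["session", "ga_analytics", "ad_tracker"],
                                 {"analytics"})
```

The key design point is the default: with no consent signal, nothing beyond the strictly necessary set is ever placed.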

3. Special category data

If you are intending to handle special category data, for example health data on individuals, you may need to seek explicit consent to make sure this is lawful. This is unless you can rely on another specific legal condition.

GDPR requires you to have a lawful basis for processing special category data PLUS a specific condition under Article 9.

Special category data is information relating to someone’s health, race, ethnicity, political opinions, religious beliefs, trade union membership, sex life, sexual orientation and covers genetic and biometric data.

A word of caution here: if you’re using special category data for direct marketing or profiling purposes, you’ll need explicit consent.

4. If no other lawful basis applies

As you must have a lawful basis for each processing activity you undertake, if no other lawful basis obviously applies, you will need to obtain consent. Here are a couple of examples:

  • If someone would not expect you to be sharing their data with another organisation, it’s likely you would need to collect their consent to do so.
  • If you are planning to use someone’s data for a completely different purpose, which you didn’t tell them about when you collected their data, you are highly likely to need to collect their consent unless another lawful basis applies (e.g. it’s needed to meet a legal obligation).

Consent checklist

You also need to consider other factors. For example, if you are requesting consent on behalf of another organisation, that consent must be separate and the organisation should be named. Consent also doesn’t last forever and should be refreshed (especially if anything changes).

If you offer online services which are likely to be accessed by children, you also need to consider whether you will need to seek parental consent and/or implement age verification measures. (Also see Children’s Code – deadline for conforming looms)

When is consent not a good option?

Consent will clearly not be the best approach if you will struggle to meet the requirements.

You should be careful about using consent where there’s likely to be an imbalance of power. In other words, where people might feel they have to give their consent.

This makes consent tricky if used by a business for purposes relating to its employees. Staff may feel a degree of pressure to give their consent, or fear they will be penalised in some way or treated differently if they refuse.

That said, sometimes there seems little option but to rely on an employee’s consent. I know a number of organisations using explicit consent for their diversity monitoring, which clearly entails special category data.

Consent isn’t easy

Collecting valid consent and meeting all the requirements may feel like a bit of a minefield. It does mean you need to take careful decisions. It’s worth double checking what risks may be lurking.

However, it is worth getting right. In the words of the ICO: “Genuine consent should put individuals in charge, build trust and engagement, and enhance your reputation.”

A final word of caution: be careful not to try to shoehorn your activities into another lawful basis (such as legitimate interests) when consent really would be the most appropriate approach.

The data breach that cost Marriott £18.4 million – what went wrong?

November 2020

The humongous penalty train keeps rolling – after the £20 million fine for British Airways for GDPR violations, the Information Commissioner’s Office (ICO) has slapped an £18.4 million fine on Marriott International Inc.

In its ruling, the ICO says Marriott made multiple failures in its technical and organisational measures for protecting personal data. The case also highlights how when a business acquires another company it becomes accountable for past as well as present compliance.

An estimated (and staggering) 339 million guest records were affected worldwide, following the 2014 cyber-attack on Starwood Hotels and Resorts Worldwide Inc. It’s estimated 7 million of those affected were UK citizens.

Starwood was acquired by Marriott in 2016, and the attack went undetected until September 2018. The ICO has stressed its ruling relates to infringements after GDPR came into force in May 2018.

As the data breach was notified before Brexit, the ICO was able to act as lead supervisory authority, charged with investigating the breach on behalf of all affected EU citizens.

The penalty was signed-off by other EU data protection authorities, under GDPR’s one-stop shop mechanism for cross-border cases. Moving forward post-Brexit, the UK will no longer be part of the one-stop mechanism.

Why was the fine reduced?

In its original ‘Notice of Intention’ to fine, issued in July 2019, the ICO set the figure at an eye-watering £99 million. The Regulator says this amount was reduced after taking several factors into consideration:

  • Marriott’s representations to the ICO
  • The action the hotel group took to mitigate the breach’s impact
  • The economic impact of the COVID-19 pandemic

There are some rumblings the pandemic may be proving a handy ‘excuse’ for the ICO; COVID-19 was also cited in the reasons for reducing the British Airways fine.

This begs the question – did the ICO significantly over-estimate in their initial notices, or are they being kind-spirited due to the current financial and operating climate?

What went wrong for Marriott?

  • In 2014 unknown hacker(s) installed code onto a device in the Starwood systems. This gave them the ability to edit the contents of the device remotely.
  • This was exploited to install malware, giving the attacker privileged access. The attacker had unrestricted access to connected devices across the Starwood network. The attacker then continued to install further tools, enhancing the malicious access.
  • In 2016 Marriott acquired Starwood. The ICO’s ruling reveals Marriott was only able to carry out limited due diligence of Starwood’s data processing systems and databases prior to acquisition (those with acquisition experience will know how challenging robust due diligence can be).
  • In September 2018, the attacker made a move which finally tripped an alert. They exported a table which contained card details on which a security trigger had been set. Such alerts were not in place to automatically trigger on other data sets accessed – for example passport details.
  • Marriott notified the ICO and affected individuals in November 2018 after becoming ‘aware’ of the nature of the breach.
  • The data exfiltrated by the hacker(s) included names, email addresses, phone numbers, passport numbers, arrival and departure information, VIP status and loyalty program information.

72-hour data breach notification rules

You may note there was a significant time delay between the trigger being fired in September on Starwood’s systems and Marriott’s notification to the ICO in November.

As part of its representations Marriott challenged the ICO’s initial finding that the 72-hour breach notification rules had been infringed (GDPR Article 33).

This comes down to when a controller can be judged to be ‘aware’ a personal data breach has occurred.

In its final ruling, the ICO found Marriott was incorrect to claim that:

“The GDPR requires a data controller to be reasonably certain that a personal data breach has occurred before notifying the Commissioner. Rather, a data controller must be able to reasonably conclude that it is likely a personal data breach has occurred to trigger the notification requirement.”

However, in ‘this particular case’, taking into account Marriott’s representations, the Commissioner decided to find that Marriott had NOT breached the notification requirements.

Key ICO findings

At a top-level there are four key findings in the ICO’s ruling. It’s worth remembering the ruling applies to the period post 25 May 2018, despite historic pre-2018 concerns.

  1. Insufficient monitoring of privileged accounts
    There was a failure to put in place ongoing network and user activity monitoring. The ICO says Marriott should’ve been aware of the need to have multiple layers of security.
  2. Insufficient monitoring of databases
  3. Failure to implement server hardening – the vulnerability of the server could’ve been reduced, for example, through whitelisting.
  4. Lack of encryption – for example, passport details were not encrypted.

If you are interested in the full details, you can read the full ICO Marriott ruling.

The ICO references the National Cyber Security Centre’s guidance, 10 Steps to Cyber Security, which is a useful resource for any business wanting to make sure its cyber security is robust.

There’s little doubt the attack Marriott suffered was sophisticated, but the ICO says their investigation revealed how the hotel group failed to put in place appropriate security measures to address such attacks and other identifiable risks to their systems.

Impact on individuals

In its ruling the ICO took into account the nature of the personal data breached.

Despite assurances given and mitigating steps taken by Marriott, the Regulator concluded it was likely some of the affected individuals will, depending on their circumstances, have suffered anxiety and distress. The ruling also specifically calls out the duration of the breach, lasting as it did for a period of four years.

What can we learn from this data breach?

The number of people affected, the nature of the data maliciously accessed, the potential distress caused and the size and profile of Marriott… all of these will have played a part in the £18.4 million fine. Fines scale with the size and impact of the breach, but for every business cyber security needs to be a priority.

When acquiring a company, due diligence is crucial both prior to and post-acquisition – it must be an ongoing process, not a one-off activity.

The fine’s just the tip of the financial iceberg. Marriott will have spent a significant amount on rectifying the breach and mitigating the impact for affected individuals, before we even contemplate the cost of complex and protracted legal representation.

Alongside this hefty financial hit, the hotel group also faces a class action lawsuit from customers who are seeking compensation. If successful, this could prove even more costly.

It’s worth noting the fine would’ve been higher if Marriott hadn’t proactively sent email communications to affected customers, created a data breach website and set up a call centre to provide a data breach hotline.

It’s often said, because it’s true: you can’t overestimate how crucial it is to be prepared for a data breach. That means having a robust (and tested) data incident plan, being able to effectively and quickly assess the risk posed, plus having a pre-prepared communications strategy and measures to support those affected.

Commenting on the fine, the UK’s Information Commissioner Elizabeth Denham said:

“Millions of people’s data was affected by Marriott’s failure; thousands contacted a helpline and others may have had to take action to protect their personal data because the company they trusted it with had not. When a business fails to look after customers’ data, the impact is not just a possible fine, what matters most is the public whose data they had a duty to protect.”

Marriott says it remains committed to the privacy and security of its guests and is continuing to make significant investments in security measures for its systems. Marriott has not admitted liability for the breach, but has indicated it won’t appeal.

Need extra support and advice? We can support with your data incident planning and procedures. Get in touch – we can also provide rapid support should you suffer a data incident which requires effective and quick investigation.

British Airways data breach – what can we learn?

October 2020

We’ve finally heard the UK Information Commissioner’s Office (ICO) has fined British Airways £20 million for failing to protect personal and credit card data in their 2018 data breach. The breach affected more than 400,000 BA customers and staff.

A final decision on this had been expected for some time; we just didn’t know what the figure would be until now. The amount is a fraction of the £183 million initially announced in the ICO’s notice of intention to fine. After considering BA’s representations and factoring in the economic impact of COVID-19, it was significantly reduced. But it’s still an eye-watering sum – in fact, the largest fine issued by the ICO.

What are the key lessons other businesses can learn from BA’s painful experience?

Information security must be taken seriously at Board level

Modern businesses rely on data more and more to provide quality services for customers and to create competitive advantage.  However, the risks to personal data are numerous, varied and ever-changing. A data breach can massively harm a business’s reputation with its customers, staff and with the world at large.

It’s often said that with power comes responsibility, so businesses need to recognise their role as guardian and protector of the personal data of their customers and employees. We have to deliver on the promises we make, for example, in our privacy notices. Any steps your business can take to properly protect personal data, and to demonstrate to staff and the public how seriously you take data protection, will help protect individuals from harm and may also help you stand out from competitors in these tough times.

Boards need to show leadership by insisting on a strong and vigilant information security regime. I guess that means they need to be prepared to fund it too! It also means asking tough questions about the levels of data protection in place across the organisation.

Rachel Aldighieri, MD of the Data & Marketing Association (DMA), believes this is a wake-up call:

“Brexit and coronavirus have put businesses under immense financial strain. A fine of this magnitude will certainly get the attention of Board members of organisations across the UK. They will certainly not want to risk receiving similar disciplinary action from the ICO. This is the largest fine issued by the ICO to date under the new GDPR laws, highlighting the importance all businesses should place on the security of customers’ data and the need to build in safeguards to protect it.

“Data is a fundamental part of the digital economy, so maintaining its security must be a business imperative. Trust in how brands collect, store and use data is essential to the relationship between businesses and their customers. This message should resonate with businesses now more than ever.”

Security measures must not only be ‘adequate’ but also checked and verified

The ICO said there were numerous measures BA could have used to mitigate or prevent the risk of an attacker accessing their network.

Martin Turner, Managing Director at cybersecurity specialists Full Frame Technology, believes BA missed the basics:

“As with so many serious data breaches, this one was caused by a failure to adopt the most basic security measures, including limiting access to applications, rigorous cybersecurity testing, and protecting accounts with multi-factor authentication.

Login credentials for a domain administrator account were stored in plain text. Software code wasn’t reviewed effectively. These are issues that a cybersecurity audit should have revealed, and BA has yet to explain why this didn’t happen.”

The ICO has (finally) shown us it has teeth!

Could this be a turning point? It’s been a long time coming and many expected it to happen much sooner. The ICO have finally issued a BIG fine more in keeping with the expectations most of us had when GDPR came into force.

Nevertheless, you might feel the ICO has shown a measure of pragmatism, reducing the fine down so much from the original £183m. But it’s not great timing for any business to suffer a body blow like this.

It will be interesting to see what figure the ICO finally decide to fine Marriott International for their Starwood data breach, which first came to our attention around the same time as BA. The ICO’s original ‘intention to fine’ for Marriott was £99 million.

Should we think again about data breach insurance?

You might be thinking afresh about breach insurance. We’d suggest you shop around and pay attention to the fine print, as data breach insurance policies can vary more than you might imagine.

Don’t just look at the price as no two policies are the same and there is little consistency in the way policies are worded. The levels of cover and features on offer can vary significantly. Keep an eye out for exclusions!

One key differentiator you may wish to delve into is the level of support your insurer will provide in the event of a breach or a cyber attack. Do they have a team of specialists in place who will advise and help you to triage a live situation? This is one area where you might get just what you pay for.

This fine was long anticipated and the pandemic has definitely played its part in reducing the final amount. The travel sector has been badly impacted by COVID and £20 million will hit BA hard. BA may decide to appeal against it. It goes to show how important it is to have robust data protection and security measures in place.

Data Protection by Design: Part 3 – Data Protection Impact Assessments

September 2020

Getting your DPIA process on track

Deciding when to carry out a Data Protection Impact Assessment (DPIA), and understanding how to conduct one effectively, is a challenging area.

I’ve come across cases where DPIAs are not being conducted when necessary, or left incomplete. Less frequently, DPIAs are over-used, creating an unnecessary burden on key teams.

DPIAs sit at the heart of Data Protection by Design, and this is part 3 of our series, following on from:

Part 1: Data Protection by Design – The Basics 

Part 2 – How to approach Data Protection by Design

Just to be clear – we may be hearing the term DPIA more frequently, but it’s not a new idea. What changed under GDPR is that they were made mandatory in certain circumstances. And even when not mandatory, they can be a very useful tool in your data protection toolbox.

So how do you make sure your DPIA process is on track? I’ve taken a look at the key stages you should have in place, and how to get people on-board and improve their understanding.

But first things first.

What is a Data Protection Impact Assessment?

Just to recap, a DPIA is a management tool which helps you:

  • Identify privacy risks
  • Assess these risks
  • Adopt measures to minimise or eliminate risks

It’s a way for you to analyse your processing activities and consider any risks they might pose. It focuses on identifying any risks to people’s rights and freedoms, and considers the principles laid down in data protection law.

The key is to start the assessment process early so you can make sure any problems are found (and hopefully fixed) as soon as possible in any project – be this implementing a new system, designing a new app or creating new processes.

When is a DPIA mandatory?

When considering new systems, technologies or processes, a DPIA should be conducted if these might result in a high risk to the rights and freedoms of individuals. A DPIA may also be conducted retrospectively if you believe there are inherent risks.

It’s mandatory under the GDPR to conduct a DPIA in all of the following scenarios:

  • a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person
  • processing on a large scale of special categories of data or of personal data relating to criminal convictions and offences
  • a systematic monitoring of a publicly accessible area on a large scale

Each EU regulatory authority has published its own list of other scenarios in which a DPIA would be mandatory. You can find the UK Information Commissioner’s Office’s list in its DPIA guidance. This includes where you:

  • use innovative technology (note the criteria from the European guidelines)
  • process biometric data or genetic data (note the criteria from the European guidelines)
  • match data or combine datasets from different sources
  • collect personal data from a source other than the individual without providing them with a privacy notice (‘invisible processing’) (note the criteria from the European guidelines)
  • track individuals’ location or behaviour (note the criteria from the European guidelines)
  • profile children or target marketing or online services at them – it’s also worth checking the new ‘Children’s Code’ aimed at protecting children online

When a DPIA is not mandatory… but a good idea

The ICO says it’s “good practice to do a DPIA for any other major project which requires the processing of personal data.” Here are some examples of where it might be advisable to conduct a DPIA, if your processing:

  • would prevent or restrict individuals from exercising their rights
  • means disclosing personal data to other organisations
  • is for a new purpose (i.e. not the purpose the data was originally collected for)
  • will lead to transfer of personal data outside the European Economic Area (EEA)
  • involves contacting individuals in a manner which could be deemed intrusive.

What the ICO expects you to do

The ICO DPIA guidance has a handy checklist of areas to focus on:

  • provide training so staff understand the need to consider a DPIA at the early stages of any plan involving personal data
  • make sure existing policies, processes and procedures include references to DPIA requirements
  • understand the types of processing that require a DPIA, and use the screening checklist to identify the need for a DPIA, where necessary
  • create and document a DPIA process
  • provide training for relevant staff on how to carry out a DPIA

How to build a robust DPIA process

So how do you go about fulfilling the ICO’s expectations above? Here are some steps to take.

A. Getting Board / Senior Management buy-in

Growing awareness and buy-in from across the organisation is crucial. It can be helpful to highlight why DPIAs are a good thing, for example:

    • they’re a warning system – they alert compliance teams, and the business as a whole, to risks before they occur. Prevention is always better than cure
    • by identifying risks before they have an adverse impact, DPIAs can protect you against potential damage to your brand reputation, e.g. from complaints or enforcement action
    • they help management make informed decisions about how your processing will affect the privacy of individuals
    • they show you take data protection seriously and provide evidence, should you need it, of your compliance

Training is also important – I’ll come on to this in a bit – but first you need to make sure your process is fit for purpose….

B. Creating a screening questionnaire

Create a quick set of questions for business owners or project leads to use, which help to identify whether a DPIA is required or not. These can ask about the type of personal data being used, whether it entails any special category data or children’s data, what the aim of the project is and so on.

The answers can be assessed to judge whether a more detailed assessment is really required or not. (It can also show where more training might be needed, if people struggle to answer the questions).
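A screening questionnaire of this kind can be reduced to a handful of yes/no flags mapped against the mandatory triggers. A sketch in Python – the question keys below are assumptions loosely based on the ICO list, not an official checklist:

```python
def dpia_required(answers):
    """Screen a project: True if any high-risk trigger applies.

    `answers` maps question keys to booleans; the keys are illustrative.
    """
    triggers = [
        "automated_decisions_legal_effect",  # Art. 35(3)(a)
        "large_scale_special_category",      # Art. 35(3)(b)
        "systematic_public_monitoring",      # Art. 35(3)(c)
        "innovative_technology",             # from the ICO's own list
        "invisible_processing",
        "tracking_location_or_behaviour",
        "profiling_children",
    ]
    return any(answers.get(t, False) for t in triggers)

# A project tracking user behaviour trips a trigger; plain marketing
# emails on their own do not.
needs_dpia = dpia_required({"tracking_location_or_behaviour": True})
no_dpia = dpia_required({"marketing_emails": True})
```

In a real process the "no trigger hit" outcome would still be recorded, so you can evidence that screening took place even where no full DPIA followed.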

C. The DPIA itself

You need to develop a robust process for conducting a DPIA. The ICO has a template you can use, but it’s a good idea to adapt this to suit your business. Make sure it’s easy to understand and not full of data protection jargon.

These are the core aspects it needs to cover:

    • describe the processing you are planning to do – its nature, scope, context and purposes
    • assess its necessity and proportionality
    • identify and assess any risks
    • identify solutions and integrate into a plan
    • sign off and record outcomes
    • implement risk control plans
    • and finally, keep your DPIA under review

Let’s look at these seven key stages in a little more depth…

1. Describe your processing

These are some of the types of questions you’d want answers to (this is not an exhaustive list):

    • how is personal data being collected/used/stored, and how long is it retained for?
    • what are the source(s) of the personal data?
    • what is the relationship with individuals whose data will be processed?
    • what types of personal data does it involve? Does this include special category data, children’s data or data relating to other vulnerable groups?
    • what is the scale of the activity – how many individuals will be affected?
    • is the processing within individuals’ reasonable expectations?
    • will data be transferred to a third party and is this third party based outside the EEA?
    • what risks have already been identified?
    • what are the objectives? Why is it important to the business and / or beneficial for individuals?

2. Necessity and proportionality

Consider the following questions (again, this is not an exhaustive list):

    • what is the most appropriate lawful basis for processing?
    • is there another way to achieve the same outcome?
    • have you ensured that the minimum amount of personal data is used to achieve your objectives (i.e. data minimisation)?
    • how can you ensure data quality and integrity is maintained?
    • how will you inform individuals about any new processing?
    • how will individuals’ rights be upheld?
    • are any processors used and, if so, how will you ensure their compliance?
    • how will international transfers be protected, and what safeguard mechanisms will be used?
    • who will have access to personal data, does this need to be restricted?
    • where will data be stored and how will it be kept secure?
    • how long will data be retained and how will data be destroyed when no longer required?
    • have the relevant staff received appropriate data protection training?

3. Identify and assess the risks

Identify any privacy issues with the project and the associated risks. These may be risks to the individuals whose data is being processed, or compliance or commercial risks for the organisation.

Is there potential for harm, whether this be physical, material or non-material? A DPIA should ideally benchmark the level of risk using a risk matrix which considers both the likelihood and the severity of any impact on individuals.
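To illustrate the benchmarking idea, here’s a minimal sketch of a likelihood × severity matrix. The 1–3 scales and the “high risk” threshold are invented for illustration – use whatever scales and risk appetite your organisation has agreed:

```python
# Hypothetical risk matrix sketch. The scales and threshold are
# illustrative; they are not prescribed by the ICO or the GDPR.

LIKELIHOOD = {"remote": 1, "possible": 2, "probable": 3}
SEVERITY = {"minimal": 1, "significant": 2, "severe": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Benchmark a risk as likelihood multiplied by severity of impact."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def is_high_risk(likelihood: str, severity: str, threshold: int = 6) -> bool:
    """High-risk items need mitigation, and ICO consultation if unmitigated."""
    return risk_score(likelihood, severity) >= threshold

print(risk_score("probable", "severe"))         # 9
print(is_high_risk("possible", "significant"))  # False (score 4)
```

Scoring each risk this way gives you a consistent basis for deciding which risks need mitigating, which can be accepted, and which are serious enough to warrant consulting the ICO.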

You don’t have to eliminate all risks, but they should be documented, and any residual risks need to be understood and, if appropriate, accepted by the business.

If you identify a high risk that you cannot mitigate, you must consult the ICO before starting the processing.

4. Identify solutions and integrate into a plan

Develop solutions which will eliminate or minimise privacy risks and then consider how these solutions impact on the project.

It can be helpful to use the established ‘four strategies for risk management’ (the 4Ts), i.e.

    • Treat the risk, i.e. adopt measures to minimise or eliminate risk
    • Transfer the risk, e.g. outsource the processing
    • Tolerate it, e.g. accept the risk if it’s within the organisation’s accepted level of risk
    • Terminate it, i.e. stop that specific processing or change the process in such a way that the risk no longer exists
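The 4Ts can be recorded as a simple decision log against each identified risk. This sketch is purely illustrative – the risk entries and actions are made up for the example:

```python
# Illustrative 4Ts risk decision log. The risks, strategies and
# actions below are invented examples, not real assessment output.

from dataclasses import dataclass

@dataclass
class RiskDecision:
    risk: str
    strategy: str   # one of the 4Ts
    action: str

STRATEGIES = {"treat", "transfer", "tolerate", "terminate"}

decisions = [
    RiskDecision("Data retained indefinitely", "treat",
                 "Apply a 12-month deletion schedule"),
    RiskDecision("Low-impact analytics risk", "tolerate",
                 "Within the agreed risk appetite - log and accept"),
]

# Every decision must map to one of the four recognised strategies.
assert all(d.strategy in STRATEGIES for d in decisions)
for d in decisions:
    print(f"{d.strategy.upper()}: {d.risk} -> {d.action}")
```

Keeping a log like this feeds directly into stage 5: it gives the person signing off a clear record of each residual risk and how it was handled.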

5. Sign off and record outcomes

Someone must sign off that the DPIA is complete and be accountable for any residual risks. It’s a good idea to log residual risks in your Risk Register.

6. Implement risk control plans

Put the agreed measures and controls in place, and track them through to completion before the processing starts.

7. And finally, keep your DPIA under review

Revisit your assessment periodically, and whenever the processing changes, to check the identified risks and controls are still valid.

There’s also lots of useful content on this in the ICO’s DPIA Guidance.

D. Awareness and Training

Once you have your questionnaire and DPIA process ready to go, it’s time to make sure people know about it! If people aren’t aware, they’ll be busy doing fabulously innovative things, not considering the potential data protection issues and the impact on people’s privacy.

Making sure your teams know what a DPIA is, in simple layman’s terms, is an important step – building an understanding about why it’s important and the benefits to the business as a whole.

Creating short, easy-to-understand guidelines and raising awareness via other means helps reinforce the message that DPIAs are a good thing and that people need to think about data protection in their day-to-day work.

It’s also important to develop people’s skills. After all, the DPO (or the team/person responsible for data protection) can’t do this single-handed. You need key people to know:

    • what a DPIA entails
    • how to answer the questions
    • what are the types of risks to look out for
      and
    • what type of solutions will mitigate any identified risks

Holding workshops with relevant staff to discuss how you conduct a DPIA, and/or perhaps running through an example, can help improve people’s skills. My key tip would be to try not to over-complicate things and to keep it straightforward.

In summary, whether or not you are required by law to complete a DPIA, they are a useful way to make sure data protection is considered from the outset, with no nasty surprises just before your project launches!

“But it’s essential that we go live on Friday!” If I had a penny for every time I’ve heard this one. If only they’d known about, or thought of, speaking to the people responsible for data protection.

Often a DPIA won’t be required, but there’ll be times when it’s mandatory or just a very good idea.

 

Data Protection team over-stretched?  We can review your existing DPIA process or help you to develop one. We can also do remote DPIA workshops for key members of your teams – Get in touch

Data Protection by Design: Part 2 – How to approach it

September 2020

How to implement Data Protection by Design 

Following my colleague Phil Donn’s popular article on Privacy by Design (Part 1), I’m delving into the detail of what to consider when you are developing new applications, products and services, and how to approach the assessment process.

Good privacy requires collaboration

As a reminder, Data Protection By Design requires organisations to embed data protection into the design of any new processing, such as an app, product or service, right from the start.

This implies the DPO or Privacy team need to work with any project team leading the development, from the outset. In practice, this means your teams need to highlight any plans at the earliest stages.

A crucial part of a data protection or privacy role is encouraging the wider business to approach you for your input into changes which have implications for privacy.

Building strong relationships with your Project and Development teams, as well as with your CISO or Information Security team, will really help you make a step change to embed data protection into the culture as well as the processes of the organisation.

What are the key privacy considerations for Data Protection by Design?

Here are some useful pointers when assessing data protection for new apps, services and products.

  • Purpose of processing – be very clear about the purpose(s) you are processing personal data for. Make sure these purposes are both lawful and carried out fairly. This is especially important where any special category data or other sensitive data may be used.
  • End-to-end security – how will data be secured both in transit (in and out of the app, service or product) and when it’s at rest?
  • Access controls – check access to data will be restricted only to those who need it for specific business purposes. And make sure the level of access (e.g. view, use, edit, and so on) is appropriate for each user group.
  • Minimisation – collect and use the minimum amounts of personal data required to achieve the desired outcomes.
  • Default settings – the most privacy-protective settings should apply by default; aim to agree proactive, not reactive, measures to protect the privacy of individuals.
  • Data sharing – will personal data be shared with any third parties? If so, what will the lawful basis be for sharing this data?
  • Transparency – have we notified individuals of this new processing? (Remember, this may include employees as well as customers). If we’re using AI, can we explain the logic behind any decisions which may affect individuals? Have we told people their data will be shared?
  • Information rights – make sure processes are in place to handle information rights. For example, can data be accessed to respond to Subject Access Requests? Can data be erased or rectified?
  • Storage limitation – appropriate data retention periods should be set and adhered to. These need to take into account any laws which may apply. To find out more see our Data Retention Guidance.
  • Monitoring – what monitoring will or needs to take place at each stage to ensure data is protected?
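The “default settings” point above is worth making concrete. A minimal sketch, with hypothetical setting names, of what “by default” means in code: the most privacy-protective options apply unless the individual actively opts in.

```python
# Hypothetical privacy-by-default sketch. The setting names are
# invented for illustration; the point is the defaults themselves.

from dataclasses import dataclass

@dataclass
class UserPrivacySettings:
    share_analytics: bool = False   # off unless the user opts in
    marketing_emails: bool = False  # no marketing without consent
    profile_public: bool = False    # profiles private by default

# A brand-new user gets the most protective configuration automatically.
settings = UserPrivacySettings()
print(settings)
```

Designing the data model this way means a user who never touches their settings is still protected – the opposite of the pre-ticked-box approach.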

The assessment process

If there’s likely to be high risk to individuals, you should carry out a Data Protection Impact Assessment. This should include an assessment covering the requirements above.

Many organisations use a set of screening questions to confirm if a DPIA is likely to be required and I would recommend this approach.

In most cases it will also be appropriate for the Project team to consult with their CISO or Information Security Team. It’s likely a Security Impact Assessment (SIA) will also need to be carried out.

In fact, adopting a joint set of screening questions which indicate if there’s a need for a security assessment and/or a DP assessment is even better!

Embrace the development lifecycle

The typical stages involved when developing a new app, product or service are:

Planning > Design > Development > Testing > Early life evaluation > Production

Sometimes these stages merge together, it’s not always clear where one ends and another starts, or they may run in parallel.

This can make the timing of a data protection assessment tricky, particularly if your business uses an Agile development methodology, where the application design, development and testing happen rapidly in bi-weekly ‘sprints’.

I find when Agile is used the answers to certain data protection questions are not necessarily available early on. Key decisions affecting the design may be deferred until later stages of the project. The final outcomes of the processing can be a moving feast.

I always take the data protection assessment process for new developments step by step, engaging with the Project team as early as possible and starting with the privacy fundamentals.

For example, try to establish answers to the following questions:

  • What data will be used?
  • Will any new data be collected?
  • What are the purposes for processing?
  • What will the outcomes look like?
  • How will individuals be notified about any new processing?
  • Is the app, service or product likely to enable decisions to be made which could affect certain individuals?

An ongoing dialogue with the Project team is helpful. This can be scheduled in advance of key development sprints and any budget decisions which could affect development.

This way the more detailed data protection requirements can be assessed as the design evolves – enabling appropriate measures and controls to protect personal data to be agreed prior to development and before any investment decisions.

Let me give you an example…

I recently helped a client to carry out a DPIA for a new application which aimed to improve efficiency by looking at operational workflow data, including certain data on the employees who carried out specific tasks.

When we started, the design was only partially known; it wasn’t yet agreed whether certain components were in or out of scope, let alone how they would work. This meant data protection considerations such as data minimisation (including only the data necessary for the processing), appropriate access controls and specific retention periods couldn’t yet be decided.

We worked through these items as the scope was agreed. I gave input as possible designs were considered, prior to development sprints. We gradually agreed and deployed appropriate measures and controls to protect the privacy of individuals.

Too often in my experience the privacy team is called in too late. This only leads to frustration when privacy issues are raised in the later stages of a project. It can cause costly delays, or the poor privacy team is pushed into making hasty decisions. All of this is unnecessary if teams know to involve the privacy team from the outset.

It can take time and perseverance to get your colleagues on board, and to help them understand the benefits of thinking about data protection from the start and throughout the lifecycle of a project. But once you do, it makes your business operations run all the more smoothly.

 

Can we help? Our experienced team can support you with embedding Data Protection By Design into your organisation, or with specific assessments –  contact us

 

Data Protection by Design: Part 1 – The Basics

August 2020

Data Protection by Design and by Default – What does it mean? 

You might hear the terms ‘privacy by design’ and ‘data protection by design and by default’ being used when discussing data protection. We’re frequently told to think privacy first, by considering data protection at the outset of any project and embedding it into policies and processes.

That’s all very well, but what does ‘Data Protection by Design’ really mean (and why is it also called ‘Privacy by Design’)? Do you need to be concerned about it? And how do you approach it in practice?

When you delve into the detail, this stuff quickly becomes complex. I’m going to try and avoid ‘privacy speak’ and jargon as much as I can and give an overview of how it all started and where we are now.

What is Privacy/Data Protection by Design?

Data Protection by Design (and also ‘by Default’) are terms ushered in by GDPR.

But the concept’s not new; the roots lie in Privacy by Design which has been around for some time. The brains behind Privacy by Design is Ann Cavoukian (a former Information and Privacy Commissioner for the Canadian province of Ontario). The concept was officially recognised as an essential component of fundamental privacy protection in 2010.

Cavoukian’s approach led to a new way of integrating privacy into products, business processes and policies. At its core it’s all about incorporating privacy measures at the design stage of a project or policy, rather than bolting them on afterwards.

The basis of this approach is to allow businesses to protect data and privacy without compromising commercial effectiveness right from Day One. I’m sure practitioners in other fields, for example Health and Safety or HR, will be familiar with this approach too.

Privacy by Design is based on seven principles designed to embed privacy into a project’s lifecycle. For more detail, take a look at the IAPP’s ‘Privacy by Design: The Foundational Principles’.

Fast forward to GDPR…

In the past, Privacy by Design was considered a great approach to take and adopted by many businesses worldwide – but it wasn’t mandatory. What’s different now is GDPR has made it a legal requirement.

GDPR also gave us the new term Data Protection by Design and by Default. This means organisations who fall under the scope of GDPR are obliged to put appropriate technical and organisational measures in place. These are commonly referred to as TOMs.

ICO guidance explains: ‘businesses have a general obligation to implement appropriate technical and organisational measures to show that you have considered and integrated the principles of data protection into your processing activities.’

You need to make sure data protection principles, such as data minimisation and purpose limitation, are implemented effectively from the start. Crucially, such measures also need to focus on protecting people’s privacy rights.

The ICO has produced detailed guidance on the topic, to help you navigate how to consider data protection and privacy issues at the start of your projects, products and processes.

As an aside, this doesn’t mean everything has to grind to a halt with cries of ‘I can’t do that because of GDPR’!

The more familiar you become with the basic principles, the easier it is to explain and incorporate them into your business. That’s not to say it’s always a piece of cake – sometimes it isn’t – but neither does it have to be the ball and chain some make it out to be.

Do you need to worry about this stuff?

There’s a short answer to this question – Yes! It’s a legal requirement under GDPR, albeit some organisations will take this very seriously and others will take a laxer approach.

How to make a start

This is a topic that can feel overwhelming to begin with. It’s common to think, “how on earth do I get everyone across our business to think about data protection and consider people’s privacy in everything we do?”

Here are a few tips on organisational measures;

  • Benefits – think about how this approach is good for business and for your employees. It’s not just about trying to avoid data breaches; it’s about being trustworthy and taking care about how you handle and use people’s information. Privacy can be a brand asset; it can save costs and improve the bottom line. Increasingly, organisations want to work with partners who can demonstrate sound privacy credentials. In many instances some of the most sensitive data you handle will be that of your employees. You all have an interest in making sure you handle everyone’s personal data in a secure and private way.
  • Collaborate with InfoSec – The two disciplines of privacy and security are intrinsically linked. Businesses are most successful at protecting personal data when the Info Sec and Data Protection teams are joined up, working in tandem.
  • Innovation – gone are the days when data protection was the place where dreams went to die! Sure, there are checks and balances that need to be considered when a great idea has privacy risks. When this happens, it’s up to the data protection team to be as innovative as their colleagues in helping that idea flourish. You never know – your approach to privacy can add value to a project, not diminish its effectiveness.
  • Awareness – think about fresh ways to get the message across – data protection matters. This is a balancing act, because we wouldn’t want to scare people to the extent they worry about the slightest thing. Try to explain that once data protection principles are embedded, much of it is common sense.
  • DPIAs – data protection impact assessments are one of the most important tools in your data protection by design toolbox (you don’t have one?). DPIAs are like a fire alarm – are your developers busy creating the most fabulous app ever? The DPIA should alert them to issues which, if ignored, might be project-breaking to fix later. As an aside, many DPIA templates I’ve seen are unduly complex and impossible for most staff to even attempt. So try and make this an easier process – jettison the jargon and ask straightforward questions.
  • Data Governance – I apologise, this really is the dreariest of terms. Nonetheless, it’s seriously worth developing a governance framework across your business which sets out who is responsible, who is accountable for your data and how the data is used. It can help to make sure processes and policies are robust and kept up to date.
  • Training – there’s nothing more empowering than effective training; making sure your people understand data protection principles, what privacy risks might look like and understand how it’s relevant to their job. Once this stuff is explained simply and effectively, it’s amazing how quickly this falls into place.

There’s an old saying: “What’s the best way to eat an entire elephant?” The answer is, “by breaking it into pieces first.”

You know your business – all you need to do now is break down the data protection stuff into manageable chunks as you apply them to your projects. The first couple might be tricky, but after that? There’s no substitute for getting stuck in and applying the principles to real-world problems. And the good news is there’s plenty of advice, training, templates and guidance available.

Use of automated facial recognition by South Wales Police ruled ‘unlawful’

August 2020

The Court of Appeal has upheld a legal challenge against the use of automated facial recognition (AFR) technology by South Wales Police (SWP).

The appeal was brought by Ed Bridges from Cardiff, backed by the civil rights group Liberty.

The AFR technology in question uses cameras to scan faces within a crowd, then matches these images against a ‘Watch List’ (which can include images of suspects, missing people and persons of interest). This flags up potential matches to officers.

Mr Bridges argued his human rights were breached when his biometric data was analysed without his knowledge or consent.

Liberty’s barrister, Dan Squires QC, argued there were insufficient safeguards within the current laws to protect people from an arbitrary use of the technology, or to ensure its use is proportional.

The Court upheld three of the five specific points of appeal, finding that:

  • There was no clear guidance on where AFR Locate (the technology used) could be deployed and who could be put on a watchlist. The Court held that this afforded too broad a discretion to police officers to meet the standard required by law under Article 8 of the Human Rights Convention.
  • The Data Protection Impact Assessment (DPIA) carried out by South Wales Police was found to be ‘deficient’ because it was written on the basis that Article 8 of the Human Rights Convention was not infringed.
  • SWP did not take reasonable steps to find out if the software had a bias on racial or gender grounds.

 

This successful appeal followed the dismissal of the case at the Divisional Court on 4 September 2019 by two senior judges, who concluded that use of AFR technology was not unlawful.

Talking about the latest verdict, Mr Bridges commented:

“I’m delighted that the court has agreed that facial recognition clearly threatens our rights. This technology is an intrusive and discriminatory mass surveillance tool.

“For three years now, South Wales Police has been using it against hundreds of thousands of us, without our consent and often without our knowledge. We should all be able to use our public spaces without being subjected to oppressive surveillance.”

SWP have confirmed that they do not seek to appeal against the Court of Appeal’s judgment.

What impact is this ruling on facial recognition likely to have?

The ruling’s impact will extend across other police forces. However, it may not prevent them from using AFR technologies in the future.

The judges commented that the benefits of AFR are “potentially great” and the intrusion into people’s privacy was “minor”. However, more care is clearly needed regarding how it’s used.

To move forward, police forces will need clearer more detailed guidance. For example, the ruling indicates officers should document who they are looking for and what evidence they have that those targets are likely to be in the monitored area.

The England and Wales’ Surveillance Camera Commissioner, Tony Porter, suggested that the Home Office should update their Code of Practice.

It will be interesting to watch how this develops. The benefits clearly need to be carefully balanced with the privacy risks.