Data Protection by Design: Part 2 – How to approach it

September 2020

How to implement Data Protection by Design 

Following my colleague Phil Donn’s popular article on Privacy by Design (Part 1), I’m delving into the detail of what to consider when you’re developing new applications, products and services, and how to approach the assessment process.

Good privacy requires collaboration

As a reminder, Data Protection By Design requires organisations to embed data protection into the design of any new processing, such as an app, product or service, right from the start.

This implies the DPO or Privacy team need to work with any project team leading the development, from the outset. In practice, this means your teams need to highlight any plans at the earliest stages.

A crucial part of a data protection or privacy role is encouraging the wider business to approach you for your input into changes which have implications for privacy.

Building strong relationships with your Project and Development teams, as well as with your CISO or Information Security team, will really help you make a step change to embed data protection into the culture as well as the processes of the organisation.

What are the key privacy considerations for Data Protection by Design?

Here are some useful pointers when assessing data protection for new apps, services and products.

  • Purpose of processing – be very clear about the purpose(s) you are processing personal data for. Make sure these purposes are both lawful and carried out fairly. This is especially important where any special category data or other sensitive data may be used.
  • End-to-end security – how will data be secured both in transit (in and out of the app, service or product) and when it’s at rest?
  • Access controls – check access to data will be restricted only to those who need it for specific business purposes. And make sure the level of access (e.g. view, use, edit, and so on) is appropriate for each user group – see the sketch after this list.
  • Minimisation – collect and use the minimum amounts of personal data required to achieve the desired outcomes.
  • Default settings – aim to agree proactive rather than reactive measures, so the most privacy-protective settings apply by default.
  • Data sharing – will personal data be shared with any third parties? If so, what will the lawful basis be for sharing this data?
  • Transparency – have we notified individuals of this new processing? (Remember, this may include employees as well as customers). If we’re using AI, can we explain the logic behind any decisions which may affect individuals? Have we told people their data will be shared?
  • Information rights – make sure processes are in place to handle information rights. For example, can data be accessed to respond to Subject Access Requests? Can data be erased or rectified?
  • Storage limitation – appropriate data retention periods should be set and adhered to. These need to take into account any laws which may apply. To find out more see our Data Retention Guidance.
  • Monitoring – what monitoring will or needs to take place at each stage to ensure data is protected?
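
To make the access controls point concrete, here’s a minimal sketch in Python of the kind of permission matrix you might agree for each user group. The role names and actions are hypothetical examples, not taken from any particular system:

```python
# Illustrative sketch only: a hypothetical permission matrix mapping user
# groups to levels of access. Role names and actions are invented examples.
PERMISSIONS = {
    "claims_handler": {"view", "edit"},
    "team_manager": {"view", "edit", "export"},
    "auditor": {"view"},
}

def can(role, action):
    """Return True if the given user group is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

print(can("auditor", "view"))  # True
print(can("auditor", "edit"))  # False
```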

The assessment process

If there’s likely to be high risk to individuals, you should carry out a Data Protection Impact Assessment. This should include an assessment covering the requirements above.

Many organisations use a set of screening questions to confirm if a DPIA is likely to be required and I would recommend this approach.

In most cases it will also be appropriate for the Project team to consult with their CISO or Information Security Team. It’s likely a Security Impact Assessment (SIA) will also need to be carried out.

In fact, adopting a joint set of screening questions which indicate if there’s a need for a security assessment and/or a DP assessment is even better!
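
To illustrate, here’s a minimal sketch in Python of what a joint screening checklist might look like. The questions and their mapping to each assessment are hypothetical – real screening should follow the ICO’s DPIA screening criteria and your own security requirements:

```python
# Illustrative sketch only: hypothetical joint screening questions flagging
# whether a DPIA and/or a Security Impact Assessment (SIA) is indicated.
SCREENING_QUESTIONS = [
    # (question, indicates_dpia, indicates_sia)
    ("Will special category or other sensitive data be processed?", True, True),
    ("Will individuals be systematically monitored?", True, False),
    ("Will personal data be shared with third parties?", True, True),
    ("Will data move outside existing, already-assessed systems?", False, True),
]

def screen(answers):
    """Map yes/no answers (in question order) to the assessments indicated."""
    dpia = any(a and d for a, (_, d, _s) in zip(answers, SCREENING_QUESTIONS))
    sia = any(a and s for a, (_, _d, s) in zip(answers, SCREENING_QUESTIONS))
    return {"dpia_indicated": dpia, "sia_indicated": sia}

print(screen([True, False, False, False]))
# {'dpia_indicated': True, 'sia_indicated': True}
```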

Embrace the development lifecycle

The typical stages involved when developing a new app, product or service are:

Planning > Design > Development > Testing > Early life evaluation > Production

Sometimes these stages merge together – it’s not always clear where one ends and another starts – or they may run in parallel.

This can make the timing of a data protection assessment tricky, particularly if your business uses an Agile development methodology, where the application design, development and testing happen rapidly in short ‘sprints’, often two weeks long.

I find when Agile is used the answers to certain data protection questions are not necessarily available early on. Key decisions affecting the design may be deferred until later stages of the project. The final outcomes of the processing can be a moving target.

I always take the data protection assessment process for new developments step by step, engaging with the Project team as early as possible and starting with the privacy fundamentals.

For example, try to establish answers to the following questions:

  • What data will be used?
  • Will any new data be collected?
  • What are the purposes for processing?
  • What will the outcomes look like?
  • How will individuals be notified about any new processing?
  • Is the app, service or product likely to enable decisions to be made which could affect certain individuals?

An ongoing dialogue with the Project team is helpful. This can be scheduled in advance of key development sprints and any budget decisions which could affect development.

This way the more detailed data protection requirements can be assessed as the design evolves – enabling appropriate measures and controls to protect personal data to be agreed prior to development and before any investment decisions.

Let me give you an example…

I recently helped a client to carry out a DPIA for a new application which aimed to improve efficiency by looking at operational workflow data, including certain data on employees who carried out specific tasks.

When we started, the design was only partially known; it wasn’t yet agreed whether certain components were in or out of scope, let alone designed. Therefore data protection considerations such as data minimisation (including only the data necessary for the processing), appropriate access controls and specific retention periods could not yet be decided.

We worked through these items as the scope was agreed. I gave input as possible designs were considered, prior to development sprints. We gradually agreed and deployed appropriate measures and controls to protect the privacy of individuals.

Too often, in my experience, the privacy team is called in too late. This leads to frustration when privacy issues are raised in the later stages of a project. It can cause costly delays, or the privacy team is pushed into making hasty decisions. All of which is unnecessary if teams know to go to the privacy team from the outset.

It can take time and perseverance to get your colleagues on board, and to help them understand the benefits of thinking about data protection from the start and throughout the lifecycle of projects. But once you do, your business operations will run all the more smoothly.

 

Can we help? Our experienced team can support you with embedding Data Protection by Design into your organisation, or with specific assessments – contact us

 

Data Protection by Design: Part 1 – The Basics

August 2020

Data Protection by Design and by Default – What does it mean? 

You might hear the terms ‘privacy by design’ and ‘data protection by design and by default’ being used when discussing data protection. We’re frequently told to think privacy first, by considering data protection at the outset of any project and embedding it into policies and processes.

That’s all very well, but what does ‘Data Protection by Design’ really mean (and why is it also called ‘Privacy by Design’)? Do you need to be concerned about it? And how do you approach it in practice?

When you delve into the detail, this stuff quickly becomes complex. I’m going to try and avoid ‘privacy speak’ and jargon as much as I can and give an overview of how it all started and where we are now.

What is Privacy/Data Protection by Design?

Data Protection by Design (and also ‘by Default’) are terms ushered in by GDPR.

But the concept’s not new; its roots lie in Privacy by Design, which has been around for some time. The brains behind Privacy by Design is Ann Cavoukian (a former Information and Privacy Commissioner for the Canadian province of Ontario). The concept was officially recognised as an essential component of fundamental privacy protection in 2010.

Cavoukian’s approach led to a new way of integrating privacy into products, business processes and policies. At its core it’s all about incorporating privacy measures at the design stage of a project or policy, rather than bolting them on afterwards.

The basis of this approach is to allow businesses to protect data and privacy without compromising commercial effectiveness right from Day One. I’m sure practitioners in other fields, for example Health and Safety or HR, will be familiar with this approach too.

Privacy by Design is based on seven principles designed to embed privacy into a project’s lifecycle. For more detail take a look at the IAPP’s Privacy by Design: The Foundational Principles.

Fast forward to GDPR…

In the past, Privacy by Design was considered a great approach to take and was adopted by many businesses worldwide – but it wasn’t mandatory. What’s different now is GDPR has made it a legal requirement.

GDPR also gave us the new term Data Protection by Design and by Default. This means organisations who fall under the scope of GDPR are obliged to put appropriate technical and organisational measures in place. These are commonly referred to as TOMs.

ICO guidance explains why: businesses ‘have a general obligation to implement appropriate technical and organisational measures to show that you have considered and integrated the principles of data protection into your processing activities.’

You need to make sure data protection principles, such as data minimisation and purpose limitation, are implemented effectively from the start. Crucially, such measures also need to focus on protecting people’s privacy rights.

The ICO has produced detailed guidance on the topic, to help you navigate how to consider data protection and privacy issues at the start of your projects, products and processes.

As an aside, this doesn’t mean everything must grind to a halt, with people claiming ‘I can’t do that because of GDPR’!

The more familiar you become with the basic principles, the easier it is to explain and incorporate them into your business. That’s not to say it’s always a piece of cake – sometimes it isn’t – but neither does it have to be the ball and chain some make it out to be.

Do you need to worry about this stuff?

There’s a short answer to this question – yes! It’s a legal requirement under GDPR, although some organisations will take it very seriously and others will take a laxer approach.

How to make a start

This is a topic that can feel overwhelming to begin with. It’s common to think, “how on earth do I get everyone across our business to think about data protection and consider people’s privacy in everything we do?”

Here are a few tips on organisational measures:

  • Benefits – think about how this approach is good for business and for your employees. It’s not just about trying to avoid data breaches, it’s about being trustworthy and taking care about how you handle and use people’s information. Privacy can be a brand asset; it can save costs and improve the bottom line. Increasingly, organisations want to work with partners who can demonstrate sound privacy credentials. In many instances some of the most sensitive data you handle will be that of your employees. You all have an interest in making sure you handle everyone’s personal data in a secure and private way.
  • Collaborate with InfoSec – the two disciplines of privacy and security are intrinsically linked. Businesses are most successful at protecting personal data when the InfoSec and Data Protection teams are joined up, working in tandem.
  • Innovation – gone are the days when data protection was the place where dreams went to die! Sure, there are checks and balances that need to be considered when a great idea has privacy risks. When this happens, it’s up to the data protection team to be as innovative as their colleagues in helping that idea flourish. You never know – your approach to privacy can add value to a project, not diminish its effectiveness.
  • Awareness – think about fresh ways to get the message across – data protection matters. This is a balancing act, because we wouldn’t want to scare people to the extent they worry about the slightest thing. Try to explain that once data protection principles are embedded, much of it is common sense.
  • DPIAs – data protection impact assessments are one of the most important tools in your data protection by design toolbox (you don’t have one?). DPIAs are like a fire alarm – are your developers busy creating the most fabulous app ever? The DPIA should alert them to issues which, if ignored, might be project-breaking to fix later. As an aside, many DPIA templates I’ve seen are unduly complex and impossible for most staff to even attempt. So, try and make this an easier process – jettison the jargon and ask straightforward questions.
  • Data Governance – I apologise, this really is the dreariest of terms. Nonetheless, it’s seriously worth developing a governance framework across your business which sets out who is responsible, who is accountable for your data and how the data is used. It can help to make sure processes and policies are robust and kept up to date.
  • Training – there’s nothing more empowering than effective training: making sure your people understand data protection principles, what privacy risks might look like, and how it’s all relevant to their job. Once this stuff is explained simply and effectively, it’s amazing how quickly it falls into place.

There’s an old saying: “What’s the best way to eat an entire elephant?” The answer is, “by breaking it into pieces first.”

You know your business – all you need to do now is break down the data protection stuff into manageable chunks as you apply them to your projects. The first couple might be tricky, but after that? There’s no substitute for getting stuck in and applying the principles to real-world problems. And the good news is there’s plenty of advice, training, templates and guidance available.

Use of automated facial recognition by South Wales Police ruled ‘unlawful’

August 2020

The Court of Appeal has upheld a legal challenge against the use of automated facial recognition (AFR) technology by South Wales Police (SWP).

The appeal was brought by Ed Bridges from Cardiff, backed by the civil rights group Liberty.

The AFR technology in question uses cameras to scan faces within a crowd, then matches these images against a ‘Watch List’ (which can include images of suspects, missing people and persons of interest). This flags up potential matches to officers.

Mr Bridges argued his human rights were breached when his biometric data was analysed without his knowledge or consent.

Liberty’s barrister, Dan Squires QC, argued there were insufficient safeguards within the current laws to protect people from an arbitrary use of the technology, or to ensure its use is proportionate.

The Court upheld three of the five specific points of appeal, finding that:

  • There was no clear guidance on where AFR Locate (the technology used) could be used and who could be put on a watchlist. The Court held that this was too broad a discretion to afford to police officers to meet the standard required by law under Article 8 of the European Convention on Human Rights.
  • The Data Protection Impact Assessment (DPIA) carried out by South Wales Police was found to be ‘deficient’, because it was written on the basis that Article 8 was not infringed.
  • SWP did not take reasonable steps to find out if the software had a bias on racial or gender grounds.

 

This successful appeal followed the dismissal of the case at the Divisional Court on 4 September 2019 by two senior judges, who concluded that use of AFR technology was not unlawful.

Talking about the latest verdict, Mr Bridges commented:

“I’m delighted that the court has agreed that facial recognition clearly threatens our rights. This technology is an intrusive and discriminatory mass surveillance tool.

“For three years now, South Wales Police has been using it against hundreds of thousands of us, without our consent and often without our knowledge. We should all be able to use our public spaces without being subjected to oppressive surveillance.”

SWP have confirmed that they do not seek to appeal against the Court of Appeal’s judgment.

What impact is this ruling on facial recognition likely to have?

The ruling’s impact will extend across other police forces. However, it may not prevent them from using AFR technologies in the future.

The judges commented the benefits from AFR are “potentially great” and the intrusion into people’s privacy was “minor”. However, more care is clearly needed regarding how it’s used.

To move forward, police forces will need clearer, more detailed guidance. For example, the ruling indicates officers should document who they are looking for and what evidence they have that those targets are likely to be in the monitored area.

The England and Wales’ Surveillance Camera Commissioner, Tony Porter, suggested that the Home Office should update their Code of Practice.

It will be interesting to watch how this develops. The benefits clearly need to be carefully balanced with the privacy risks.

Dashcams and GDPR: Assessing the privacy implications

August 2020

7 point privacy guide for dashcams

The use of dashcams by taxi firms, business vehicle fleets and others is on the increase. Their use is encouraged by insurers and they are seen as an effective way of combating accident insurance fraud.

As dashcams are highly likely to capture images of people, companies installing them need to take stock and consider data protection law. Footage of identifiable individuals is personal data, and you should consider the potential privacy impacts on those caught on camera, much in the same way as you would when using CCTV.

Here’s my 7 point privacy guide for anyone using or preparing to install dashcams.

  1. Confirm what you’re using cameras for. You need to start with a clearly defined purpose (or purposes) for the images you wish to capture – for example, ‘to enable us to investigate alleged accidents for insurance purposes’.
  2. Create a policy. Your ‘Dashcam Privacy Policy’ should make it clear what specific purposes these images are used for and identify the lawful basis for processing them. You should make sure the processing is necessary and limited to those purposes. A policy should also explain what measures and controls are in place to protect individuals whose images are captured.
  3. Brief your drivers. Put simply, drivers who operate the dashcams and anyone else who may use the images should fully understand how the cameras should be used and how the data collected should be handled.
  4. Notify the public. You should consider putting clear signs on vehicles which have cameras, in a similar way to how you would tell people CCTV is in operation, and declare the capture of dashcam images within your company’s privacy notice.
  5. Make sure images are transferred and stored securely. Many modern dashcams used by vehicle fleets provide the capability to schedule image downloads daily to a central image library for storage. These transfers must be secure, as must the location where the images are stored. Access should be restricted, given only to those who are authorised to use the images for the purposes you have specified.
  6. Decide how long to keep the recordings. One of the core data protection principles is not to keep personal data for longer than you need it. You may only need to keep most images for a very limited period, in case an accident is reported to you. Some claims may come in weeks after the event – so your own experience needs to dictate what is a reasonable period to hold on to recordings. If no accident is reported within the agreed period you should destroy the images. When an accident is reported you may need to retain the specific section of dashcam footage related to the accident or alleged accident whilst the claim is investigated, or if a legal hold is required. You should then delete it after a suitable period when it’s no longer necessary – see the sketch after this list.
  7. You might need to carry out a Data Protection Impact Assessment. If you think the processing of personal data by your dashcams might potentially result in a high risk to individuals, you should conduct a DPIA.
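
To illustrate the retention logic in point 6, here’s a minimal sketch in Python. The 31-day default period is a hypothetical figure; your own claims experience should dictate the real one:

```python
from datetime import date, timedelta

# Illustrative sketch only: a hypothetical 31-day default retention period.
DEFAULT_RETENTION = timedelta(days=31)

def should_delete(recorded_on, today, accident_reported, legal_hold):
    """True once footage has passed its retention period with no reason to keep it."""
    if accident_reported or legal_hold:
        return False  # retain while a claim is investigated or a hold applies
    return today - recorded_on > DEFAULT_RETENTION

print(should_delete(date(2020, 7, 1), date(2020, 8, 15), False, False))  # True
print(should_delete(date(2020, 7, 1), date(2020, 8, 15), True, False))   # False
```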

If your vehicle fleet includes heavy goods vehicles (HGVs) of over 12 tonnes gross vehicle weight, which operate within the Greater London area, you should look ahead to complying with the forthcoming Direct Vision Standard.

This new safety standard was created to improve the safety of all road users, including pedestrians, cyclists and motorcyclists. Depending on your specific vehicles, you might be required to fit blind spot cameras which, like dashcams, are likely to capture images of people. The new standard is expected to come into force around 1 March 2021, at the earliest.

Meeting the data protection requirements for business use of dashcams doesn’t need to be onerous, but shouldn’t be overlooked.

Why is it so hard to explain how we use personal data?

June 2020

Five ways to help explain complex and contentious data uses

I was chatting to my niece the other day, a young mum with two young children who spends a lot of time on Facebook. She has hundreds of friends. She had posted a message asking if it was true that when you install the Covid app it will ask permission to share all your contacts from Facebook. One of her friends had posted:

“I am asking you to please delete me and my details from your phone contact list and any other app, as well as un-friend me on Facebook before installing the tracking app on your smart phone.”

I was rather taken aback by this wildly inaccurate assertion, given the reality is a far cry from it. The app is basically designed to pick up Bluetooth signals, so you are able to tell whether you have been in close proximity to anyone who has reported symptoms or tested positive.

I don’t propose to go into the pros and cons of centralised vs de-centralised databases as the arguments have been rehearsed extensively elsewhere. Whatever your political persuasion we need this track and trace programme to succeed. This is a public health crisis and we need everyone to sign up. If there was ever a situation requiring special measures, this must surely be it.

There is a caveat though: we can’t allow carte blanche to collect and keep any data. Some have expressed valid concerns about the open-ended nature of some of the proposals. Is it really necessary to keep ‘Track and Trace’ data for 20 years?

My niece’s post got me thinking about the importance of clear and transparent communication from Data Controllers around the use of personal data and how, thus far, it has been largely absent.

Successfully explaining the how and why of data processing has to be a top priority. Otherwise we’ll see many more of those misleading messages spreading like wildfire, resulting in anxious and concerned people avoiding the app and reducing the efficacy of the programme. This point applies to every single business which processes personal data.

To keep things practical here’s a checklist of five ways to help get the message across:

  1. Use different communication methods – not everyone likes reading long screeds of text, particularly if, like my niece, you are dyslexic. It’s not going to happen. I know it is early days, but I hope the NHS and the government indulge in some creative communication methods, such as infographics, videos and cartoons, to get their message across. Channel 4 and The Guardian are exemplars here.
  2. Use plain English – if you have to write it down, make sure it’s couched in terms that your target audience will understand. Plain English, short sentences and easy-to-understand words should be deployed to get your message across. Various reports place the average reading age at 8, 9 or 11. Whatever the truth, there are large chunks of the population who will not understand what you have written if you restrict your messaging to rather formal and, frankly, long-winded DPIAs and Privacy Statements.
  3. Use layers of communication – the ICO advocates a layered approach to communicating complicated messages. If you create a thread through your messages, from clear top-level headlines through to links to additional information, there is a higher chance of achieving better levels of comprehension.
  4. Keep it short and sweet – having read the 30+ page DPIA for the Covid app, I was struck by how repetitive it is. Not only do you lose the will to live, but comprehension levels are low and confusion levels are high, leading to Twitter storms about what is and is not in the document. All of which is rather unhelpful.
  5. Be upfront and transparent – not only is it easier to understand, but most sensible people can work out for themselves if the data processing makes sense, without anyone needing to embellish it with soothing words which obfuscate and confuse. It can feel scary to tell individuals what is happening with their data, but if you can explain why and, crucially, explain what’s in it for the individual, all will be fine. For fans of Gogglebox over the last few weeks, it’s perfectly obvious that people can work out what’s going on.

Overall though, this is a major marketing challenge. Explaining how you use personal data is an important branding project which allows a company to reflect its values and its respect for its customers.

The marketing teams need to get close to their legal colleagues and use their formidable communication skills to make these important data messages resonate and make sense.

Seven Step Ad Tech Guide from DMA and ISBA

May 2020

The DMA and ISBA have produced a guide for marketers and advertisers to help them navigate the complexity of handling personal data in Ad Tech.

This guide was written in response to the ICO’s Ad Tech Update which looked into how data was used in auction style Real Time Bidding.

The ICO had identified a number of concerns relating to the protection of the rights of data subjects through the use of Real Time Bidding (RTB) in the programmatic delivery of digital advertising.

As background for the uninitiated, the majority of digital advertising is delivered programmatically (through automation) via a variety of methods including Real Time Bidding (RTB).

RTB is defined as the delivery of programmatic advertising by a real-time auction method. To support this process, there are myriad technology (Ad Tech) providers who enable advertisers to identify and target recipients of advertising delivered in real time.

The guide, written in collaboration with the DPN and PwC UK, aims to support UK businesses actively engaged in the programmatic delivery of digital advertising to ensure they protect the rights of data subjects.

It is a practical guide to the seven steps participants can take to ensure they adhere to the legal requirements and demonstrate their understanding of the regulator’s concerns. The DMA and ISBA were able to consult with the ICO during the development of the guide.

It’s designed as a reference with clearly defined sections allowing readers to read the whole document or dip in as the need arises. Where suppliers are mentioned these are noted as examples and are not recommendations.

This guidance is divided into seven clear steps:

1. Education and Understanding – a comprehensive introduction to cookies and programmatic advertising with a detailed glossary of terms.

2. Special Category Data – the ICO highlighted the importance of treating special category data with care and this section steps you through its definition and usage.

3. Understanding the Data Journey – a key challenge is being able to track how data is captured and who processes it. This section explains how to complete a Record of Processing Activities as well as introducing the IAB’s Transparency and Consent Framework.

4. Conduct a DPIA (Data Protection Impact Assessment) – the ICO noted the limited use of DPIAs in Ad Tech. This section sets out to explain what a DPIA is and when to use one, as well as giving some pointers on what questions to ask.

5. Audit the Supply Chain – the ICO highlighted that you cannot rely on contracts to provide assurance around the use of personal data. This section provides audit checklists and the questions you need answered when auditing suppliers.

6. Measure Advertising Effectiveness – the ICO have queried whether it’s necessary to use all the data collected through Ad Tech platforms. This section provides links to reference materials for improving insights into advertising effectiveness to allow for a proportionate approach to using personal data.

7. Alternatives to Third Party Cookies – what does a post third-party cookie world look like? This section provides some suggestions about alternative methods of targeting including the adoption of contextual targeting. It also provides references to some industry initiatives which are exploring different ways of targeting in a less intrusive manner.

See the full 7 Step Ad Tech Guide

GDPR: The Right of Access

The right of access is nothing new, but there are some changes ushered in by the EU General Data Protection Regulation (GDPR). There’s also the anticipation that increased awareness (and the removal of the fee) will see the number of requests received rise.

It’s crucial that employees are aware of what a Data Subject Access Request (DSAR) is and the importance of immediately passing such requests to the Data Protection Officer or relevant member of staff/team. Time is of the essence!

What is a data subject access request?

A DSAR is a request from a data subject to be provided with a copy of the personal data being processed by a Controller and an explanation of the purposes for which it is being used. A complaint or general query about how personal data is being used does not constitute a DSAR – for example, a query about why marketing is being received or where you got someone’s name from. A DSAR is specifically when someone asks to receive a copy of the personal data you may hold about them. A request does not need to be formally called a “subject access request” or “access request” to constitute one, and requests will rarely be titled as such.

A request could be sent to any department and come from a variety of sources. Individuals do not need to write a formal letter addressed to the Data Protection Officer for it to be a valid request. Requests might be submitted by email or social media and may be addressed to the “wrong” department or person.

What are the changes under the GDPR?

Less time to respond: The timescale for responding to a DSAR has been reduced from 40 days to one calendar month, representing a challenge for many organisations.

No fee: Organisations can no longer charge a £10 fee for a DSAR. However, where a request is deemed to be excessive or manifestly unfounded, organisations can charge a “reasonable fee” to cover the administrative costs of complying with it. There is also the ability to charge a “reasonable fee” if an individual requests further copies of their data. But even if you suspect a request may be malicious, this is very unlikely to be sufficient grounds for refusing to respond.

Article 15 of the GDPR sets out the information that individuals have the right to be provided with. Broadly this covers providing information about:

  • What personal data is being processed
  • The purposes for which the personal data is being processed
  • Who the personal data has been, or will be, disclosed to
  • The existence of any automated decision-making, including profiling. And, at least where this produces legal or similarly significant effects, what logic is being used for that purpose.
  • How long the data will be retained for (or at least the criteria used to determine this)

Initial Response

In order for a formal DSAR to be valid it must come from the individual themselves (or an authorised agent/parent/guardian) and needs to be accompanied by enough information to enable you to extract the personal data pertaining to the individual from your systems.

It is very important to establish that the individual asking for the information is who they say they are, to avoid the damage of inadvertently disclosing personal information to the wrong person. There have been several instances of fraudulent requests made in order to aid identity theft.

If the information the individual has provided in their request is insufficient, you should have a standard initial response process in place so you can quickly gather enough details to fulfil the request. For example you may need to:

  • request proof of ID (if the requester is an employee or ex-employee this may not be necessary if it is obvious to you who they are)
  • request proof of relationship/authority (for example if information is requested about a child or by an agent)
  • ask if they are interested in specific information (if they request ALL personal data you cannot restrict this)
  • ask what their relationship is with your organisation
  • ask if they wish to see CCTV images of them (if relevant) and request a photograph, description of clothes worn, dates of visits etc.
  • ask if they require the information to be provided in writing or whether they will accept it in an electronic form

You have one calendar month to provide your formal response to the individual.

In limited circumstances this can be extended by up to a further two months.
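
As a rough sketch of the deadline arithmetic – assuming Python and the python-dateutil library, whose relativedelta rolls back to the last day of the month when no corresponding date exists – the calculation might look like this:

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def dsar_deadline(received, extended=False):
    """One calendar month to respond; up to two further months if extended."""
    return received + relativedelta(months=3 if extended else 1)

print(dsar_deadline(date(2020, 1, 31)))        # 2020-02-29 (no 31 February)
print(dsar_deadline(date(2020, 3, 15), True))  # 2020-06-15
```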

Gathering the information

Ensure you have a standard process to efficiently check all relevant systems and liaise with other departments. A SAR covers most computerised personal data you hold (including archives and backups) and some paper records (where these are held in a systematic and structured format). Email systems will need to be checked for emails pertaining to the individual (where they are referenced by name or are identifiable).

[Update] Do you need to include deleted records? The ICO’s view in its detailed Right of Access Guidance (published Oct 2020) is: “Information is ‘deleted’ when you try to permanently discard it and you have no intention of ever trying to access it again. The ICO’s view is that, if you delete personal data you hold in electronic form by removing it (as far as possible) from your computer systems, the fact that expensive technical expertise might enable you to recreate it does not mean you must go to such efforts to respond to a SAR.”

Review the information

If no personal data is held about the individual they must be informed of this.

If the information you have gathered contains personal data relating to other individuals, you need to carefully consider (on a case-by-case basis) whether and how to redact this, or whether it is reasonable to disclose it. Such information can be disclosed with the consent of the other parties. Where consent is not feasible, you need to consider the privacy impact and/or how your duty of confidentiality to these other parties could be broken should you disclose this information. You should document any justification for disclosing personal data relating to other parties.

Your formal response

The information you provide must be in an “intelligible form”, in other words one which the average person would be able to understand. Avoid using jargon or terms that people outside the business might not understand, and explain any codes. Ensure the information you are providing covers the requirements under Article 15. When supplying the information use a traceable delivery system. If agreed with the individual, send it via secure electronic means.

And finally, keep a record of your response!