Data Protection Impact Assessments for Agile projects

November 2023

How to assess risks when a project has multiple phases

Agile methodology is a project management framework comprising several dynamic phases, known as ‘sprints’. Many organisations use Agile for software and technology development projects, which often involve the processing of personal data.

From a data protection perspective, Agile projects (and indeed other multi-stage projects) present some challenges. The full scope of data processing is often unclear at the start of a project. The team are focussed on sprint one, then sprint two, and so on. So how do you get Privacy by Design embedded into an Agile project?

Conducting a Data Protection Impact Assessment (DPIA) is a legal requirement under data protection law for certain projects. Even when a DPIA is not mandatory, it’s a good idea to consider the privacy impacts of any new processing.

Looking at a project through a privacy lens at an early stage can act as a ‘warning light’, highlighting potential risks before they materialise and when measures can still be easily put in place to reduce the risks.

If your organisation uses Agile, it’s likely you’ll need to adapt your DPIA process to work for Agile projects. Understand the overall objectives and direction of travel to get a handle on how data use will evolve and what risks might be involved.

Working together to overcome challenges

It’s important all areas of the business collaborate to make sure projects can proceed at pace, without unnecessary delays. Compliance requirements must be built into Agile plans alongside other business requirements – just as ‘Privacy by Design’ intended.

Those with data protection responsibilities need project management teams to engage with them at an early stage, to explore the likely scope of processing and start to identify any potential privacy risks, while there’s still time to influence solution design.

This isn’t always easy. Given the fluid nature of Agile, which is its great strength, there is often very limited documentation available for review to aid Compliance assessments.

Privacy questions often can’t be answered at the start – there may be many unknowns. So it’s key to agree what types of data will be used, for what purposes, and when more information will be available for the DPIA – crucially, before designs are finalised. Timings for assessment need to be aligned to the appropriate sprints.

As many companies have found, embedding privacy awareness into the company culture is a big challenge, and ensuring Data Protection by Design is a key consideration for tech teams at the outset is an ongoing task.

Example: data warehouse

Organisations with legacy data systems might want to build a data warehouse / data lake to bring disparate data silos together under one roof, gain new insights and drive new activity. It’s important to assess any privacy impacts this new processing creates.

Using Agile, new capabilities may be created over several development phases. So it’s important to conduct an initial assessment at the start, but stay close as the project evolves and be ready to collaborate again, in line with sprint timings – before data is transferred or new solutions are created.

Top tips for ‘Agile’ DPIAs

Here are my top tips for a fluid DPIA process:

1. DPIA training & guidance – make sure relevant teams, especially IT, Development and Procurement, all know what a DPIA is (in simple layman’s terms) and why it’s important. They need to recognise the benefits of including privacy in scope from the start (i.e. ‘by Design’).

2. Initial screening – develop a quick-fire set of questions for the business owner or project lead, which will give the key information you need, such as:

  • the likely personal data being used
  • any special category data, children’s data or vulnerable people’s data
  • the purposes of processing
  • security measures… and so on

Once it has been identified that personal data is involved, you can start assessing the potential risks, if any. As odd as this may sound, it is not uncommon for tech teams to be unsure at the beginning of a project whether personal data (as defined under GDPR to include personal identifiers) will in fact be involved.
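To make this tangible, here’s a minimal sketch of how an initial screening check might be captured in code. Everything here is illustrative – the question set, field names and decision rule are our assumptions, not a legal test.

```python
# Hypothetical initial DPIA screening check - the questions and the
# decision rule are illustrative only, not a legal standard.

from dataclasses import dataclass

@dataclass
class ScreeningAnswers:
    uses_personal_data: bool      # any personal identifiers involved?
    special_category_data: bool   # e.g. health or biometric data
    children_or_vulnerable: bool  # children's or vulnerable people's data
    novel_technology: bool        # e.g. profiling, tracking, AI
    large_scale: bool             # large-scale processing or monitoring

def needs_dpia_lite(a: ScreeningAnswers) -> bool:
    """Flag the project for further assessment if personal data is
    involved and any higher-risk indicator is present."""
    if not a.uses_personal_data:
        return False
    return any([a.special_category_data, a.children_or_vulnerable,
                a.novel_technology, a.large_scale])

# Example: a sprint introducing health data gets flagged
print(needs_dpia_lite(ScreeningAnswers(True, True, False, False, False)))  # True
```

The value isn’t the code itself – it’s recording the answers sprint by sprint, so unknowns stay visible and can be revisited once the relevant sprint makes them answerable.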

3. DPIA ‘Lite’ – if there are potential risks, develop a series of questions to evaluate compliance against the core data protection principles of the GDPR.

The Agile environment can prove challenging but also rewarding. Adopting a flexible DPIA process which works in harmony with Agile is a positive step forward for innovative companies, allowing your business to develop new solutions while protecting individuals from data protection risks, as well as protecting your business from any possible reputational damage.

Call for ban on use of live facial recognition

October 2023

Live facial recognition is being used by UK police forces to track and catch criminals and may be used by retailers to crack down on shoplifting. Is live facial recognition a force for good or a dangerous intrusion on people’s privacy?

The announcement by the UK Government of plans for police to access passport photos to help catch criminals has led to a call for an immediate ban on live facial recognition surveillance.

The accuracy of the algorithms behind this technology is being questioned, as are the privacy implications. Where facial recognition is used, there needs to be a strong justification for its use and robust safeguards in place to protect people.

What is live facial recognition?

Live facial recognition (LFR) is a broad term used to describe technologies that identify, catalogue and track human faces. The technology can be used in many ways, but probably the biggest topic of debate relates to facial images, captured via CCTV or photographs, which are processed to extract biometric identifiers.

These identifiers typically include the unique ratios between an individual’s facial features, such as their eyes, nose and mouth. These are matched to an existing biometric ‘watchlist’ to identify and track specific individuals.
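To illustrate the mechanics in very simplified form, here’s a sketch of one-to-many watchlist matching. Real LFR systems use proprietary face-embedding models and carefully tuned thresholds; the toy vectors and threshold below are stand-ins for illustration only.

```python
# Illustrative sketch of one-to-many watchlist matching.
# Real systems derive 'templates' from face-embedding models;
# these toy vectors and the 0.8 threshold are arbitrary stand-ins.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return the best-matching watchlist entry above the threshold, or None.
    Set the threshold too low and you get false positives; too high, misses."""
    best_id, best_score = None, threshold
    for person_id, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy example with three-dimensional "templates"
watchlist = {"subject_42": np.array([0.9, 0.1, 0.2])}
probe = np.array([0.88, 0.12, 0.19])  # face captured from CCTV
print(match_against_watchlist(probe, watchlist))  # subject_42
```

That threshold is where the privacy debate bites: it directly trades false positives (innocent people flagged) against missed matches.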

Use of LFR by UK police forces

The Home Office says facial recognition has a ‘sound legal basis’, has already led to criminals being caught and could also help the police in searching for missing or vulnerable people.

Facial recognition cameras are being used to scan the faces of members of the public in specific locations. Currently UK police forces using the technology tell people in advance about when and where LFR will be deployed, with physical notices alerting people entering areas where it’s active.

However, the potential for police to access a wider range of databases, such as passport records, has led a cross-party group of politicians and privacy campaigners to say both police and private companies should ‘immediately stop’ their use of such surveillance, citing concerns about human rights and discrimination.

Silkie Carlo, Director of Big Brother Watch, says: “This dangerously authoritarian technology has the potential to turn populations into walking ID cards in a constant police line-up.”

It’s worth noting that in 2020 the Court of Appeal in the UK ruled South Wales Police’s use of facial recognition was unlawful.

Use of LFR by retailers

Some of the UK’s biggest supermarkets and retailers are also turning to face-scanning technology in a bid to combat a significant rise in shoplifting.

Earlier this year the ICO announced its findings from an investigation into the live facial recognition technology provided to the retail sector by the security firm Facewatch. The aim of the technology is to help businesses protect their customers, staff and stock. People’s faces are scanned in real time as they enter a store and an alert is raised if a subject of interest has entered.

During its investigation the ICO raised concerns, including those surrounding the amount of personal data collected and the need to protect vulnerable people by making sure they don’t become a ‘subject of interest’. Based on information provided by Facewatch about improvements made, and ongoing improvements, the ICO concluded the company had a legitimate purpose for using people’s information for the detection and prevention of crime.

Collaboration between police and retailers

Ten of Britain’s largest retailers, including John Lewis, Next and Tesco, are set to fund a new police operation. Under Project Pegasus, police will run CCTV pictures of shoplifting incidents provided by the retailers against the Police National Database.

The risk of false positives

The use of Live Facial Recognition raises significant privacy and human rights concerns, such as when it is used to match faces to a database for policing and security purposes.

A 2019 study of facial recognition technology in the US by the National Institute of Standards and Technology (NIST) discovered that systems were far worse at identifying people of colour than white people. While results were dependent on the algorithms used, NIST found that some facial-recognition software produced false-positive rates 10 to 100 times higher for black and Asian people than for white people.

NIST also found the algorithms were worse at identifying women than men. Clearly there are huge concerns to be addressed, brought into sharp focus now with the Black Lives Matter movement. Interestingly, there was no such dramatic difference in false positives in one-to-one matching between Asian and white faces for algorithms developed in Asia.
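A quick back-of-the-envelope calculation shows why these differences matter at scale. The figures below are our own illustrative assumptions, not NIST’s:

```python
# Back-of-envelope illustration (assumed figures, not NIST's):
# even a small false-positive rate produces many wrongful alerts
# at scale, and a 10x higher rate for one group multiplies that burden.

faces_scanned_per_day = 20_000
base_fpr = 0.001       # 0.1% false-positive rate for one group
elevated_fpr = 0.01    # a 10x higher rate for another group

print(faces_scanned_per_day * base_fpr)      # 20.0 wrongful alerts a day
print(faces_scanned_per_day * elevated_fpr)  # 200.0 wrongful alerts a day
```

Every one of those alerts is potentially an innocent person stopped and asked to prove who they are.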

Privacy concerns

Any facial recognition technology capable of uniquely identifying an individual is likely to be processing biometric data (i.e. data which relates to the physical, physiological or behavioural characteristics of a person).

Biometric data falls under the definition of ‘special category’ data and is subject to strict rules. To compliantly process special category data in the UK or European Union, a lawful basis must be identified AND a condition must also be found in GDPR Article 9 to justify the processing. In the absence of explicit consent from the individual, however, which is not practical in most LFR applications, it may be tricky to prove the processing meets Article 9 requirements.

Other privacy concerns include:

  • Lack of transparency – an intrusion into the private lives of members of the public who have not consented to and may not be aware of the collection or the purposes for which their images are being collected and used.
  • Misuse – images retrieved may potentially be used for other purposes in future.
  • Accuracy – inaccuracies inherent within LFR reference datasets or watchlists may result in false positives and the potential for inaccurate outcomes which may be seen as biased or discriminatory.
  • Automated decision-making – if decisions which may significantly affect individuals are based solely on the outcomes of live facial recognition.

Requirement to conduct a Data Protection Impact Assessment (DPIA)

A DPIA must be conducted before organisations or public bodies begin any type of processing that is likely to result in a ‘high risk’ to the rights and freedoms of individuals.

This requirement includes:

  • the use of systematic and extensive profiling with significant effects on individuals;
  • the processing of special category or criminal offence data on a large scale; and
  • the systematic monitoring of publicly accessible places on a large scale.

In our view, any planned use of LFR is very likely to fall under the requirement for the organisation or public body to conduct a DPIA in advance of commencing the activity and take appropriate steps to ensure people’s rights and freedoms are adequately protected.

So where does this leave us?

Police forces and other organisations using LFR technology need to properly assess their compliance with data protection law and guidance.

This includes how police watchlists are compiled, which images are used and for what purpose, which reference datasets they use, and how accurate and representative of the population these datasets are. The potential for false positives or discriminatory outcomes should be addressed.

Any organisation using LFR must be ready to demonstrate the necessity, proportionality and compliance of its use.

Meanwhile, across the Channel, members of the European Parliament have agreed to ban live facial recognition using AI in a draft of the EU’s Artificial Intelligence Act. Will the UK follow suit?

Suppliers – why your contracts and security are important

Processors and controllers are both accountable

Do you provide a service to clients and handle your clients’ personal data? If you’re acting as a processor, a recent GDPR fine serves as a helpful reminder to be sure you have all your ducks in a row.

There’s a clear warning here: you shouldn’t just assume the contracts your clients ask you to sign are okay, nor can you just say you have robust security measures in place – you actually have to have them!

In this recent case a software publisher, acting as a processor for its clients, was fined €1.5 million by the French regulator (CNIL) following a data breach involving sensitive health data.

It was found data was exfiltrated by unauthorised parties from a poorly protected server. In a nutshell the key findings were:

  • Significant gaps in the processor’s security processes
  • Contractual documentation which failed to include mandatory obligations required under Article 28 of GDPR.

It’s worth noting the fine was based on both these counts. The ruling makes it clear processors should be wary of relying on their clients to make sure contractual terms are up to scratch. It’s the responsibility of both parties.

Here’s a quick recap on how suppliers can minimise their risks.

Getting the relationship clear

The most important first step is to establish the relationship between your company and another.

  • Are you handling a client’s data on their behalf, under their instruction, to provide a service to them?
  • Are you acting as controller, clearly determining how the personal data will be used for your own purpose(s)?
  • Are you both? i.e. acting as a controller in certain circumstances, but a processor for specific services you provide to clients.

What are the contractual requirements?

Once you’re clear you are a processor, acting under your client’s instructions, the law states your arrangements with clients must be covered by a binding agreement. EU and UK GDPR set out specific provisions which must be written into such contracts. In brief these are as follows:

1. Types of personal data & categories of data subject

The contract needs to specify what types of personal data you’ll be handling. It should also include details of whether this data relates to your client’s employees, patients, customers, and so forth.

2. Nature, purpose, duration of processing

The contract should describe the nature of the service(s) you provide, what purpose(s) this serves and the term of the contract. The agreement should cover instructions from your client of what you are permitted to do with their data.

3. The rights and duties of each party

The obligations of both parties should be clearly defined. For example, the client’s obligation to have a lawful basis for processing, its responsibility to fulfil individual privacy rights and your commitment as a supplier to not use your client’s data for any other purpose.

4. Technical and organisational measures

As a supplier you need to provide sufficient guarantees that you will implement proportionate technical and organisational measures to meet the requirements of UK/EU GDPR.

5. Sub-processors

If you engage other companies (‘sub processors’) to support you in delivering your services, you’ll need specific or general written authorisation from your client(s). If you make any changes to which sub-processors you use (including software providers), you’ll need to tell your client and give them the opportunity to object. Contractual terms should stipulate that you are accountable for your sub-processors.

6. International transfers

If relevant, the agreement should include details and provisions for any transfers of personal data to a third country – for example, if you are based in the UK, a transfer to any other country. This would include details of any sub-processors based outside the UK. A transfer is often associated with the act of sending or transmitting personal data from one country to another, but it should be noted the definition also covers cases where personal data is made ‘available’ – in other words, where it can be accessed from a third country.

7. Duty of confidentiality

There must be a confidentiality clause, which commits you to ensuring any of your staff authorised to access the client’s data are committed to a duty of confidentiality or are under a statutory obligation of confidentiality.

8. Assisting your clients

The contract should cover your commitment to assisting your clients, where necessary, with handling individual privacy rights, handling data breaches and conducting data protection impact assessments.

9. Return or destruction of data

It should be clear what happens to the client’s data when the contract ends. Does the client want you to return the data or destroy it?

10. Audits and inspections

As a processor you must agree to make available all information necessary to demonstrate your compliance and agree to audits, including inspections by your client or their authorised auditor.

Processors have obligations

This recent CNIL fine shows you can’t just sign a contract, sit back and relax.

As a processor you’re responsible for your sub-processors, data transfers, staff training and confidentiality, assisting your clients when necessary, and so forth. You have to be sure to implement the technical and organisational measures you said you would to protect your clients’ data.

While some clients will ask you to jump through multiple hoops as part of their due diligence process, making you clearly demonstrate your security measures are robust, others may not be so picky. But that doesn’t release you from your responsibilities.

The law and this recent fine make it clear processors can be held liable. In the event of a breach, your contractual arrangements and internal practices could come under rigorous scrutiny.

Overcoming the challenges of data retention

January 2022

Clearing out data you no longer need

How long should we keep our data? It sounds simple enough, but it’s a question many businesses struggle with.

The UK GDPR tells us personal data should only be kept ‘as long as necessary for specified purposes’. So if your organisation is found to be storing data you don’t really need any more, you could be subject to unwelcome scrutiny.

Perhaps the main risk here is if your business suffers a data breach. It could become far more serious if you couldn’t provide a suitable justification for why you were still holding onto unnecessary data which was included in the breach. In effect, it means two violations of the law in one fell swoop! If you have to notify the individuals affected, what would you say?

Tackling the data we’re holding too long

This does require some thought and planning. As a pre-requisite, you’ll need to know what personal data your organisation holds and what purposes it’s being used for.

Creating a data retention policy is straightforward enough, but developing a record retention schedule can be more complex.

Most organisations use personal data for multiple purposes. You need to take account of each specific purpose and identify the appropriate lawful basis for that processing, before you consider an appropriate retention period. An up-to-date Record of Processing Activities can be a real asset here.
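To show what a schedule can look like, here’s a hypothetical fragment expressed in code. The record types, periods, lawful bases and expiry actions are illustrative assumptions only – yours must reflect the laws and business needs that actually apply to your data.

```python
# Hypothetical fragment of a record retention schedule.
# All record types, periods, lawful bases and actions are examples.

from datetime import date, timedelta

RETENTION_SCHEDULE = {
    "ex_employee_records": {
        "retention_days": 6 * 365,   # statutory minimum (discussed below)
        "lawful_basis": "legal obligation",
        "action_on_expiry": "delete",
    },
    "marketing_contacts": {
        "retention_days": 2 * 365,   # a judgement call - be ready to justify it
        "lawful_basis": "legitimate interests",
        "action_on_expiry": "anonymise",
    },
}

def is_due_for_action(record_type: str, created: date) -> bool:
    """True once a record has passed its retention period."""
    rule = RETENTION_SCHEDULE[record_type]
    return date.today() > created + timedelta(days=rule["retention_days"])

print(is_due_for_action("marketing_contacts", date(2019, 6, 1)))  # True
```

Note the ‘action on expiry’ field – it’s just as important as the period itself, and it’s exactly what you’ll need to pass on to your processors (see below).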

Deciding on suitable retention periods

Firstly, check if there’s a law which mandates how long certain data must be kept. Laws may dictate minimum or maximum retention periods.

For example, in the UK employment law requires data on ex-employees to be kept for at least 6 years after they leave the business. In certain situations the retention period may be longer. For example, let’s imagine you’re a building firm, your employees come into contact with hazardous substances as part of their job, and you carry out health monitoring. The retention period for these records is much longer.

In many scenarios, however, there are no relevant laws which specify how long the data must be held. Examples include marketing, sales and account management records. In these situations organisations need to judge for themselves what an appropriate retention period should be, and be ready to justify their decision. Take a balanced and reasonable approach, based on your reasons for processing that data.

Deciding what period is ‘necessary’

Where there is no statutory requirement, we suggest speaking with internal data owners and relevant functions. The following questions should help you reach an appropriate decision on a period you can justify:

a. Are there any industry standards, codes of practice or known good-practice guidelines?
b. Does the product lifecycle have an impact on retention?
c. What are the business drivers for retention? Are they justifiable?
d. What evidence is there that the data is needed for the proposed amount of time?
e. Is there potential for litigation if data is kept too long (or deleted too soon)?
f. Is it necessary to keep personal data to handle complaints?

Don’t forget your processors / service providers

Controllers who use service providers acting as data processors should make sure they provide clear contractual instructions about their data retention requirements.

Tell them the retention periods you need and give specific actions they should take when a retention period ends. For example, should they delete the data, return it to you or anonymise it? These may be listed in a data schedule, appended to the main contract or agreement.

Key takeaways

Data retention can be tackled effectively if you get key stakeholders across the business engaged and involved. Agree retention periods and get started on implementing them.

For more tips, tools and templates…

Why not download DPN’s Data Retention Guide?


Is working from home a security nightmare?

September 2021

Yes! Here’s our checklist of what to do and watch out for with your WFH teams.

I was on yet another Zoom call with my DPN colleagues the other day and we were baffled by some dreadful echoing of our voices. Everything I said bounced back at me.

We logged out, logged back in again but nothing changed. I turned my phone off – no change. Then I remembered that I was sitting in the kitchen with my Alexa turned on. When I unplugged Alexa, the echo disappeared.

That felt odd – we concluded that my Alexa was listening for instructions and so was listening in on our call. That felt creepy!

As we all work from home, this led to a discussion about whether we should put in place additional measures to maintain security over and above the work we had recently done to achieve Cyber Essentials.

The Cyber Essentials questionnaire doesn’t mention Alexa-style devices, or say much about the location of your workspace when you’re WFH.

With thanks to the ICO guidance and the Cyber Essentials documentation, here is our checklist for safely working from home.

1. Policies

Make sure you have policies and procedures in place which all your employees must adhere to, and check they’ve read and understood them. Even better, test them on it.

2. BYOD (Bring your own device)

Decide whether employees can use their own devices. Some organisations have very firm “no personal devices” policies, but others are more ambiguous. It is an inescapable fact that letting employees use their own devices is high risk; you’re mixing up business-related apps and software with random stuff your employee may have downloaded from the web.

3. Network Access

Decide how employees are going to access business servers – is there a VPN in place? Do you need strong security protocols? It’s important to be proportionate with security measures. Obviously, a bank will feel different to a consultancy that handles very little personal data.

4. WFH in coffee shops/cafes

Do your employees ever work outside the home? In a café, for instance? Should you supply them with privacy screens for their devices? Have they been briefed on the importance of keeping their devices secure in a public space and never leaving them unattended?

5. The home environment

Does your WFH employee share their home with others? Are they using their personal broadband connection? If so, make sure they change the original passcode on the Wi-Fi to make access more secure. Can they lock their rooms or lock their devices away? Are there any Alexa style devices nearby?

In some instances, you may decide there is no circumstance under which an employee can work from home if the data they’re handling is too sensitive. Make sure you risk assess who can and cannot work at home and provide clear guidance.

6. 2FA and MFA

Where possible, enforce two-factor or multi-factor authentication. There is often a lot of resistance to this additional security but, if available, it should be mandatory.
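For the curious, here’s a minimal sketch of how app-based 2FA works under the hood, using time-based one-time passwords (TOTP). It assumes the third-party pyotp library; in reality the secret is shared with the user’s authenticator app at enrolment, usually via a QR code.

```python
# Minimal TOTP sketch - the mechanism behind most authenticator-app 2FA.
# Assumes the third-party 'pyotp' library (pip install pyotp).

import pyotp

# Enrolment: generate a per-user secret and store it server-side;
# the same secret goes into the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user types the 6-digit code their app currently shows,
# and the server verifies it against the same secret and time window.
code = totp.now()         # stands in for the code the user types
print(totp.verify(code))  # True - second factor accepted
```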

7. Passwords

Think about password length and complexity – I suspect a surprising number of people still use simple passwords like, say, “12345”. Passwords should be unique and complex, with a mixture of letters, numbers and symbols and, ideally, a change enforced on a regular basis.

Increasingly it makes sense to use a password manager to keep all your unique and complex passwords in one place. You still need one master password for that system, but at least that’s the only one you need to remember.
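If you want to enforce rules like these programmatically, a simple check might look like the sketch below. The minimum length and required character classes are our assumptions, not a formal standard:

```python
# Illustrative password check reflecting the rules above;
# the 12-character minimum and character classes are assumptions.

import re

def is_acceptable(password: str, min_length: int = 12) -> bool:
    return (len(password) >= min_length
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^a-zA-Z0-9]", password) is not None)

print(is_acceptable("12345"))            # False - short and simple
print(is_acceptable("Tr!cky-Horse-42"))  # True - long and mixed
```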

8. Software updates

Are you able to update the user’s software remotely? If they’re using their own device, how do you ensure software is up to date? What safeguards are in place?

9. Cloud Storage

How are documents and files stored? Is there a cloud-based storage facility such as SharePoint? How is it accessed and who controls the access? There are plenty of opportunities to inadvertently share a document with multiple people by allowing the sharing of links. Try not to let that happen.

10. Email

When using email, all the usual safeguards should apply when it comes to phishing attacks. The IT team should be carrying out tests on a regular basis and provide up to date training on what to watch out for.

Even though our cabinet ministers seem to do it, never ever use your personal email account for work-related correspondence!

How does this all add up?

If you do nothing else, consider the following actions:

  • Gain Cyber Essentials or Cyber Essentials Plus certification: Ensure you’ve carried out the Cyber Essentials evaluation. It’s particularly important for small businesses, but large organisations have found it useful too.
  • Conduct a DPIA: Carry out a Data Protection Impact Assessment. This can identify the circumstances under which people are working from home and introduce measures to mitigate the identified risks.
  • Create or bolster your Infosec policy: Create and maintain a robust and proportionate information security policy and ensure all employees are familiar with its contents. Maybe a short test would work well?