Is YOUR data training THEIR AI?

In our personal and working lives we're under a barrage of notifications inviting us to use new AI functionality. Sometimes it's not even a feature we actively turn on; it's on by default and we need to take steps to disable it or opt out. The problem is we often don't know how this AI actually works, what it might do, what data it uses, or what data is used to train it.

Recently LinkedIn announced it has started sharing user-generated content for LLM training. You may be happy with this, but if you aren't, you need to actively go into your data privacy settings and switch it off.

More broadly, many organisations are being encouraged to take advantage of shiny new AI capabilities offered by their existing software providers. HR, Finance, IT and CRM software can seemingly do so much more if the latest AI tool is enabled. It's very tempting to give it a try. And I suspect many data protection teams are struggling to keep up with parts of the business which have drifted into using AI tools without much deliberation.

We need to be aware that using AI for seemingly innocuous purposes can have unexpected consequences. We've written about the risks to consider when using AI to transcribe or record meetings. Did you know there's a feature in MS Teams which can automatically detect when an employee is connected to the company wi-fi and update their location to 'in the office'? This may seem like a simple and useful feature to switch on. But in essence it's a form of workplace tracking, and it raises some questions, not least: is it proportionate and lawful?

Even with our existing suppliers, we'd be wise to conduct some due diligence. AI functionality isn't always a straightforward extension of an existing service. We should:

■ assess the benefits and risks, be clear about our objectives, and establish whether we are a controller or joint controller;
■ make sure our activities are lawful, fair and transparent;
■ be sure our data is still being processed by the same party and in the same country;
■ understand whether our data will be used to train the software provider's models; and
■ where data is anonymised or aggregated, be confident this is effective enough to prevent risk.

It may feel daunting, but we should try to have some level of understanding of how a third-party supplied AI system works. Ultimately, we're responsible for complying with data protection law for any personal data we allow to be used in or by an AI system.

Recently I was reviewing the AI usage of a client's existing software provider. They were ambiguous about the use of the client's personal data to train their own models. It became clear processing was no longer taking place in Ireland, but in the United States and India. I've seen other AI software where it transpired it hadn't been developed by the software provider themselves; they were using AI provided by a third party (with whom the data is shared). Which made me wonder: is the AI provider using that data for its own purposes, such as AI training?

Ideally we should be asking AI providers, whether they're new or existing suppliers, to work with us to conduct a Data Protection Impact Assessment. If they're reluctant to help, or unable to answer key questions, this should raise concerns.

I'm not saying all AI tools are inherently a bad thing. There are many benefits to be gained! Just do some digging, and keep your eyes open.

Read more: How to govern your organisation's use of AI
GDPR RoPA simplification

Will EU proposals to change Records of Processing Activities requirements have an impact in practice?

As GDPR passes its 7th birthday, there's been a flutter of excited commentary about European plans to make changes to the ground-breaking data protection law; in particular, potential amendments aimed at easing the compliance burden on small to medium-sized businesses. So far, it's fair to say the proposed changes from the European Commission are far from earth-shattering (albeit there could be more in the pipeline).

A key proposal relates to Article 30, Records of Processing Activities. The obligation to keep a RoPA would no longer apply to organisations with fewer than 750 employees, provided their processing activities are unlikely to pose a 'high risk' to the rights and freedoms of individuals. The proposal also clarifies that processing special category data for purposes related to employment, social security and social protection would not, on its own, trigger the requirement to maintain Article 30 records.

For comparison, the existing exception only applies to organisations with fewer than 250 employees, unless:

⏹ the processing is likely to result in a risk to the rights and freedoms of data subjects;
⏹ the processing is not occasional; or
⏹ the processing includes special category data or personal data relating to criminal convictions and offences.

What impact might this RoPA change have?

As many organisations process special category data (even if just for their employees), and processing activities are often routine rather than occasional, the current exception for smaller companies is limited in scope. The proposed wider exemption would clearly apply to far more organisations.

I can absolutely see why the Commission has homed in on RoPA requirements, as in my experience many organisations struggle to maintain an up-to-date RoPA, or don't have one at all. But how helpful would this change actually be? In practice, organisations subject to GDPR will still need to assess whether their processing activities involve 'high risk' to individuals. To do this they will need to weigh up their purpose(s) for processing, their lawful basis, how long they keep personal data, who it is shared with, whether any international data transfers are involved, what security measures are in place, and so on. It seems a bit of a catch-22: a RoPA is a great way of capturing this vital information and clearly ascertaining where risk might occur.

Alongside this, organisations will still need to meet transparency requirements and the right to be informed. And, yes, you guessed it, an accurate RoPA is a very helpful 'checklist' for making sure a privacy notice is complete. We've written more about the benefits of a RoPA here.

Importantly, if this proposed change goes ahead, it won't apply to organisations which fall under the scope of UK GDPR (unless the UK Government decides to adopt a similar change). Notably, fairly significant changes to UK GDPR's accountability requirements were on the cards under the previous Conservative Government's data reform bill. However, seen as too controversial, these were swiftly dropped after the election and do not appear in the new Labour Government's Data (Use and Access) Bill (DUA). It's possible the UK could regret not being more ambitious in the DUA Bill; there's an obvious irony, given oft-heard criticisms of EU overregulation, that here the EU's easing of certain requirements could leave UK organisations with more onerous rules.
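As an aside, it's worth seeing what a RoPA entry actually captures, because it doubles as the evidence needed for that 'high risk' assessment. Here's a minimal sketch in Python, loosely following the contents listed in Article 30(1); the field names, the example values and the inclusion of lawful basis (commonly recorded alongside, though not strictly an Article 30 item) are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class RoPAEntry:
    """Illustrative RoPA entry, loosely following GDPR Article 30(1)."""
    activity: str                        # name of the processing activity
    purposes: list[str]                  # purposes of the processing
    lawful_basis: str                    # commonly recorded, though not an Art. 30 item
    data_subject_categories: list[str]   # e.g. employees, customers
    personal_data_categories: list[str]  # e.g. contact details, salary
    special_category_data: bool          # flags potentially higher-risk processing
    recipients: list[str]                # categories of recipients
    third_country_transfers: list[str]   # international transfers, if any
    retention_period: str                # envisaged time limits for erasure
    security_measures: str               # general description of measures

payroll = RoPAEntry(
    activity="Payroll",
    purposes=["paying staff", "statutory tax reporting"],
    lawful_basis="legal obligation",
    data_subject_categories=["employees"],
    personal_data_categories=["name", "bank details", "salary"],
    special_category_data=False,
    recipients=["payroll bureau", "tax authority"],
    third_country_transfers=[],
    retention_period="6 years after employment ends",
    security_measures="role-based access controls, encryption at rest",
)
```

One entry like this per processing activity is exactly the information an organisation would need anyway to judge whether an activity is 'high risk', which is the catch-22 described above.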
GDPR: Consent and why records are crucial

The ICO has fined a telemarketing firm £90k for its inability to demonstrate valid and specific consent had been collected from the people it contacted. Data was collected directly, via the telemarketer's website, and via a third-party survey company. Crucially, the firm couldn't produce evidence of consent.

This led me to think about other organisations; you may have gone to great efforts to make sure the consent you collect meets the GDPR standard, but are you keeping adequate records? Here, the old legal adage applies: 'If it isn't written down, it didn't happen.' If your consent is subject to regulatory scrutiny, proof is highly likely to be requested. A customer might ask for evidence, and could escalate a complaint if you're unable to produce it.

So, what records do we need to keep? Here's a refresher on the consent rules and how to retain adequate evidence. For simplicity's sake, when I refer to GDPR in this article I mean both GDPRs – the EU and UK flavours.

Consent is ONE of SIX lawful bases for processing

Consent is just one of six lawful bases. GDPR requires organisations to select an appropriate lawful basis for each purpose for processing personal data. They're all equally valid; no single basis is better than another. You should choose the most appropriate basis for each activity. Often consent won't be appropriate, but sometimes consent is required by law for certain activities. Just be mindful: don't rely on consent if another lawful basis would be more appropriate. But also be careful not to shoe-horn your activities into another lawful basis (such as legitimate interests) when consent really would be the best approach, or is legally required.

What constitutes valid consent

GDPR defines consent as "any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her". Let's break this down…

Freely given consent

■ People must be given a genuine choice
■ People should be able to refuse to give their consent without detriment
■ Consent should be easy to withdraw
■ Consent shouldn't be bundled into T&Cs, unless necessary for the service

It's also sometimes important to weigh up any 'imbalance of power' over the individual whose consent you seek. For example, consent may not be freely given if the individual feels they don't really have a choice. Consent can therefore be tricky in employer-employee relationships, if staff might feel a degree of pressure, or feel they will be penalised or treated differently if they refuse.

Specific and informed consent

■ It must be clear who people are giving their consent to. The organisation relying on the consent must be clearly identified. If you want to rely on consent collected for you by a third party, your organisation must be named at the time consent is collected.
■ Consent must specifically cover all of the purposes for which it's being collected. Separate consent should be collected, wherever possible, for different activities – for example, collecting separate marketing consents for different marketing channels. This isn't a hard and fast rule, and isn't required if it would be unduly disruptive or the activities are clearly interdependent.
■ It must be clear people can withdraw their consent at any time (and the ICO advises you include details of how to do so).
Remember, there's specific information you'll always need to provide when you collect people's personal details. There are distinct transparency requirements and people have the right to be informed. You may choose to take a layered approach, and it's advisable to always have a clear link to a Privacy Notice (aka Privacy Policy), or details of how to access it.

Consent by an unambiguous indication and clear affirmative action

Consent must be given by a deliberate and specific action to opt in or agree. For example: ticking an opt-in box, clicking 'submit', signing a statement, or giving verbal confirmation. Failing to opt out is not consent. Pre-ticked boxes are not consent. For more information see the ICO consent guidance, which covers how to collect consent, how to manage requests to withdraw, and more.

Evidence of consent

GDPR states: "Where processing is based on consent, the controller shall be able to demonstrate that the data subject has consented to processing of his or her personal data." This means organisations must have an audit trail to meet their accountability obligations. This is what the telemarketing firm failed to grasp. In practice, it means keeping records of:

■ Who consented, e.g. their name or other identifier.
■ When they consented, e.g. an online time-stamped record, a copy of a dated document, or a note of the time and date verbal consent was given.
■ What they were told at the time, e.g. a copy of the consent statement used, along with any separate privacy notice or other privacy information in use at the time.
■ How consent was given, e.g. a copy of the data capture form or a note of a verbal conversation.
■ Any withdrawal of consent, and when.

(A minimal sketch of what such a record might look like appears at the end of this article.)

This is why we recommend, when you're updating consent statements or privacy notice(s), keeping copies of older versions and the dates they were operative. This doesn't need to extend to keeping copies of every web form, but records held on your CRM or other relevant system need to be accurate. The ICO guidance on keeping records of consent is a useful resource.

Consent isn't easy

Collecting valid consent can feel like a minefield. It means carefully ticking off requirements and keeping evidence. This isn't hard once you've established a routine and got into the habit of thinking 'that needs keeping hold of.' Getting this right means you'll breathe a sigh of relief if you're ever subjected to scrutiny.

For more detail on when consent is legally required under UK ePrivacy law for marketing activities, see our guides to the email marketing rules and telemarketing rules.
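As promised above, here's a minimal sketch of what a consent record might capture. The field names and structure are illustrative assumptions (in Python), not a prescribed format – what matters is that the who, when, what and how, plus any withdrawal, are all evidenced:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    """Illustrative audit-trail entry for a single consent."""
    subject_id: str                  # who consented: name or other identifier
    consented_at: datetime           # when they consented (time-stamped)
    consent_statement_version: str   # what they were told: versioned statement
    privacy_notice_version: str      # the privacy notice in force at the time
    channel: str                     # how consent was given, e.g. web form, phone
    purposes: list[str]              # what the consent covers
    withdrawn_at: Optional[datetime] = None  # set if/when consent is withdrawn

    def is_active(self) -> bool:
        """Consent only stands until it is withdrawn."""
        return self.withdrawn_at is None

record = ConsentRecord(
    subject_id="customer-10492",
    consented_at=datetime(2024, 3, 1, 14, 32),
    consent_statement_version="marketing-consent-v3",
    privacy_notice_version="privacy-notice-2024-02",
    channel="web form",
    purposes=["email marketing"],
)
```

Note the versioned statement and notice fields: they only work as evidence if you keep dated copies of the older versions themselves, as recommended above.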
Data Protection Impact Assessments for Agile projects

How to assess risks when a project has multiple phases

Agile methodology is a project management framework comprising several dynamic phases, known as 'sprints'. Many organisations use Agile for software and technology development projects, which often involve the processing of personal data. From a data protection perspective, Agile (and indeed other multi-stage projects) presents some challenges. The full scope of data processing is often unclear at the start of a project; the team are focussed on sprint one, then sprint two, and so on. So how do you get Privacy by Design embedded into an Agile project?

Conducting a Data Protection Impact Assessment (DPIA) is a legal requirement under data protection law for certain projects. Even when a DPIA is not mandatory, it's a good idea to consider the privacy impacts of any new processing. Looking at a project through a privacy lens at an early stage can act as a 'warning light', highlighting potential risks before they materialise, when measures can still easily be put in place to reduce them.

If your organisation uses Agile, it's likely you'll need to adapt your DPIA process to work for Agile projects. Understand the overall objectives and direction of travel to get a handle on how data use will evolve and what risks might be involved.

Working together to overcome challenges

It's important all areas of the business collaborate to make sure projects can proceed at pace, without unnecessary delays. Compliance requirements must be built into Agile plans alongside other business requirements – just as 'Privacy by Design' intended. Those with data protection responsibilities need project management teams to engage with them at an early stage, to explore the likely scope of processing and start to identify any potential privacy risks while there's still time to influence solution design.

This isn't always easy. Given the fluid nature of Agile, which is its great strength, there is often very limited documentation available for review to aid compliance assessments. Privacy questions often can't be answered at the start – there may be many unknowns. So it's key to agree what types of data will be used, for what purposes, and when more information will be available for the DPIA – crucially, before designs are finalised. Timings for assessment need to be aligned to the appropriate sprints. As many companies have found, embedding privacy awareness into the company culture is a big challenge, and ensuring Data Protection by Design is a key consideration for tech teams at the outset is an ongoing task.

Example: data warehouse

Organisations with legacy data systems might want to build a data warehouse / data lake to bring disparate data silos together under one roof, gain new insights and drive new activity. It's important to assess any privacy impacts this new processing creates. Using Agile, new capabilities may be created over several development phases. So it's important to conduct an initial assessment at the start, but to stay close as the project evolves and be ready to collaborate again, in line with sprint timings – before data is transferred or new solutions are created.

Top tips for 'Agile' DPIAs

Here are my top tips for a fluid DPIA process:

1. DPIA training & guidance – make sure relevant teams, especially IT, Development and Procurement, all know what a DPIA is (in simple layman's terms) and why it's important.
They need to recognise the benefits of including privacy in scope from the start (i.e. 'by Design').

2. Initial screening – develop a quick-fire set of questions for the business owner or project lead, which will give you the key information you need, such as:

■ the likely personal data being used
■ any special category data, children's data or vulnerable people's data
■ the purposes of processing
■ security measures
… and so on.

Once it has been identified that personal data is involved, you can start assessing the potential risks, if any. As odd as this may sound, it is not uncommon for tech teams to be unsure at the beginning of a project whether personal data (as defined under GDPR to include personal identifiers) will in fact be involved. (A simple sketch of this screening step appears at the end of this article.)

3. DPIA 'Lite' – if there are potential risks, develop a series of questions to evaluate compliance against the core data protection principles of the GDPR.

The Agile environment can prove challenging but also rewarding. Adopting a flexible DPIA process which works in harmony with Agile is a positive step forward for innovative companies, allowing your business to develop new solutions while protecting individuals from data protection risks, as well as protecting your business from possible reputational damage.
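As mentioned above, here's a simple sketch of how the initial screening step might be captured. The questions, field names and escalation logic are illustrative assumptions – adapt them to your own DPIA process:

```python
from dataclasses import dataclass

@dataclass
class ScreeningAnswers:
    """Quick-fire answers from the business owner or project lead."""
    personal_data_involved: bool       # as defined under GDPR
    special_category_data: bool
    childrens_or_vulnerable_data: bool
    large_scale_processing: bool
    new_technology: bool               # e.g. AI, biometrics
    systematic_monitoring: bool

def needs_dpia_lite(answers: ScreeningAnswers) -> bool:
    """Rough screening logic: no personal data means no DPIA;
    any higher-risk indicator means escalate to the DPIA 'Lite'."""
    if not answers.personal_data_involved:
        return False
    return any([
        answers.special_category_data,
        answers.childrens_or_vulnerable_data,
        answers.large_scale_processing,
        answers.new_technology,
        answers.systematic_monitoring,
    ])

# Sprint one, with plenty still unknown: treat unknowns cautiously
# and re-run the screening at the agreed sprint checkpoints.
sprint_one = ScreeningAnswers(True, False, False, True, True, False)
print(needs_dpia_lite(sprint_one))  # True -> schedule the fuller assessment
```

Because answers change as sprints progress, the same screening can be re-run at each agreed checkpoint rather than treated as a one-off gate.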
Call for ban on use of live facial recognition

Live facial recognition is being used by UK police forces to track and catch criminals, and may be used by retailers to crack down on shoplifting. Is live facial recognition a force for good or a dangerous intrusion on people's privacy?

The announcement by the UK Government of plans for police to access passport photos to help catch criminals has led to a call for an immediate ban on live facial recognition surveillance. The accuracy of the algorithms behind this technology is being questioned, as are the privacy implications. Where facial recognition is used, there needs to be a strong justification for its use and robust safeguards in place to protect people.

What is live facial recognition?

Live facial recognition (LFR) is a broad term used to describe technologies that identify, catalogue and track human faces. The technology can be used in many ways, but probably the biggest topic of debate relates to the use of facial images, captured via CCTV or photos, which are processed into biometric identifiers. These identifiers typically include the unique ratios between an individual's facial features, such as their eyes, nose and mouth. These are matched against an existing biometric 'watchlist' to identify and track specific individuals.

Use of LFR by UK police forces

The Home Office says facial recognition has a 'sound legal basis', has already led to criminals being caught, and could also help the police in searching for missing or vulnerable people. Facial recognition cameras are being used to scan the faces of members of the public in specific locations. Currently, UK police forces using the technology tell people in advance about when and where LFR will be deployed, with physical notices alerting people entering areas where it's active.

However, the potential for police to access a wider range of databases, such as passports, has led a cross-party group of politicians and privacy campaigners to say both police and private companies should 'immediately stop' their use of such surveillance, citing concerns about human rights and discrimination. Silkie Carlo, Director of Big Brother Watch, says: "This dangerously authoritarian technology has the potential to turn populations into walking ID cards in a constant police line-up." It's worth noting that in 2020 the Court of Appeal in the UK ruled South Wales Police's use of facial recognition was unlawful.

Use of LFR by retailers

Some of the UK's biggest supermarkets and retailers are also turning to face-scanning technology in a bid to combat a significant rise in shoplifting. Earlier this year the ICO announced its findings from an investigation into the live facial recognition technology provided to the retail sector by the security firm Facewatch. The aim of the technology is to help businesses protect their customers, staff and stock. People's faces are scanned in real time as they enter a store, and an alert is raised if a subject of interest has entered. During its investigation the ICO raised concerns, including around the amount of personal data collected and protecting vulnerable people by making sure they don't become a 'subject of interest'. Based on information provided by Facewatch about improvements made, and ongoing improvements, the ICO concluded the company had a legitimate purpose for using people's information for the detection and prevention of crime.
Collaboration between police and retailers

Ten of Britain's largest retailers, including John Lewis, Next and Tesco, are set to fund a new police operation. Under Project Pegasus, police will run CCTV pictures of shoplifting incidents provided by the retailers against the Police National Database.

The risk of false positives

The use of live facial recognition raises significant privacy and human rights concerns, such as when it is used to match faces to a database for policing and security purposes. A 2019 study of facial recognition technology in the US by the National Institute of Standards and Technology (NIST) discovered that systems were far worse at identifying people of colour than white people. Whilst results were dependent on the algorithms used, NIST found that some facial recognition software produced false positive rates for black and Asian people 10 to 100 times higher than for white people. NIST also found the algorithms were worse at identifying women than men. Interestingly, there was no such dramatic difference in false positives in one-to-one matching between Asian and white faces for algorithms developed in Asia. Clearly there are huge concerns to be addressed, brought into sharp focus by the Black Lives Matter movement. (A short sketch of the base-rate arithmetic behind false positives appears at the end of this article.)

Privacy concerns

Any facial recognition technology capable of uniquely identifying an individual is likely to be processing biometric data (i.e. data which relates to the physical, physiological or behavioural characteristics of a person). Biometric data falls under the definition of 'special category' data and is subject to strict rules. To compliantly process special category data in the UK or European Union, a lawful basis must be identified AND a condition must also be found in GDPR Article 9 to justify the processing. In the absence of explicit consent from the individual, however, which is not practical in most LFR applications, it may be tricky to prove the processing meets Article 9 requirements.

Other privacy concerns include:

■ Lack of transparency – an intrusion into the private lives of members of the public who have not consented to, and may not be aware of, the collection of their images or the purposes for which they are being used.
■ Misuse – images retrieved may potentially be used for other purposes in future.
■ Accuracy – inaccuracies inherent within LFR reference datasets or watchlists may result in false positives and the potential for inaccurate outcomes which may be seen as biased or discriminatory.
■ Automated decision-making – if decisions which may significantly affect individuals are based solely on the outcomes of live facial recognition.

Requirement to conduct a Data Protection Impact Assessment (DPIA)

A DPIA must be conducted before organisations or public bodies begin any type of processing that is likely to result in a 'high risk' to the rights and freedoms of individuals. This requirement covers: the use of systematic and extensive profiling with significant effects on individuals; the processing of special category or criminal offence data on a large scale; and the systematic monitoring of publicly accessible places on a large scale. In our view, any planned use of LFR is very likely to fall under the requirement for the organisation or public body to conduct a DPIA in advance of commencing the activity and to take appropriate steps to ensure people's rights and freedoms are adequately protected.

So where does this leave us?
Police forces and other organisations using LFR technology need to properly assess their compliance with data protection law and guidance. This includes how police watchlists are compiled, which images are used and for what purpose, which reference datasets they use, and how accurate and representative of the population these datasets are. The potential for false positives or discriminatory outcomes should be addressed. Any organisation using LFR must be ready to demonstrate the necessity, proportionality and compliance of its use.

Meanwhile, across the Channel, members of the European Parliament have agreed to ban live facial recognition using AI in a draft of the EU's Artificial Intelligence Act. Will the UK follow suit?
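On the false-positive point above, here's a quick sketch of the base-rate arithmetic showing why even a small false match rate can overwhelm genuine alerts when scanning crowds. All of the rates and numbers are illustrative assumptions, not measured figures for any particular system:

```python
# Illustrative base-rate arithmetic for an LFR deployment at a busy location.
faces_scanned = 50_000        # faces scanned during the deployment (assumption)
on_watchlist = 10             # genuine subjects of interest present (assumption)
false_match_rate = 0.001      # 1-in-1,000 false positives (assumption)
true_match_rate = 0.90        # assumed sensitivity of the system

false_alerts = (faces_scanned - on_watchlist) * false_match_rate
true_alerts = on_watchlist * true_match_rate

# Of all alerts raised, what share are genuine watchlist matches?
precision = true_alerts / (true_alerts + false_alerts)
print(f"False alerts: {false_alerts:.0f}, true alerts: {true_alerts:.0f}")
print(f"Share of alerts that are genuine: {precision:.1%}")
# Roughly 50 false alerts against 9 genuine ones: most people flagged
# would be innocent. If the false match rate is 10 to 100 times higher
# for some demographic groups, as NIST found for some algorithms, the
# burden of wrongful stops falls disproportionately on those groups.
```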
Overcoming the challenges of data retention

Clearing out data you no longer need

How long should we keep our data? It sounds simple enough, but it's a question many businesses struggle with. The UK GDPR tells us personal data should only be kept 'as long as necessary for specified purposes'. So if your organisation is found to be storing data it doesn't really need, you could be subject to unwelcome scrutiny.

Perhaps the main risk here is if your business suffers a data breach. It could become far more serious if you couldn't provide a suitable justification for why you were still holding onto unnecessary data which was included in the breach. In effect, it means two violations of the law in one fell swoop! If you have to notify the individuals affected, what would you say?

Tackling the data we're holding too long

This does require some thought and planning. As a prerequisite, you'll need to know what personal data your organisation holds and what purposes it's being used for. Creating a data retention policy is straightforward enough, but developing a record retention schedule can be more complex. Most organisations use personal data for multiple purposes. You need to take account of each specific purpose, and identify the appropriate lawful basis for that processing, before you consider an appropriate retention period. An up-to-date Record of Processing Activities can be a real asset here.

Deciding on suitable retention periods

Firstly, check if there's a law which mandates how long certain data must be kept. Laws may dictate minimum or maximum retention periods. For example, in the UK employment law requires data on ex-employees to be kept for at least 6 years after they leave the business. In certain situations the retention period may be longer. For example, let's imagine you're a building firm, your employees come into contact with hazardous substances as part of their job, and you carry out health monitoring; the retention period for these records is much longer.

In many scenarios, however, there are no relevant laws which specify how long the data must be kept. Examples include marketing, sales and account management records. In these situations organisations need to judge for themselves what an appropriate retention period should be, and be ready to justify their decision. Take a balanced and reasonable approach, based on your reasons for processing that data.

Deciding what period is 'necessary'

Where there is no statutory requirement, we suggest speaking with internal data owners / relevant functions. The following questions should help you reach a decision on a period you can justify:

a. Are there any industry standards, guidelines or known good-practice guidelines?
b. Does the product lifecycle have an impact on retention?
c. What are the business drivers for retention? Are they justifiable?
d. What evidence is there that the data is needed for the proposed amount of time?
e. Is there potential for litigation if it's kept too long (or deleted too soon)?
f. Is it necessary to keep personal data to handle complaints?

Don't forget your processors / service providers

Controllers who use service providers acting as data processors should make sure they provide clear contractual instructions about their data retention requirements. Tell them the retention periods you need, and give specific actions they should take when a retention period ends. For example, should they delete the data, return it to you, or anonymise it?
These may be listed in a data schedule appended to the main contract or agreement.

Key takeaways

Data retention can be tackled effectively if you get key stakeholders across the business engaged and involved. Agree retention periods and get started on implementing them.

For more tips, tools and templates, why not download DPN's Data Retention Guide?
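To finish with something practical, here's a minimal sketch of what a record retention schedule can boil down to: a mapping from record type to period, legal driver and end-of-retention action. The entries and periods below are illustrative assumptions – always check the specific laws that apply to your organisation:

```python
# Illustrative retention schedule; the end-of-retention action doubles as
# the contractual instruction to pass to processors.
retention_schedule = {
    "ex-employee records": {
        "period": "6 years after leaving the business",
        "driver": "UK employment law (statutory minimum)",
        "action_at_end": "delete",
    },
    "health monitoring records (hazardous substances)": {
        "period": "long-term statutory period - check the applicable regulations",
        "driver": "health and safety law",
        "action_at_end": "review, then delete",
    },
    "marketing contact records": {
        "period": "2 years after last meaningful contact",  # judgement call
        "driver": "business need - must be justified and documented",
        "action_at_end": "delete or anonymise",
    },
}

for record_type, rule in retention_schedule.items():
    print(f"{record_type}: keep for {rule['period']}, then {rule['action_at_end']}")
```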