Is your data use compatible with what you collected it for?

Have the ways you use people’s data strayed too far from the original purpose(s)? An ICO reprimand issued to a Government department serves as a welcome reminder to be careful about what we’re using data for, who we’re sharing it with, and what they might use it for. Is what we’re doing transparent, fair and reasonable? Are the tasks we now use data for still in line with what we originally collected it for?

In this public sector case, the ICO has chosen not to issue a fine, but rather a warning with a requirement to implement specific measures. Commercial businesses are unlikely to face the same leniency.

What went wrong?

The Department for Education (DfE) received a reprimand from the ICO after it came to light that a database containing the learning records of up to 28 million children had been used to check whether people opening online gambling accounts were aged 18 or over. The ICO investigation criticised the DfE for failing to protect young people’s data from unauthorised processing by third parties, whose purposes were found to be incompatible with the original purposes the data was collected for.

The DfE has overall responsibility for the Learning Records Service (LRS) database, which provides a record of pupils’ qualifications for education providers to access. Its main purpose is to enable schools, colleges, higher education and other education providers to verify data for educational purposes – such as checking the academic qualifications of potential students, or whether they are eligible for funding. LRS is only supposed to be used for education purposes.

But the DfE also allowed access to LRS to Trust Systems Software UK Ltd (trading as Trustopia), an employment screening firm. Trustopia in turn offered its services commercially to other companies, including GB Group, which used the data to help betting companies screen new online gambling customers to confirm they were 18 or over. Trustopia had access to the LRS database from September 2018 to January 2020, carrying out searches involving 22,000 learners.

This incident followed an audit of the DfE’s data activities by the ICO in 2020, which also found the DfE had broken data protection laws in how it handled pupil data.

What were the failings?

The ICO found against the DfE in two respects:

1. It failed in its obligations (as data controller) to use and share children’s data fairly, lawfully and transparently. Individuals were unaware of what was happening and could not object to, or withdraw from, the processing.
2. It failed to have appropriate oversight to protect against unauthorised processing of personal data held on the LRS database. It was also found to have failed to ensure confidentiality by failing to prevent unauthorised access to children’s data.

The DfE’s lack of oversight and appropriate controls to protect the data enabled it to be used for other purposes, which were not compatible with the provision of educational services. In its reprimand the ICO set out clear measures the DfE needs to take to improve its data protection practices and make sure children’s learning records are properly protected.

Since the incident, the DfE has confirmed it has permanently removed Trustopia’s access to the data. In fact, it has removed access for 2,600 organisations.
A spokesperson for the DfE said the department takes the security of the data it holds “extremely seriously” and confirmed it will publish a full response to the ICO by the end of 2022 giving “detailed progress in respect of all the actions identified”.

Why wasn’t there a massive fine?

In keeping with the ICO’s Regulatory Action Policy, the ICO considered issuing a fine of £10 million. This is the amount considered to be ‘effective, proportionate and dissuasive’. However, the Information Commissioner chose not to issue a fine in this case, in line with the ICO’s revised approach to public sector enforcement, announced in June 2022. Some may find this surprising, so let’s dig deeper.

John Edwards, UK Information Commissioner, said: “No-one needs persuading that a database of pupils’ learning records being used to help gambling companies is unacceptable. Our investigation found that the processes put in place by the Department for Education were woeful. Data was being misused, and the Department was unaware there was even a problem until a national newspaper informed them.

“We all have an absolute right to expect that our central government departments treat the data they hold on us with the utmost respect and security. Even more so when it comes to the information of 28 million children.

“This was a serious breach of the law, and one that would have warranted a £10 million fine in this specific case. I have taken the decision not to issue that fine, as any money paid in fines is returned to government, and so the impact would have been minimal. But that should not detract from how serious the errors we have highlighted were, nor how urgently they needed addressing by the Department for Education.”

So Government departments can break the law and not be fined?

Well, on the face of it, in the case of data protection, yes! Mr Edwards has confirmed the ICO is trialling a new approach to public sector enforcement which will see more public reprimands without fines, in all but the most serious cases. In return, the ICO has received a commitment from the Cabinet Office and DCMS to create a cross-Whitehall senior leadership group to encourage compliance with high data protection standards.

Hmmm… how do we feel about this?

I totally understand that issuing a fine to the DfE is, ultimately, a fine against public funds for education. Which means our children could potentially be the ones who would suffer if a hefty fine was imposed. Nobody wins here. But on the flipside, could this approach significantly weaken the deterrent? Will public sector employees feel motivated enough to take appropriate steps to comply with data protection laws when there’s little risk of being fined? After all, the private sector will continue to be fined as appropriate when found to have violated data protection laws. What do you think? We’d love to hear your thoughts at info@dpnetwork.org.uk

A timely reminder?

This case serves as a helpful reminder to take care that the personal data we collect and hold as an organisation is not used for purposes which are incompatible with the original purposes. Due diligence is especially important when data is shared with other organisations, which might use it for their own purposes. We must always be clear and transparent about how we use people’s data so they have an opportunity to exercise their right to object, and indeed any other privacy rights. Ask yourself this key question: ‘Is your data use compatible with what you collected it for?’

What keeps a DPO awake at night?

A scary collection of Data Protection Officer nightmares

For DPOs the stuff of nightmares doesn’t involve monsters, falling off a cliff or being naked in a job interview. In fact, that’s small beer compared to their true nightmares: Data Transfer Impact Assessments and people in snazzy ICO enforcement jackets knocking on the office door.

No, being a DPO isn’t for the faint-hearted. It’s a perilous existence where hardy souls must navigate a hostile wilderness of data protection hazards. It’s an ever-changing wilderness, too. Just when you’ve frightened away one data protection predator, another pops up from nowhere to take its place. And remember, this must be achieved in a ruthless economic climate where every penny counts.

So, what’s the really scary stuff? The scariest of the slithery data protection monsters hiding in the semi-opened cupboard? I asked a few friendly Data Protection Officers: ‘What keeps you awake at night?’

Seven chilling privacy nightmares

1. Fear of the unknown – DPO, education sector

Being worried about what staff in my organisation are doing with personal data that I know nothing about (which they know they probably shouldn’t be doing). Another big nightmare at the moment is trying to unravel the intricacies of IDTAs and SCCs for both the UK and EU, whilst factoring in other international data protection regimes that my organisation is subject to by virtue of their extra-territorial scope – I see you, China! And a general worry that I’m going to miss, and therefore not mitigate, a risk. The pressure of being seen as the person with all the answers, and ultimately the one responsible (or who will be blamed) if anything goes wrong, is not the stuff of dreams.

2. The recurring nightmare of data flowing overseas – Director of Privacy, financial sector

What keeps me awake at night? Mapping international data flows. What sends me to sleep… counting DTIAs!

3. Drowning in a sea of paperwork – DPO, publishing sector

Keeping track of changing processing activities in a large organisation without blocking progress through over-administration. Plus ensuring appropriate documentation of the growing share of online Data Processing Agreements concluded with large suppliers (like pre-signed downloadable SCCs from Google, Meta …)

4. Encircled by continually moving parts – DPO, charity sector

Facing our third legislative change in 5 years and the on/off nature of what that may be. The ability to keep on top of the “what, when, how, and why” of the technical changes – horizon scanning versus meeting current needs, and the complexities of planning to implement uncertain changes with limited resources. All whilst maintaining consistency and expertise in the advice and guidance, so staff make appropriate decisions in the here and now. Maintaining a productive, pragmatic, commercially minded, problem-solving attitude to data protection is enough to keep anyone awake at night, without factoring in constantly moving legislative goalposts.

5. Hounded by familiar, but angry faces – DPO, hospitality sector

Employee-related Data Subject Access Requests. We’re not a big business, we don’t get many DSARs, and we don’t have the fancy technology. But lay-offs this year have led to a persistent stream of DSARs. As soon as one is nearly cleared, another one drops (it’s as if they’re planning it!). Despite support from HR, the requests are ultimately my responsibility to handle. I don’t have a team to support me, nor on-tap internal legal support.
Sometimes there is no assuaging people, and yes, we have heard from the ICO after someone complained to them about our response. I often press send, and lie awake praying we didn’t disclose something we shouldn’t have, or miss something we should.

Not all DPOs lie awake at night. In fact, some hit the hay and are out like a light. But what are their daytime nightmares made of?

6. Being held to ransom – Matthew Kay, DPO, Metro Bank

When I was first asked to write this, my opening thought was that, being quite a deep sleeper, it takes quite a lot to keep me awake! Quickly realising this wasn’t what DPN was after, I came to the conclusion that the data protection challenges I’m currently worrying about centre around two things. First is the enhanced threat resulting from the war in Ukraine, and ensuring appropriate technical measures are in place to see off any potential cyber-attacks; second is closely monitoring the perceived increase in insider threats to organisations, resulting from the cost of living crisis.

7. Encircled by ICO enforcement jackets – Michael Bond, Group DPO, News UK

As a father of two young boys, not much keeps me up at night beyond about 8.30pm! But as I settle under the covers and wait for sleep, let me envisage my worst nightmare instead. It’s a quiet Friday afternoon and, with one eye on the clock, a phone call comes in: “Hello, it’s the ICO. Did you know that large volumes of personal data originating from your brands are now publicly available online for all to see?” … a long pause. The case officer goes on: “Yes, the data looks to be a mix of hundreds of thousands of customer profiles, as well as what appears to be employee personnel files.” As a bead of cold sweat rolls down my neck, the ICO case officer asks me: “Why haven’t you notified us about this incident? It’s very serious, as I’m sure you’re aware, and we’re going to have to take immediate action; enforcement officers are on their way…” I wake, startled. Phew. Don’t worry, just a dream… *the phone rings – caller ID – Wilmslow* Yikes!

I’ll leave you with one final, spine-chilling thought. A new type of cosmic privacy horror. I’ve heard rumours a social media platform, one with a controversial new proprietor, could have a potential vacancy for a new…

…Data Protection Officer.

Are we conducting too many DPIAs – or not enough?

How to decide when to conduct Data Protection Impact Assessments

Make no mistake, Data Protection Impact Assessments (DPIAs) are a really useful risk management tool. They help organisations to identify likely data protection risks before they materialise, so corrective action can be taken – protecting your customers, your staff and the interests of the business. DPIAs are a key element of the GDPR’s focus on accountability and Data Protection by Design.

It’s not easy working out when a DPIA is necessary, or when it might be useful even if not strictly required by law. Businesses need to be in control of their exposure to risk, but don’t want to burden their teams with unnecessary work. So it falls to privacy professionals to use their judgement in what can be a delicate balancing act. Lack of clarity around when DPIAs are genuinely needed could lead some businesses to carry out far more DPIAs than necessary – whilst others carry out too few.

When are DPIAs required?

We should check whether a DPIA is required during the planning stage of new projects, or when changes are being planned to existing activity. Where needed, DPIAs must be conducted BEFORE the new processing begins. DPIAs are legally required when the processing of personal data is likely to involve a ‘high risk’ to the rights and freedoms of individuals.

What does ‘high risk’ look like?

Which types of activity might fall into ‘high risk’ isn’t always clear. Fortunately the ICO has given examples of processing likely to result in high risk to help you make this call. Regulated sectors, such as financial services and telecoms, have specific regulatory risks to consider too. Give consideration to the scope, the types of data used and the manner of processing. It’s wise also to take account of any protective measures already in place. In situations where the nature, scope, context and purposes of processing are very similar to another activity for which a DPIA has already been carried out, you may not need to conduct another.

Three key steps for a robust DPIA screening process

1. Engage your key teams

In larger organisations, building good relationships with key teams such as Procurement, IT, Project Management, Legal and Information Security can really help. They might hear about projects involving personal data before you do. Make sure they’re aware when a DPIA may be required. This means they’ll be more likely to ‘raise a hand’ and let you know when a project which might require a DPIA comes across their desk. In smaller businesses there may still be others who can ‘raise a hand’ and let you know about relevant projects. Work out who those people are.

2. Confirm the business’s appetite for risk

Is your organisation the sort which only wants DPIAs to be carried out when strictly required by law? Or perhaps you want a greater level of oversight, choosing to carry out DPIAs as your standard risk assessment methodology for any significant project involving personal data – even those which appear to involve lower levels of risk to individuals. Logic says you’ll never be 100% sure unless you carry out an assessment, and DPIAs are a tried and tested way to give you oversight and confidence. But this approach requires more time, resources and commitment from the business. You need to strike the right balance for your organisation.

3. Adopt a DPIA screening process

If you don’t currently use a screening process, you really should consider adopting one.
It’s a quick and methodical way to identify whether or not a project requires a DPIA. You can use a short set of standard questions, which can be given to stakeholders to complete and return, or discussed in a call. That way the question ‘Is a DPIA needed or not?’ can be answered rapidly and with confidence. Personally I prefer to arrange a short call with the stakeholders, using my screening questionnaire as a prompt to guide the discussion. Don’t forget to keep a record of your decisions – including when you decide a DPIA isn’t necessary.

Try not to burden colleagues with unnecessary assessments for every project where there really is minimal risk; a blanket approach is unlikely to be well received. Raise awareness and have a built-in DPIA screening process to make sure you catch the projects which really do warrant a deeper dive.
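To illustrate, here’s a minimal sketch of what a screening checklist could look like in code. The questions and the decision rule are hypothetical, loosely modelled on the ICO’s published examples of processing likely to result in high risk; your own screening questions and threshold should reflect your sector guidance and your organisation’s appetite for risk.

```python
# A minimal sketch of a DPIA screening checklist. The questions and the
# decision rule are hypothetical - adapt them to your own sector guidance
# and risk appetite.

SCREENING_QUESTIONS = {
    "innovative_tech": "Does the project use innovative technology (e.g. AI, biometrics)?",
    "special_category": "Will special category or criminal offence data be processed?",
    "large_scale_monitoring": "Is the processing large scale, or does it involve systematic monitoring?",
    "vulnerable_individuals": "Does it involve children or other vulnerable individuals?",
    "profiling_automated": "Does it involve profiling or automated decisions with significant effects?",
    "data_matching": "Will data sets from different sources be combined or matched?",
    "denial_of_service": "Could individuals be denied a service or benefit as a result?",
}


def screen_project(answers):
    """Given a dict of {question_key: True/False}, return whether a DPIA is
    recommended and which high-risk indicators were triggered."""
    triggered = [key for key, answered_yes in answers.items() if answered_yes]
    # Simple rule of thumb: any single indicator means a DPIA should at least
    # be considered; record the decision either way.
    return len(triggered) > 0, triggered


if __name__ == "__main__":
    answers = {key: False for key in SCREENING_QUESTIONS}
    answers["special_category"] = True
    answers["vulnerable_individuals"] = True
    dpia_recommended, indicators = screen_project(answers)
    print(f"DPIA recommended: {dpia_recommended}; triggered: {indicators}")
```

However you implement it, the value is in asking the same questions consistently and keeping a record of the answers and the decision reached.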

Is your marketing profiling lawful, fair and transparent?

ICO fines catalogue retailer £1.35 million for ‘invisible processing’

Many companies want to know their customers better. This is not a bad thing. Information gathered about people is regularly used for a variety of activities, including improving products and services, personalisation, or making sure marketing campaigns are better targeted. However, the significant fine handed to catalogue retailer Easylife highlights why companies need to be transparent about what they do, have a robust lawful basis, be careful about making assumptions about people and take special care with special category data. It also shows that profiling is not limited to the realms of online tracking and the adtech ecosystem; it can be a much simpler activity.

What did the catalogue retailer do?

Easylife had what were termed ‘trigger products’ in its Health Club catalogue. If a customer purchased a certain product, it triggered a marketing call to the individual to try and sell other related products. This was done using a third-party call centre. Using previous transactions to tailor future marketing is not an unusual tactic, often referred to as ‘Next Best Action’ (NBA).

The key point in this case is that Easylife inferred customers were likely to have certain health conditions based on their purchase of trigger products. For example, if a customer bought a product which could be associated with arthritis, this triggered a telemarketing call to try and sell other products popular with arthritis sufferers – such as glucosamine and bio-magnetic joint patches. Data relating to medical conditions, whether provided by the individual or inferred from other data, is classified as special category data under data protection law, and handling this type of data requires special conditions to be met.

The ICO’s ruling

To summarise the ICO’s enforcement notice, Easylife was found to have failed to:

have a valid lawful basis for processing
meet the requirement for an additional condition for processing special category data
be transparent about its profiling of customers

It was found to have conducted ‘invisible processing’ of 145,000 customers. There were no complaints raised about this activity; it only came to light due to a separate ICO investigation into contraventions of the telemarketing rules. The ICO says it wasn’t surprised no one had complained, as people just wouldn’t have been aware this profiling was happening, due to the lack of transparency. It just goes to show ICO fines don’t always arise as a result of individuals raising complaints.

Key findings

Easylife argued it was just processing transactional data. The ICO ruled that when this transactional data was used to influence its telemarketing decisions, it constituted profiling. The ICO said that while data on customer purchases constituted personal data, when it was used to make inferences about health conditions, this became the processing of special category data. The ICO said this was regardless of the statistical confidence Easylife had in the profiling it had conducted.

Easylife claimed it was relying on the lawful basis of Legitimate Interests. However, the Legitimate Interests Assessment (LIA) the company provided to the ICO during its investigation actually related to a previous activity, in which health-related data wasn’t used. When processing special category data, organisations need to make sure they not only have a lawful basis, but also meet a condition under Article 9 of the UK GDPR.
The ICO advised the appropriate basis for handling this special category data was the explicit consent of customers. In other words, legitimate interests was not an appropriate basis to use. Easylife was found to have no lawful basis, nor a condition under Article 9.

It was also ruled there was a lack of transparency; customers hadn’t been informed profiling was taking place. Easylife’s privacy notice was found to have a ‘small section’ which stated how personal data would be used. This included the following:

* Keep you informed about the status of your orders and provide updates or information about associated products or additional products, services, or promotions that might be of interest to you.
* Improve and develop the products or services we offer by analysing your information.

This was ruled inadequate, and Easylife was found to have failed to give enough information about the purposes for processing and the lawful bases for processing.

The ICO’s enforcement notice points out it would have expected a Data Protection Impact Assessment to have been conducted for the profiling of special category data. This had not been done.

The Data Processing Agreement between Easylife and its processor, the third-party call centre, was also scrutinised. While it covered key requirements such as confidentiality, security, sub-contracting and termination, it failed to indicate the types of personal data being handled.

Commenting on the fine, John Edwards, UK Information Commissioner, said: “Easylife was making assumptions about people’s medical condition based on their purchase history without their knowledge, and then peddled them a health product – that is not allowed. The invisible use of people’s data meant that people could not understand how their data was being used and, ultimately, were not able to exercise their privacy and data protection rights. The lack of transparency, combined with the intrusive nature of the profiling, has resulted in a serious breach of people’s information rights.”

Alongside the £1.35 million fine, Easylife was fined a further £130,000 under PECR for making intrusive telemarketing calls to individuals registered on the Telephone Preference Service. Currently the maximum fine for contravening the marketing rules under PECR is £500,000, much lower than potential fines under the DPA 2018/UK GDPR.

Update March 2023: The ICO announced a reduction in the GDPR fine from £1.35 million to £250,000.

6 key takeaways

1. If you are profiling your customers, try to make sure this is based on facts. Making the type of assumptions Easylife was making will always carry risks.
2. Be transparent about your activities. This doesn’t mean you have to use the precise term ‘profiling’ in your privacy notice, but the ways in which you use personal information should be clear.
3. Make sure you clearly state the lawful bases you rely upon in your privacy notice. It can be helpful and clear to link lawful bases to specific business activities.
4. If you’re processing special category data, whether collected directly or inferred from other data, make sure you can meet a condition under Article 9. For marketing activities the only option is explicit consent.
5. If you’re conducting profiling using special category data, carry out a DPIA.
6. Always remember the marketing rules under PECR for whatever marketing channel you’re using. For telemarketing, if you don’t have the consent of individuals, be sure to screen lists against the TPS.
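To make the mechanics described above concrete, here’s a minimal, hypothetical sketch of how ‘trigger product’ profiling turns ordinary transactional data into inferred special category data. The product codes, conditions and offers are invented for illustration; the point is simply that the moment a purchase is used to infer a health condition, an Article 9 condition (for marketing, explicit consent) and clear transparency information are needed.

```python
# A minimal, hypothetical sketch of 'trigger product' profiling of the kind
# described in this case. Product codes, conditions and offers are invented
# for illustration; the point is that ordinary transactional data (what
# someone bought) is used to infer a health condition, which is special
# category data regardless of how statistically confident the inference is.

# Hypothetical mapping from purchased products to inferred health conditions
TRIGGER_PRODUCTS = {
    "JAR_OPENER_001": "arthritis",
    "BP_MONITOR_002": "high blood pressure",
}

# Hypothetical follow-up products used to target a telemarketing call
FOLLOW_UP_OFFERS = {
    "arthritis": ["glucosamine supplement", "bio-magnetic joint patch"],
    "high blood pressure": ["low-sodium cookbook"],
}


def plan_call(customer_id, purchased_product):
    """Return a telemarketing 'next best action' if a trigger product was bought."""
    condition = TRIGGER_PRODUCTS.get(purchased_product)
    if condition is None:
        return None  # no inference made - still just transactional data
    # From this point an inference about health exists, so the record below is
    # special category data: it needs an Article 9 condition (for marketing,
    # explicit consent) and must be explained in the privacy notice.
    return {
        "customer_id": customer_id,
        "inferred_condition": condition,
        "offers": FOLLOW_UP_OFFERS[condition],
    }


print(plan_call("C-1001", "JAR_OPENER_001"))
```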

Is bias and discrimination in AI a problem?

Artificial Intelligence – good governance will need to catch up with the technology

The AI landscape

We hear about the deployment and use of AI in many settings, and the types and frequency of use are only going to increase. Major uses include:

Cybersecurity analysis to identify anomalies in IT structures
Automating repetitive maintenance tasks and guiding technical support teams
Ad tech to profile and segment audiences for advertising targeting and to optimise advertising buying and placement
Reviewing job applications to identify the best-qualified candidates in HR
Research scientists looking for patterns in health data to identify new cures for cancer
Predicting equipment failure in manufacturing
Detecting fraud in banking by analysing irregular patterns in transactions
TV and movie recommendations for Netflix users
Inventory optimisation and demand forecasting in retail and transportation
Programming cars to self-drive

Overall, the different forms of AI will serve to improve our lives, but from a privacy point of view there is a danger that the governance around AI projects is lagging behind the evolving technology. In that context, tucked away in its three-year plan, published in July, the ICO highlighted that AI-driven discrimination might become more of a concern. In particular, the ICO is planning to investigate concerns about the use of algorithms to sift recruitment applications.

Why recruitment applications?

AI is used widely in the recruitment industry. A Gartner report suggested that all recruitment agencies used it for some of their candidate sifting. The CEO of the US recruitment website ZipRecruiter is quoted as saying that three-quarters of submitted CVs are read by algorithms. There is plenty of scope for data misuse, hence the ICO’s interest.

The Amazon recruitment tool – an example of bias and discrimination

The ICO is justified in its concerns around recruitment AI. Famously, Amazon developed its own tool to sift through applications for developer roles. The model was based on 10 years of recruitment data for an employee pool that was largely male. As a result, the model discriminated against women and reinforced the gender imbalance by filtering out applications from women.

What is AI?

AI can be defined as “using a non-human system to learn from experience and imitate human intelligent behaviour”. The reality is that most “AI” applications are machine learning. That is, models are trained to calculate outcomes using data collected in the past. Pure AI is technology designed to simulate human behaviour. For simplicity, let’s call machine learning AI.

Decisions made using AI are either fully automated or made with a “human in the loop”. The latter can safeguard individuals against biased outcomes by providing a sense check of outcomes. In the context of data protection, it is becoming increasingly important that those impacted by AI decisions should be able to hold someone to account. You might hear that all the information is in a “black box” and that how the algorithm works cannot be explained. This excuse isn’t good enough – it should be possible to explain how a model has been trained and to risk assess that activity.

How is AI used?

AI can be used to make decisions:

1. A prediction – e.g. you will be good at a job
2. A recommendation – e.g. you will like this news article
3. A classification – e.g. this email is spam

The benefits of AI

AI is generally a force for good:
1. It can automate a process and save time
2. It can optimise the efficiency of a process or function (often seen in factories or processing plants)
3. It can enhance the abilities of individuals – often by speeding up processes

Where do data protection and AI intersect?

An explanation of AI-assisted decisions is required when:

1. the decision is made without any human involvement, and
2. it produces legal or similarly significant effects on an individual – e.g. not getting a job.

Individuals should expect an explanation from those accountable for an AI system. Anyone developing AI models using personal data should ensure that appropriate technical and organisational measures are in place to integrate safeguards into the processing.

What data is in scope?

Personal data used to train a model
Personal data used to test a model
On deployment, personal data used or created to make decisions about individuals

If no personal data is included in a model, the AI is not in scope for data protection.

How to approach an AI project

Any new AI processing involving personal data would normally require a Data Protection Impact Assessment (DPIA). The DPIA is useful because it provides a vehicle for documenting the processing, identifying the privacy risks and identifying the measures or controls required to protect individuals. It is also an excellent means of socialising the understanding of AI processing across an organisation. Introducing a clear governance framework around any AI projects will increase project visibility and reduce the risks of bias and discrimination.

Where does bias and discrimination creep in?

Behaviour prohibited under the Equality Act 2010 is anything that discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:

Age
Disability
Gender reassignment
Marriage and civil partnership
Pregnancy and maternity
Race
Religion and belief
Sex
Sexual orientation

When using an AI system, you need to ensure – and be able to show – that your decision-making process does not result in discrimination.

Our Top 10 Tips

1. Ask how the algorithm has been trained – the “black box” excuse isn’t good enough
2. Review the training inputs to identify possible bias arising from the use of historic data
3. Test the outcomes of the model – this seems obvious, but it isn’t done regularly enough (see the sketch at the end of this article)
4. Consider the extent to which the past will predict the future when training a model – recruitment models will have an inherent bias if based only on past successes
5. Consider how to compensate for bias built into the training – a possible form of positive discrimination
6. Have a person review the outcomes of the model if it is challenged, and give that person authority to challenge
7. Incorporate your AI projects into your data protection governance structure
8. Ensure you’ve done a full DPIA identifying risks and mitigations
9. Ensure you’ve documented the processes and decisions to incorporate into your overall accountability framework
10. Consider how you will address individual rights – can you easily identify where personal data has been used, or has it been fully anonymised?

In summary

AI is complex and fast-changing. Arguably, the governance around the use of personal data is having to catch up with the technology. Even when people believe these models are mysterious and difficult to understand, a lack of explanation for how they work is not acceptable.
In future, clearer good governance processes will have to develop to understand the risks and consider ways of mitigating them, so that data subjects are not disadvantaged.
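As a companion to tip 3 above, here’s a minimal sketch of one way to test a model’s outcomes for bias: comparing selection rates across a protected characteristic. The data, groups and threshold are hypothetical; a real assessment would use your own model’s predictions and fairness metrics appropriate to your context, and a flagged result is a prompt for investigation, not a legal pass/fail test.

```python
# A minimal sketch of testing a model's outcomes for bias by comparing
# selection rates across a protected characteristic. The data, groups and
# 0.8 threshold are hypothetical.

from collections import defaultdict


def selection_rates(decisions):
    """Positive-outcome rate per group, from a list of {'group', 'selected'} dicts."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        positives[d["group"]] += 1 if d["selected"] else 0
    return {group: positives[group] / totals[group] for group in totals}


def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())


# Hypothetical shortlisting decisions from a recruitment model
decisions = [
    {"group": "men", "selected": True}, {"group": "men", "selected": True},
    {"group": "men", "selected": True}, {"group": "men", "selected": False},
    {"group": "women", "selected": True}, {"group": "women", "selected": False},
    {"group": "women", "selected": False}, {"group": "women", "selected": False},
]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))

# A commonly cited (US-derived) rule of thumb flags a ratio below 0.8 for review
if ratio < 0.8:
    print("Potential adverse impact - review training data and model outcomes")
```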

Data Retention Guide

Data retention tools, tips and templates

This comprehensive guide takes you through the key steps and considerations when approaching data retention. Whether you’re starting out or reviewing your retention policy and schedules, we hope this guide will support your work. This guide was developed and written by data protection specialists from a broad range of sectors. A huge thank you to all those who made it possible.

Data Subject Access Request Guide

Being prepared and handling DSARs

Handling Data Subject Access Requests can be complex, costly and time-consuming. How do you make sure you’re on the front foot, with adequate resources, understanding and the technical capability to respond within a tight legal timeframe? This guide aims to take you through the key steps to consider, such as:

Being prepared
Retrieving the personal data
Balancing complex requests
Applying redactions & exemptions
How technology can help

Dark patterns: is your website tricking people?

Why should we be concerned about dark patterns?

Do you ever feel like a website or app has been designed to manipulate you into doing things you really don’t want to do? I bet we all have. Welcome to the murky world of ‘dark patterns’. The term was coined in 2010 by user experience specialist Harry Brignull, who defines dark patterns as “features of interface design crafted to trick people into doing things they might not want to do and which benefit the business”.

Whenever we use the internet, businesses are fighting for our attention, and it’s often hard for them to cut through. And we often don’t have the time or inclination to read the small print; we just want to achieve what we set out to do. Businesses can take advantage of this. Sometimes they make it difficult to do things which should, on the face of it, be really simple – like cancelling or saying no. They may try to lead you down a different path that suits their business better and leads to higher profits. These practices are in the spotlight, and businesses could face more scrutiny in future. Sometimes dark patterns are deliberate; sometimes they may be accidental.

What are dark patterns?

There are many interpretations of dark patterns and many different examples. Here’s just a handful to give you a flavour – it’s by no means an exhaustive list.

‘Roach Motel’ – this is where the user journey makes it easy to get into a situation, but hard to get out. (Perhaps ‘Hotel California’ might have been a better name?) For example, when it’s easy to sign up to a service, but very difficult to cancel it because the option is buried somewhere you wouldn’t think to look. And when you eventually find it, you still have to wade through several messages urging you not to cancel.

FOMO (Fear Of Missing Out) – this is when, for example, you’re hurried into making a purchase by ‘urgent’ messages showing a countdown clock, or alert messages saying the offer will end imminently.

Overloading – this is when we’re confronted with a large number of requests, information, options or possibilities to prompt us to share more data, or to prompt us to unintentionally allow our data to be handled in a way we’d never expect.

Skipping – this is where the design of the interface or user experience is done in such a way that we forget, or don’t think about, the data protection implications. Some cookie notices are designed this way.

Stirring – this affects the choices we make by appealing to our emotions or using visual cues. For example, using a certain colour for buttons you’d naturally click for routine actions – getting you into the habit of clicking on that colour – then suddenly using that colour button for the paid-for service, and making the free option you were after hard to spot.

Subliminal advertising – the use of images or sounds to influence our responses without us being consciously aware of it. This is banned in many countries as deceptive and unethical.

Social engineering?

Some argue the ‘big players’ in search and social media have been the worst culprits in the evolution and proliferation of dark patterns. For instance, the Netflix documentary ‘The Social Dilemma’ argued that Google and Facebook have teams of engineers mining behavioural data for insights on user psychology to help them evolve their interfaces and user experience.
The mountain of data harvested when we search, browse, like, comment, post and so on can be used against us, to drive us to behave the way they want us to – without us even realising. The rapid growth of AI could push this to a whole new level if left unchecked.

The privacy challenge

Unsurprisingly, there’s a massive cross-over between dark patterns and negative impacts on users’ privacy. The way user interfaces are designed can play a vital role in good or bad privacy. In the EU, discussions about dark patterns (under the EU GDPR) tend to concentrate on how dark patterns can be used to manipulate buyers into giving their consent – and point out that consent would be invalid if it’s achieved deceptively or not given freely.

Here are some specific privacy-related examples:

Tricking you into installing an application you didn’t want, i.e. consent is not unambiguous or freely given.
When the default privacy settings are biased to push you in a certain direction. For example, on cookie notices where it’s much simpler to accept than object, and objecting can take more clicks. Confusing language may also be used to manipulate behaviour.
‘Privacy Zuckering’ – a term used for when you’re tricked into publicly sharing more information about yourself than you really intended to. Named after Facebook’s co-founder and CEO, but it isn’t unique to them; LinkedIn, for example, has been fined for this.
When an email unsubscribe link is hidden within other text.
Where more screen space is given to the options the business wants you to take, and less space to options the customer might prefer – for example, promoting a recurring subscription over a one-off purchase.

Should businesses avoid using dark patterns?

Many will argue YES! Data ethics is right at the heart of the debate. Businesses should ask themselves whether what they are doing to encourage sales is fair and reasonable, and whether their practices could be seen as deceptive. Are they doing enough to protect their customers? Here are just a few reasons to avoid using dark patterns:

They annoy your customers and damage their experience of your brand. A survey by HubSpot found 80% of respondents said they had stopped doing business with a company because of a poor customer experience. If your customers are dissatisfied, they can and will switch to another provider.
They could lead to higher website abandonment rates.
Consent gathered by manipulating consumer behaviour is unlikely to meet the GDPR consent requirements, i.e. freely given, specific, informed and unambiguous. So your processing could turn out to be unlawful.

Can these effects happen by mistake?

Dark patterns aren’t always deliberate. They can arise due to loss of focus, short-sightedness, poorly trained AI models, or a number of other factors. However, they are more likely to occur when designers are put under pressure to deliver on time for a launch date, particularly when commercial objectives are prioritised above all else. Cliff Kuang, author of “User Friendly”, says there’s a tendency to make it easy for users to perform the tasks that suit the company’s preferred outcomes. The controls for limiting functionality, or privacy controls, can sometimes be an afterthought.

What can businesses do to prevent this?

In practice it’s not easy to strike the right balance. We want to provide helpful information to help our website and app users make decisions. It’s likely we want to ‘nudge’ them in the ‘right’ direction.
But we should be careful we don’t do this in ways which confuse, mislead or hurry users into doing things they don’t really want to do. It’s not as if the web is unique in this aim (it’s just that we have a ton of data to help us). In supermarkets, you used to always see sweets displayed beside the checkout: a captive queuing audience, and if it didn’t work on you, a clear ‘nudge’ to your kids! But it’s a practice now largely frowned upon.

So how can we do ‘good sales’ online without using manipulation or coercion? It’s all about finding a healthy balance. Here are a few suggestions which might help your teams:

Train your product developers, designers and UX experts – not only in data protection but also in dark patterns and design ethics. In particular, help them recognise dark patterns and understand the negative impacts they can cause.
Explain the principles of privacy by design and the conditions for valid consent.
Don’t allow business pressures and priorities to override good ethics and privacy by design. Remember data must always be collected and processed fairly and lawfully.

Can dark patterns be regulated?

The proliferation of dark patterns over recent years has largely been unrestricted by regulation. In the UK and Europe, where the UK and EU GDPR are in force, discussions about dark patterns have mostly gravitated around consent – where that consent may have been gathered by manipulation and may not meet the required conditions.

In France, the CNIL (France’s data protection authority) has stressed the design of user interfaces is critical to help protect privacy. In 2019 the CNIL took the view that consent gathered using dark patterns does not qualify as valid, freely given consent.

Fast forward to 2022 and the European Data Protection Board (EDPB) has released guidelines: “Dark patterns in social media platform interfaces: How to recognise and avoid them”. These guidelines offer examples, best practices and practical recommendations to designers and users of social media platforms on how to assess and avoid dark patterns in social media interfaces which contravene GDPR requirements. The guidance also contains useful lessons for all websites and applications. It reminds us we should take into account the principles of fairness, transparency, data minimisation, accountability and purpose limitation, as well as the requirements of data protection by design and by default.

We anticipate EU regulation of dark patterns may soon be coming our way. The International Association of Privacy Professionals (IAPP) recently said: “Privacy and data protection regulators and lawmakers are increasingly focusing their attention on the impacts of so-called ‘dark patterns’ in technical design on user choice, privacy, data protection and commerce.”

Moves to tackle dark patterns in the US

The US Federal Trade Commission has indicated it’s giving serious attention to businesses’ use of dark patterns and has issued a complaint against Age of Learning over the use of dark patterns in its ABC Mouse service. Looking at state-led regulation, in California modifications to the CCPA have been proposed to tackle dark patterns. The Colorado Privacy Act also looks to address this topic.

What next?

It’s clear businesses should be mindful of dark patterns and consider taking an ethical stance to protect their customers and website users. Could your website development teams be intentionally or accidentally going too far?
It’s good practice to train website and development teams so they can prevent dark patterns occurring, intentionally or by mistake.
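To show what a simple check might look like in practice, here’s a minimal sketch of an automated review of a cookie consent banner configuration. The configuration format and the rules are hypothetical – real banners vary widely – but it captures two of the patterns described above: making ‘reject’ harder to reach than ‘accept’, and pre-selecting non-essential purposes by default.

```python
# A minimal, hypothetical sketch of auditing a cookie banner configuration
# for two dark patterns discussed above: asymmetric effort (rejecting takes
# more clicks than accepting) and pre-ticked non-essential purposes.
# The config structure is invented for illustration.

def audit_consent_banner(config):
    """Return a list of potential dark-pattern findings for a banner config."""
    findings = []

    # Rejecting should be no harder than accepting (same number of clicks).
    if config["clicks_to_reject_all"] > config["clicks_to_accept_all"]:
        findings.append(
            "Rejecting takes more clicks than accepting - consent may not be freely given"
        )

    # Non-essential purposes should be off by default (no pre-ticked boxes).
    for purpose in config["purposes"]:
        if not purpose["essential"] and purpose["default_on"]:
            findings.append(f"Non-essential purpose pre-selected by default: {purpose['name']}")

    return findings


# Hypothetical example configuration
banner = {
    "clicks_to_accept_all": 1,
    "clicks_to_reject_all": 3,
    "purposes": [
        {"name": "strictly necessary", "essential": True, "default_on": True},
        {"name": "advertising", "essential": False, "default_on": True},
    ],
}

for finding in audit_consent_banner(banner):
    print(finding)
```

A check like this is no substitute for design ethics training or legal review, but building it into a release checklist can catch obvious asymmetries before they go live.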