Why the Right of Access is broken

DSARs are an overly onerous and often pointless exercise

There’s been murmuring for years about the ‘weaponisation’ of the right of access: individuals submitting Data Subject Access Requests in an effort to ‘dig up dirt’ for another matter. Maybe during an unfair dismissal claim, a disciplinary case, an employment tribunal, an ongoing complaint or prior to litigation. Organisations sometimes believe the person is submitting a DSAR just to be downright awkward, yet find themselves unable to meet the threshold to refuse the request (in part, or in full) as ‘manifestly’ unfounded or excessive.

Businesses are spending excessive amounts of time responding to trickier requests. We’re told we need to be prepared and have enough resources to handle requests. But is it reasonable to expect small-to-medium sized organisations to have teams on standby for 6-7 requests a year? Often one or two people have to dedicate hours… days, to respond by the statutory deadline. This can be a whole calendar month where they’ve done little else.

We also know countless local councils, police services, NHS trusts and other public bodies have been on the receiving end of official ICO reprimands for failing to address their massive backlogs of requests. Something needs to change.

It’s getting worse, not better. Anecdotally, I’m hearing the number of requests is steadily increasing. No one is immune. Companies that have never received a DSAR have had the horror of their first one from a disgruntled ex-employee. Charities, housing associations, travel operators, retailers and publishers are all in the firing line.

The problem

Fulfilling this right is often not straightforward. The ICO’s guidance is over 100 pages long. I can deliver a whole day’s DSAR training session and not cover every nuanced consideration. The specific circumstances of a request can throw up new challenges. Yes, we can always improve our procedures and make efficiencies.
But ultimately, with difficult requests there will always be time-consuming issues which can’t be automated. There may be brilliant software available to streamline the process. But many small-to-medium sized companies and charities, with limited budgets, will struggle to justify the cost of new technology when the volume of requests is not very high and fluctuates significantly. Some redaction technology can almost make things worse by over-redacting.

Then, after all our efforts, are people happy with what they receive? It seems not. While I can’t find more recent figures, the ICO’s 2023/24 Annual Report reveals nearly 40,000 complaints were received by the regulator. A staggering 39% of these concerned DSARs. Those submitting requests are clearly further disgruntled with what they receive.

By June 2026 UK organisations will be legally required to have a data protection complaints procedure. And yes, this will inevitably mean a percentage of the DSARs you get out the door will come straight back in as a formal complaint. More time and effort, while the individual’s frustration grows. I fear we’ll see public bodies accused not just of failing to address a massive backlog of DSARs, but a massive backlog of unresolved data protection complaints too.

Of course, we’re not all saints. Some organisations do a bad job with DSARs. I’ve seen cases where individuals have been provided with reams of overly redacted documents which make no sense. Some organisations blatantly ignore requests. A care home manager has been personally fined for deliberately destroying and withholding information when faced with a DSAR.

Then there are the cases where bad practices are exposed – including the high-profile ones. Nigel Farage successfully revealed via a DSAR that NatWest had closed his Coutts account due to his political opinions, and then via a second request exposed how NatWest employees had made disparaging comments about him.
But I like to believe there are plenty of organisations trying their very best to do the right thing. I work with some who spend painstaking hours retrieving, assessing and redacting, only to look at what they’re providing and think ‘is this of any value to the person?’ Often, a DSAR seems far from the most suitable route for the individual to get the information or resolution they’re seeking.

If we take a step back to why this right exists in data protection law, it seldom feels like DSARs are being submitted in the spirit of what legislators intended. GDPR states:

The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data…

Recital 63 gives us further clarification:

A data subject should have the right of access to personal data which have been collected concerning him or her, and to exercise that right easily and at reasonable intervals, in order to be aware of, and verify, the lawfulness of the processing.

For its part, the ICO says:

It is a fundamental right for individuals. It helps them understand how and why you are using their data and check you are doing it lawfully.

In reality? The times when an individual is actually expressing an interest in the ‘lawfulness of processing’ are, in my experience, exceptionally rare. In the past twenty years, I can’t think of any case I’ve dealt with where the requester has stated an interest in this.

So, my interest was piqued by a proposal in the European Commission’s Digital Omnibus, which is looking at amending aspects of GDPR. It suggests requests could be rejected, or a fee charged, if a controller considers the request is being used by someone for purposes other than the ‘protection of their personal data’. On the face of it, this seems a good idea. An attempt to take the right of access back to what the legislation originally intended it to be.
But the devil will be in the detail. How would organisations make this judgement call? Will people just get smart and add new wording to make sure their requests meet the bar?

If the EU does proceed with significant changes, I would encourage the UK Government to follow suit. Others in my field may gasp and shake their heads, but I was disappointed the UK Data (Use and Access) Act only clarified in law right of access matters which already happen in established practice. I wish it had gone further. There are other areas which could be looked at. When an individual insists they want all their personal data, should organisations really be under an obligation to include information the individual already has? Is the timescale too short? Could we at least not have to count bank holidays! Can the threshold for manifestly unfounded or excessive be lowered, or changed?

As it stands, I believe some but not all DSARs are too onerous for organisations to fulfil, and often provide no meaningful benefit for the individual. No one seems to win, and complaints grow. Please can something change.

Unfortunately, as the law is unlikely to be amended any time soon, either in the EU or UK, I’ll leave you with a few quick tips:

A DSAR is not a right to documentation. It’s a person’s right to receive a copy of their personal data and other supplementary information.

A request for specific information isn’t a DSAR just because it includes personal data – in fact, treating a specific request like a DSAR can be to the individual’s detriment and create an unnecessary burden on resources.

Managing expectations right from the start can help to reduce complaints. People often have a flimsy grasp of what the right actually entitles them to.

Talking with the requester can often resolve much more than relying on emails.

I’ve written more about managing employee-related requests, and do check out the ICO’s helpful employee DSAR Q&A.
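On the ‘whole calendar month’ point: the deadline is fiddlier than it sounds, which is partly why it eats people’s time. As a minimal sketch (assuming the usual ICO interpretation: the deadline is the corresponding date in the following month, or the last day of that month where no corresponding date exists – and note it deliberately does not handle the weekend/bank-holiday roll-forward, the very thing I’m grumbling about above):

```python
from datetime import date
import calendar

def dsar_deadline(received: date) -> date:
    """Sketch of the 'one calendar month' rule: the corresponding
    date in the following month, or the last day of that month if
    there is no corresponding date (e.g. 31 Jan -> 28/29 Feb).
    Does NOT roll forward past weekends or bank holidays."""
    year, month = received.year, received.month + 1
    if month > 12:
        year, month = year + 1, 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(received.day, last_day))

# A request received on 31 January 2025:
print(dsar_deadline(date(2025, 1, 31)))  # 2025-02-28
```

Even this toy version needs a special case for short months – a small illustration of why ‘simple’ statutory timescales still consume real resource.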
Understanding data protection harms

What consequences are we trying to prevent?

We hear a lot about data protection risks, but equally important is knowing what data protection harms look like. Harms are essentially the range of potential consequences for individuals, or indeed society more broadly, should data protection risks materialise. Insufficient training, weak access controls, ‘invisible’ processing and over-retention of personal data are just some examples of data protection risks which, if left unaddressed or if an incident occurs, could cause harm to individuals.

When conducting risk assessments (such as Legitimate Interests Assessments, DPIAs and AI assessments) we don’t just need to identify the risks, we also need to think about what possible outcomes and harms we are trying to prevent. And crucially, when a data breach occurs, we need to know the consequences this could have for those affected – the harm it could cause. In our experience, assessments and data breach plans don’t always clearly spell out the nature of the harm which could materialise if appropriate measures and controls are not put in place to prevent or mitigate it.

This is where the ICO’s Taxonomy of Data Protection Harms is a useful document. It includes a non-exhaustive table of harms, which would make a handy appendix to any DPIA template or Data Incident Procedure. Data protection harms aren’t always obvious. They can be nuanced, complex, overlapping or more intangible. Financial loss may be easier to gauge than psychological damage. We need to assess both the likelihood of harm and its severity. Broadly using the ICO’s taxonomy, here are some examples of data protection harms:
The Digital Omnibus: Plans to revise EU digital laws

Is Europe on a collision course between cutting red tape and protecting fundamental rights?

There’s been plenty of chatter about the European Commission’s Digital Omnibus. The leaked text has been pored over and now the official draft has been published. For some it raises concerns of weakened regulation impacting on people’s fundamental rights. For others it represents a hope the burden of compliance will be eased and innovation unleashed. The EC is very much pitching this as “innovation friendly AI rules” and an “innovation friendly privacy framework”.

I suspect the UK Government will be watching developments across the Channel closely and could find itself wishing it had been bolder with the Data (Use and Access) Act 2025 (DUAA).

What is the Digital Omnibus?

This is not a new law, nor a complete overhaul of existing legislation, but an EC proposal to streamline, align and introduce specific legislative updates to existing digital rules such as the EU AI Act, GDPR, ePrivacy Directive, Data Act and Data Governance Act. An attempt to remove duplication and inconsistencies, along with alleviating some of the burden of compliance for European organisations and others who operate within the EU.

What’s potentially on the cards?

AI Act

Key proposals include a delay in the applicable date for obligations in relation to high-risk AI systems, reducing AI literacy obligations, removing obligations for providers to register on the EU’s public database and introducing reduced penalties for small to medium sized businesses.

GDPR and ePrivacy

Key legislative adjustments which could be ushered in include the following:

Personal data

A narrower definition is proposed, whereby information would not be considered personal data for a given entity when that entity does not have ‘means reasonably likely’ to identify individuals.
This could ease the current and not inconsiderable issues and debates caused by assessing whether people can be ‘indirectly’ identifiable. The existing GDPR definition states:

‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly.

Interestingly, a similar tweak was proposed in the UK under the previous Conservative Government’s data reform plans, but wasn’t carried over into DUAA.

Special category data

The idea is data would only be classified as special category data if it ‘directly reveals’ information about an individual’s health, sex life, racial or ethnic origin, political opinions, trade union membership, or religious or philosophical beliefs. If introduced, this would mark a step change away from the current broader inference-based rule and is likely to be particularly contentious.

The following two new exemptions are also proposed to the existing prohibitions on processing special category data:

1) allowing the ‘residual processing’ of special category data for the development and operation of an AI system or AI model – subject to certain conditions.

2) permitting the processing of biometric data when necessary to confirm someone’s identity, and where the data and means of verification are under the sole control of that individual, i.e. where biometrics are on the user’s device.

Right of Access – Data Subject Access Requests

‘Abusive’ requests could be rejected, or a fee charged, if a controller considers the request is being used by someone for purposes other than the ‘protection of their personal data’. Enhanced clarification is also expected on the conditions under which a request can be deemed excessive. This recognises a growing issue of DSARs being ‘weaponised’ and used for other purposes, such as litigation. I imagine there are plenty of organisations hoping this proposal will not be ditched during negotiations.
I for one would welcome this move, and know plenty of UK organisations would benefit from a similar legislative amendment in the UK.

Personal Data Breaches

It’s proposed the requirement to report data breaches to a supervisory authority would only kick in where there was a ‘high risk’, rather than the current threshold of ‘risk’. This would align the threshold for reporting to regulators with that for notifying affected individuals. The deadline for reporting could also be extended from 72 to 96 hours.

Data Protection Impact Assessments

In a move to try and make sure there’s a consistent approach across the EU, the European Data Protection Board is expected to be tasked with creating harmonised lists of processing activities requiring a DPIA and those which would be exempt. The EDPB would also develop a common template and methodology for conducting DPIAs.

Automated decision making

We could see more freedom to rely on entirely automated decisions with legal or similarly significant effect when necessary for a contract, even if the same decision could be made manually by a human.

Cookies and similar technologies

In an attempt to alleviate the confusion and annoyance for users, as well as the cost to business, the EC is proposing to simplify the rules. The stated aim is to reduce the number of times cookie banners pop up and allow users to indicate their consent with ‘one click’, with preferences saved via their browser and operating system settings. Any processing of personal data is expected to be governed solely by GDPR – not the ePrivacy Directive. It’s also proposed that certain purposes which pose a low risk to people’s rights and freedoms will not require consent, for example when cookies and similar technologies are used for security and aggregated audience measurement.

EU legislators may find themselves looking across the pond to California’s new “Opt Me Out Act”.
From January 2027 this requires web browsers to offer a one-click opt-out which automatically tells websites not to sell or share the user’s personal information. While just one state’s law, this is expected to have a more far-reaching impact. It will be simpler for browsers to roll this feature out more widely, as they won’t know whether the organisation which runs a website is based in California or not.

AI and legitimate interests

A new provision could be introduced confirming the lawful basis of legitimate interests could be used for processing personal data to train AI models. It’s highly likely this would still be subject to a balancing test.

Privacy notices

Providing a privacy notice to an individual may no longer be necessary if a controller believes the individual already knows the organisation’s identity, the organisation’s purposes for processing, and how to contact any Data Protection Officer.

What next?

None of the above is set in stone, and all is subject to change. And for those of you who remember the years of wrangling trying to amend the ePrivacy Directive, which ultimately failed, there’s a long road of negotiation and lobbying ahead. Ultimately, will technological advances continue to streak ahead with legislators struggling to keep up?

Also see the EC Press Release and EC Digital Omnibus proposals.
10 tips to prevent email errors

It’s confession time. I recently copied the wrong person on an email. Same first name, different surname. Thankfully, it was easily resolved. But for someone in my line of work? Shameful. It’s like a chef putting ketchup on a pasta dish. Nonetheless, I decided to try my best to learn from the experience. Which got me thinking about two issues in particular:

a) Email errors are not just one of the major causes of personal data breaches, but also downright awkward even where there’s no personal data risk. They can lead to sharing commercially sensitive information, or opinions. They can breach client trust.

b) What are the best ways of reducing instances of human error?

I know I’m not alone. Other data protection folk have admitted making the occasional mistake too. A good friend of mine once accidentally sent an email to a client – not a data breach, but she did lose the client. I’ll also never forget receiving an email and finding myself reading a colleague’s rather disparaging views about my team.

Of course, there are the frequent data breaches – often small, sometimes big – caused by matters like emailing the wrong recipient, or using the CC field for multiple recipients. Yet, for many, it’s ‘just one of those things.’ Oops! Then the embarrassment fades… until next time.

So is it really enough to keep reminding people to double-check before sending? Won’t there always be times when we’re overworked, dashing to go on holiday, or distracted by personal issues? Is it good enough to rely on recall features? Probably not, when in practice they’re often completely ineffective. People will continue to make mistakes. To err is human. What else can we do?

10 email tips

Here are a few suggestions for reducing the risk.

1. Disable or restrict auto-fill

Yes, auto-fill is a handy way to quickly go through our address book and predict who we want to email. Nonetheless, it sometimes chooses the wrong person… and we don’t notice.
This is what got me. I’ve disabled this feature, and shouldn’t have had it enabled in the first place. I am now very content to spend a couple of seconds finding the correct email address.

2. Avoid email altogether

Encourage (or insist) that staff who need to share attachments, personal data or any other sensitive information use links to protected SharePoint folders/files rather than email.

3. Attachments

Use software to prevent or restrict any email containing an attachment.

4. Detect personal data

If 3 is a step too far, look at using software which can automatically detect personal data in attachments or email content and prevent it being sent – or prompt people to check they really want to send.

5. External recipients

Implement user prompts for external email recipients – ‘are you sure you want to send this externally?’

6. Multiple recipients

Use controls to alert users if they’re emailing multiple recipients using the CC field – prompting them to use BCC. Alternatively, for teams who routinely send emails using BCC, use a bulk mail solution.

7. Delay on send

How often do you spot an error just after you’ve sent an email? Setting up a delay on send for your staff gives people a chance to correct their mistakes.

8. ‘Reply to All’

Set an alert if people are about to reply to all, prompting them to check whether this is appropriate.

9. Revoke access after sending

Some more advanced email security solutions give you the ability to recall or revoke access to an email and its attachments, even after it hits the recipient’s inbox.

10. Email review

Where teams are responsible for routinely sending sensitive information by email, and there’s no alternative, have a review process so someone else checks before sending.

It’s worth checking what controls are available on your email system or looking at additional software solutions. Some of the prompts mentioned above are available using Outlook’s MailTips.
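To give a feel for what the ‘detect personal data’ idea in tip 4 involves under the hood, here’s a minimal sketch. The patterns are purely illustrative – real data loss prevention tools use far more sophisticated detection (names, NHS numbers, context, machine learning) – but the principle is the same: scan the draft, and prompt the sender if anything looks like personal data.

```python
import re

# Illustrative patterns only - a real DLP tool goes much further.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "UK phone number": re.compile(
        r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b"
    ),
}

def flag_personal_data(text: str) -> list[str]:
    """Return the kinds of possible personal data found in a draft,
    so the mail client can prompt: 'are you sure you want to send?'"""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

draft = "Hi - Jo's NI number is QQ123456C, call her on 01632 960 983."
print(flag_personal_data(draft))  # ['UK NI number', 'UK phone number']
```

The point isn’t to block sending outright, but to add the brief ‘are you sure?’ pause that catches errors when we’re rushed or distracted.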
Of course, training, continually raising awareness and clear rules all play their part. Making sure your people know how you expect them to behave is crucial. It also needs to be clear what action people should take when they’ve made a mistake. Are staff permitted to try and rectify this themselves, or does it always need to be immediately reported? The steps you expect your staff to take need to be easily understood and reinforced in training and culture. This also means supervisors should lead by example.

I’m a fan of quick reference guides supporting more detailed policies and procedures. In this case, a one-page ‘golden rules for emails’, in plain English, with the rules and clear steps for what to do when things go wrong. Laminate it, turn it into posters – do whatever works to get the message home.

Ultimately, mistakes are inevitable. What isn’t inevitable, though, is the impact mistakes have once the ‘send’ button’s been hit. Every little step taken to mitigate email errors lessens the impact when one inevitably slips through the net. Most of us, after all, recognise the occasional mistake will occur. The problem is if they happen too often, it can undermine confidence in your people, your organisation and your brand.
The Little Book of Data Protection Nuggets
When is it okay to record and transcribe meetings?

Key considerations when using AI-enabled tools

It’s increasingly common for online meetings and phone calls to be recorded and/or transcribed. A plethora of AI-enabled tools have popped up to make this very easy to do. Transcriptions can be really helpful to provide a written record, a short summary of the key points, or even to automate key actions. They’re often handy for those who can’t attend, or for people with certain disabilities. Some apps can combine words with recorded video or audio content for reference. However, while we rush to take advantage of these apps, we should be mindful of some privacy risks and be sure to have some measures and controls in place.

Unauthorised use and data leakage

Are people in your organisation going ahead with a ‘free trial’ and using recording or transcription services which have not been properly vetted or approved? This could result in poor controls over the outputs and data leakage to third parties. People need to know what they’re permitted to do, and what is not company policy. The safest bet is to go with an Enterprise version, so you can make sure there’s sufficient control and oversight of its use.

Does it turn on automatically?

Some apps are set to ‘on’ by default, so the settings may need editing to stop them automatically recording or transcribing when you don’t want them to.

Do you have permission?

It’s important to make sure everyone’s happy for the meeting to be recorded and/or transcribed. Good practice would be to let participants know in advance when there will be a recording and/or transcription made and ask them to let you know if they object. Also remind them at the start of the meeting, before you actually click ‘start’.

Is it accurate?

AI transcription tools can be extremely accurate, often better than humans. But even so, AI can still make mistakes. For example, it can misinterpret certain nuances in the human voice or behaviours, or fail to grasp the context.
This could affect the accuracy of the written output, or even its meaning. What we say isn’t always what we mean! Take different forms of humour, such as sarcasm, which might not come across well in raw text. Human oversight is key – don’t assume everything you read is 100% accurate to the words or the context.

Data minimisation and retention

Do we really need both a video recording and a transcription? Depending on the nature of meetings, this could create a significant volume of personal data, or perhaps commercially sensitive data. One of the first things we should think about is deleting anything we don’t need at the earliest opportunity.

Sharing transcripts and recordings

Have we set any restrictions on who the outputs are shared with, and in what form? We should take particular care to prevent unauthorised disclosure of sensitive information – whether of a personal, confidential or commercial nature.

Sensitive meetings

Just because a meeting is of a sensitive nature doesn’t necessarily mean it can’t be recorded or transcribed. We know of circumstances where both parties have been in agreement on this, for example in grievance proceedings. However, in such cases all the other points above become even more important – is it an approved app? Is the output accurate? Who should have access to it? And so on.

Can we handle privacy rights requests?

If recording and transcription tools are not set up and managed well, they may cause an unwelcome headache further down the line. Recordings and transcriptions may all be in scope if you receive a DSAR or erasure request. It’s therefore good to nail down how long materials will be kept and where they will be saved, and to make sure they are searchable.

5 Quick Tips

1. DPIA: Depending on your planned use and how sensitive the personal data captured is likely to be, consider if a DPIA is required (or advisable).

2. Internal policy / guidelines for usage: Set guidelines on when and how recording and transcription services should and should not be used. Include expected standards, such as telling people in advance, giving them an opportunity to object, and rules on sharing, deletion, etc.

3. Access controls: Update your access controls to make sure only authorised individuals can access recordings and transcriptions.

4. Retention: Update your data retention policy/schedule to confirm retention periods. Clearly there may be exceptions to the rule, if there is information which needs to be kept longer.

5. DSARs: Update your DSAR procedure to reflect that personal data captured in recordings and transcriptions may be within scope.
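Retention periods in tip 4 only help if something actually enforces them. As a minimal sketch of an age-based sweep (assuming recordings land in a folder you control; in practice you’d use the platform’s own retention settings, and you’d check for exceptions such as a legal hold before deleting anything):

```python
import time
from pathlib import Path

RETENTION_DAYS = 30  # illustrative period - use your own retention schedule

def sweep_recordings(folder: str, retention_days: int = RETENTION_DAYS) -> list[str]:
    """Delete recording/transcript files older than the retention
    period and return the names of the files removed.
    No legal-hold check here - a real process needs one."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in Path(folder).glob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

Running something like this on a schedule turns the retention policy from a document into a habit – and means there’s less lurking in scope when a DSAR or erasure request arrives.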