Is YOUR data training THEIR AI?

In our personal and working lives we’re under a barrage of notifications inviting us to use new AI functionality. Sometimes it’s not even a feature we actively turn on; it’s on by default and we need to take steps to disable it or opt out. The problem is we often don’t know how this AI actually works, what it might do, what data it uses, or what data is used to train it.

Recently LinkedIn announced it has started sharing user-generated content for LLM training. You may be happy with this, but if you aren’t, you need to actively go into your data privacy settings and switch it off.

More broadly, many organisations are being encouraged to take advantage of shiny new AI capabilities offered by their existing software providers. HR, Finance, IT and CRM software can seemingly do so much more if the latest AI tool is enabled. It’s very tempting to give it a try. And I suspect many data protection teams are struggling to keep up with parts of the business which have drifted into using AI tools without much deliberation.

We need to be aware that using AI for seemingly innocuous purposes can have unexpected consequences. We’ve written about the risks to consider when using AI to transcribe or record meetings. Did you know there’s a feature in MS Teams which can automatically detect when an employee is connected to the company wi-fi and update their location to ‘in the office’? This may seem like a simple and useful feature to switch on. But in essence this is a form of workplace tracking, and it raises some important questions, not least whether it is proportionate and lawful.

Even with our existing suppliers, we’d be wise to conduct some due diligence. AI functionality isn’t always a straightforward extension of an existing service. We should assess the benefits and risks, be clear about our objectives and whether we are a controller or joint controller, and make sure our activities are lawful, fair and transparent.
We should be sure our data is still being processed by the same party and in the same country, and understand whether our data will be used to train the software provider’s models. Where data is anonymised or aggregated, we should be confident this is genuinely effective in mitigating risk. It may feel daunting, but we should try to have some level of understanding of how a third-party AI system works. Ultimately, we’re responsible for complying with data protection law for any personal data we allow to be used in or by an AI system.

Recently I was reviewing the AI usage of a client’s existing software provider. They were ambiguous about the use of the client’s personal data to train their own models. It became clear processing was no longer taking place in Ireland, but in the United States and India. I’ve seen other AI software where it transpired the tool wasn’t developed by the software provider themselves; they were using AI provided by a third party, with whom the data is shared. Which made me wonder: is that AI provider using the data for its own purposes, such as AI training?

Ideally we should be asking AI providers, whether they are new or existing suppliers, to work with us to conduct a Data Protection Impact Assessment. If they’re reluctant to help, or unable to answer key questions, this should raise concerns.

I’m not saying all AI tools are inherently a bad thing. There are many benefits to be gained! Just do some digging, and keep your eyes open.

How to govern your organisation’s use of AI
The Digital Omnibus: Plans to revise EU digital laws

Is Europe on a collision course between cutting red tape and protecting fundamental rights? There’s been plenty of chatter about the European Commission’s Digital Omnibus. The leaked text has been pored over and now the official draft has been published. For some it raises concerns that weakened regulation will impact people’s fundamental rights. For others it represents hope that the burden of compliance will be eased and innovation unleashed. The EC is very much pitching this as “innovation friendly AI rules” and an “innovation friendly privacy framework”. I suspect the UK Government will be watching developments across the Channel closely and could find itself wishing it had been bolder with the Data (Use and Access) Act 2025 (DUAA).

What is the Digital Omnibus?

This is not a new law, nor a complete overhaul of existing legislation, but an EC proposal to streamline, align and introduce specific legislative updates to existing digital rules such as the EU AI Act, GDPR, ePrivacy Directive, Data Act and Data Governance Act. It’s an attempt to remove duplication and inconsistencies, and to alleviate some of the burden of compliance for European organisations and others who operate within the EU.

What’s potentially on the cards?

AI Act

Key proposals include a delay to the applicable date for obligations relating to high-risk AI systems, reduced AI literacy obligations, removal of the obligation for providers to register on the EU’s public database, and reduced penalties for small and medium-sized businesses.

GDPR and ePrivacy

Key legislative adjustments which could be ushered in include the following:

Personal data

A narrower definition is proposed, whereby information would not be considered personal data for a given entity when that entity does not have ‘means reasonably likely’ to identify individuals.
This could ease the current, and not inconsiderable, issues and debates caused by assessing whether people can be ‘indirectly’ identifiable. The existing GDPR definition states: ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly. Interestingly, a similar tweak was proposed in the UK under the previous Conservative Government’s data reform plans, but wasn’t carried over into DUAA.

Special category data

The idea is that data would only be classified as special category data if it ‘directly reveals’ information about an individual’s health, sex life, racial or ethnic origin, political opinions, trade union membership, or religious or philosophical beliefs. If introduced, this would mark a step change away from the current broader inference-based rule and is likely to be particularly contentious. The following two new exemptions to the existing prohibitions on processing special category data are also proposed:

1) allowing the ‘residual processing’ of special category data for the development and operation of an AI system or AI model, subject to certain conditions.

2) permitting the processing of biometric data when necessary to confirm someone’s identity, and where the data and means of verification are under the sole control of that individual, i.e. where the biometrics are held on the user’s device.

Right of Access – Data Subject Access Requests

‘Abusive’ requests could be rejected, or a fee charged, if a controller considers a request is being used for purposes other than the ‘protection of personal data’. Further clarification is also expected on the conditions under which a request can be deemed excessive. This recognises a growing issue of DSARs being ‘weaponised’ and used for other purposes, such as litigation. I imagine there are plenty of organisations hoping this proposal will not be ditched during negotiations.
I for one would welcome this move, and know plenty of UK organisations would benefit from a similar legislative amendment in the UK.

Personal Data Breaches

It’s proposed the requirement to report data breaches to a supervisory authority would only kick in where there is a ‘high risk’, rather than the current threshold of ‘risk’. This would align the threshold for reporting to regulators with that for notifying affected individuals. The deadline for reporting could also be extended from 72 to 96 hours.

Data Protection Impact Assessments

In a move to ensure a consistent approach across the EU, the European Data Protection Board is expected to be tasked with creating harmonised lists of processing activities requiring a DPIA and those which would be exempt. The EDPB would also develop a common template and methodology for conducting DPIAs.

Automated decision making

We could see more freedom to rely on entirely automated decisions with legal or similarly significant effect when necessary for a contract, even if the same decision could be made manually by a human.

Cookies and similar technologies

In an attempt to alleviate the confusion and annoyance for users, as well as the cost to business, the EC is proposing to simplify the rules. The stated aim is to reduce the number of times cookie banners pop up and to allow users to indicate their consent with ‘one click’, with preferences saved via their browser and operating system settings. Any processing of personal data would be governed solely by GDPR, not the ePrivacy Directive. It’s also proposed that certain purposes posing a low risk to people’s rights and freedoms would not require consent, for example where cookies and similar technologies are used for security or aggregated audience measurement. EU legislators may find themselves looking across the pond to California’s new “Opt Me Out Act”.
From January 2027 this requires web browsers to offer a one-click opt-out which automatically tells websites not to sell or share the user’s personal information. While just one state’s law, it is expected to have a more far-reaching impact: it will be simpler for browsers to roll the feature out more widely, as they won’t know whether the organisation running a website is based in California or not.

AI and legitimate interests

A new provision could be introduced confirming that legitimate interests can be relied on as a lawful basis for processing personal data to train AI models. It’s highly likely this would still be subject to a balancing test.

Privacy notices

Providing a privacy notice to an individual may no longer be necessary if a controller believes the individual already knows the organisation’s identity, its purposes for processing, and how to contact any Data Protection Officer.

What next?

None of the above is set in stone, and all is subject to change. For those of you who remember the years of wrangling over attempts to amend the ePrivacy Directive, which ultimately failed, there’s a long road of negotiation and lobbying ahead. Ultimately, will technological advances continue to streak ahead while legislators struggle to keep up?

Also see EC Press Release and EC Digital Omnibus proposals