The Digital Omnibus: Plans to revise EU digital laws

Is Europe on a collision course between cutting red tape and protecting fundamental rights? There’s been plenty of chatter about the European Commission’s Digital Omnibus. The leaked text has been pored over and now the official draft has been published. For some it raises concerns of weakened regulation impacting on people’s fundamental rights. For others it represents a hope the burden of compliance will be eased and innovation unleashed. The EC is very much pitching this as “innovation friendly AI rules” and an “innovation friendly privacy framework”.

I suspect the UK Government will be watching developments across the Channel closely and could find itself wishing it had been bolder with the Data (Use and Access) Act 2025 (DUAA).

What is the Digital Omnibus?

This is not a new law, nor a complete overhaul of existing legislation, but an EC proposal to streamline, align and introduce specific legislative updates to existing digital rules such as the EU AI Act, GDPR, ePrivacy Directive, Data Act and Data Governance Act. It’s an attempt to remove duplication and inconsistencies, along with alleviating some of the burden of compliance for European organisations and others who operate within the EU.

What’s potentially on the cards?

AI Act

Key proposals include a delay to the applicable date for obligations relating to high-risk AI systems, reduced AI literacy obligations, the removal of the obligation for providers to register on the EU’s public database, and reduced penalties for small and medium-sized businesses.

GDPR and ePrivacy

Key legislative adjustments which could be ushered in include the following:

Personal data

A narrower definition is proposed, whereby information would not be considered personal data for a given entity when that entity does not have ‘means reasonably likely’ to identify individuals.
This could ease the current and not inconsiderable issues and debates caused by assessing whether people can be ‘indirectly’ identifiable. The existing GDPR definition states: ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly. Interestingly, a similar tweak was proposed in the UK under the previous Conservative Government’s data reform plans, but wasn’t carried over into DUAA.

Special category data

The idea is data would only be classified as special category data if it ‘directly reveals’ information about an individual’s health, sex life, racial or ethnic origin, political opinions, trade union membership, or religious or philosophical beliefs. If introduced, this would mark a step change away from the current broader inference-based rule and is likely to be particularly contentious.

The following two new exemptions are also proposed to the existing prohibitions on processing special category data:

1) allowing for the ‘residual processing’ of special category data for the development and operation of an AI system or AI model – subject to certain conditions.

2) permitting the processing of biometric data when necessary to confirm someone’s identity, and where the data and means of verification are under the sole control of that individual, i.e. where biometrics are on the user’s device.

Right of Access – Data Subject Access Requests

‘Abusive’ requests could be rejected or a fee charged, if a controller considers the request is being used for purposes other than the ‘protection of their personal data’. Enhanced clarification is also expected on the conditions under which a request can be deemed excessive. This recognises a growing issue of DSARs being ‘weaponised’ and used for other purposes, such as litigation. I imagine there are plenty of organisations hoping this proposal will not be ditched during negotiations.
I for one would welcome this move and know plenty of UK organisations would benefit from a similar legislative amendment in the UK.

Personal Data Breaches

It’s proposed the requirement to report data breaches to a supervisory authority would only kick in where there was a ‘high risk’, rather than the current threshold of ‘risk’. This would align the threshold for reporting to regulators with that for notifying affected individuals. The deadline for reporting could also be extended from 72 to 96 hours.

Data Protection Impact Assessments

In a move to try and make sure there’s a consistent approach across the EU, the European Data Protection Board is expected to be tasked with creating harmonised lists of processing activities requiring a DPIA and those which would be exempt. The EDPB would also develop a common template and methodology for conducting DPIAs.

Automated decision making

We could see more freedom to rely on entirely automated decisions with legal or similarly significant effect when necessary for a contract, even if the same decision could be made manually by a human.

Cookies and similar technologies

In an attempt to alleviate the confusion and annoyance for users, as well as the cost to business, the EC is proposing to simplify the rules. The stated aim is to reduce the number of times cookie banners pop up and allow users to indicate their consent with ‘one click’, with preferences saved via their browser and operating system settings. Any processing of personal data is expected to be governed solely by GDPR – not the ePrivacy Directive. It’s also proposed certain purposes which pose a low risk to people’s rights and freedoms will not require consent, for example when cookies and similar technologies are used for security and aggregated audience measurement. EU legislators may find themselves looking across the pond to California’s new “Opt Me Out Act”.
From January 2027 this requires web browsers to offer a one-click opt-out which automatically tells websites not to sell or share the user’s personal information. While just one state’s law, this is expected to have a more far-reaching impact. It will be simpler for browsers to roll this feature out more widely, as they won’t know whether the organisation which runs a website is based in California or not.

AI and legitimate interests

A new provision could be introduced confirming the lawful basis of legitimate interests can be used for processing personal data to train AI models. It’s highly likely this would still be subject to a balancing test.

Privacy notices

Providing a privacy notice to an individual may no longer be necessary if a controller believes the individual already knows the organisation’s identity, the organisation’s purposes for processing and how to contact any Data Protection Officer.

What next?

None of the above is set in stone, and all is subject to change. And for those of you who remember the years of wrangling trying to amend the ePrivacy Directive, which ultimately failed, there’s a long road of negotiation and lobbying ahead. Ultimately, will technological advances continue to streak ahead with legislators struggling to keep up?

Also see the EC Press Release and the EC Digital Omnibus proposals.

10 tips to prevent email errors

It’s confession time. I recently copied the wrong person on an email. Same first name, different surname. Thankfully, it was easily resolved. But for someone in my line of work? Shameful. It’s like a chef putting ketchup on a pasta dish. Nonetheless, I decided to try my best to learn from the experience. Which got me thinking about two issues in particular:

a) Email errors are not just one of the major causes of personal data breaches, but also downright awkward even where there’s no personal data risk. They can lead to sharing commercially sensitive information, or opinions. They can breach client trust.

b) What are the best ways of reducing instances of human error?

I know I’m not alone. Other data protection folk have admitted making the occasional mistake too. A good friend of mine once accidentally sent an email to a client – not a data breach, but she did lose the client. I’ll also never forget receiving an email and finding myself reading a colleague’s rather disparaging views about my team. Of course, there are the frequent data breaches – often small, sometimes big – caused by matters like emailing the wrong recipient, or using the CC field for multiple recipients. Yet, for many, it’s ‘just one of those things.’ Oops! Then the embarrassment fades… until next time.

So is it really enough to keep reminding people to double-check before sending? Won’t there always be times when we’re overworked, dashing off on holiday, or distracted by personal issues? Is it good enough to rely on recall features? Probably not, when in practice they’re often completely ineffective. People will continue to make mistakes. To err is human. What else can we do?

10 email tips

Here are a few suggestions for reducing the risk.

1. Disable or restrict auto-fill

Yes, auto-fill is a handy way to quickly go through our address book and predict who we want to email. Nonetheless, it sometimes chooses the wrong person… and we don’t notice.
This is what got me. I’ve disabled this feature, and shouldn’t have had it enabled in the first place. I am now very content to spend a couple of seconds finding the correct email address.

2. Avoid email altogether

Encourage (or insist) that staff who need to share attachments, personal data or any other sensitive information use links to protected SharePoint folders/files rather than email.

3. Attachments

Use software to prevent or restrict the sending of emails containing attachments.

4. Detect personal data

If 3 is a step too far, look at software which can automatically detect personal data in attachments or email content and prevent it being sent – or prompt people to check they really want to send.

5. External recipients

Implement user prompts for external email recipients – ‘are you sure you want to send this externally?’

6. Multiple recipients

Use controls to alert users if they’re emailing multiple recipients using the CC field – prompting them to use BCC. Alternatively, for teams who routinely send emails using BCC, use a bulk mail solution.

7. Delay on send

How often do you spot an error just after you’ve sent an email? Setting up a delay on send gives people a chance to correct their mistakes.

8. ‘Reply to All’

Set an alert if people are about to reply to all, prompting them to check whether this is appropriate.

9. Revoke access after sending

Some more advanced email security solutions give you the ability to recall or revoke access to an email and its attachments, even after it hits the recipient’s inbox.

10. Email review

Where teams are responsible for routinely sending sensitive information by email, and there’s no alternative, have a review process so someone else checks before sending.

It’s worth checking what controls are available on your email system or looking at additional software solutions. Some of the prompts mentioned above are available using Outlook’s MailTips.
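Some of the prompts above – external recipients and CC overuse in particular – can be approximated with a simple pre-send check. Here’s a minimal sketch in Python, where the internal domain, the threshold and the function name are all illustrative assumptions rather than any particular product’s behaviour:

```python
# Minimal pre-send checks for an outgoing email: flag external recipients
# and large CC lists. The domain and threshold below are illustrative.
INTERNAL_DOMAIN = "example.co.uk"  # hypothetical internal domain
CC_THRESHOLD = 5                   # suggest BCC at or above this many CC recipients

def presend_warnings(to, cc):
    """Return a list of warnings to show the sender before the email goes out."""
    warnings = []
    external = [a for a in to + cc
                if not a.lower().endswith("@" + INTERNAL_DOMAIN)]
    if external:
        warnings.append(
            f"External recipients: {', '.join(external)} - are you sure?")
    if len(cc) >= CC_THRESHOLD:
        warnings.append(
            f"{len(cc)} addresses in CC - consider BCC to avoid disclosing them.")
    return warnings
```

In practice these warnings would surface as a dialog or banner before sending; the point is that both checks are simple enough to sit in any send pipeline.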
Of course training, continually raising awareness and clear rules all play their part. Making sure your people know how you expect them to behave is crucial. It also needs to be clear what action people should take when they’ve made a mistake. Are staff permitted to try and rectify this themselves, or does it always need to be immediately reported? The steps you expect your staff to take need to be easily understood and reinforced in training and culture. This also means supervisors should lead by example.

I’m a fan of quick reference guides supporting more detailed policies and procedures. In this case, a ‘golden rules for emails’ on one page, in plain English, with the rules and clear steps for what to do when things go wrong. Laminate it, turn it into posters – do whatever works to drive the message home.

Ultimately, mistakes are inevitable. What isn’t inevitable, though, is the impact mistakes have once the ‘send’ button’s been hit. Every little step taken to mitigate email errors lessens the impact when one inevitably slips through the net. Most of us, after all, recognise the occasional mistake will occur. The problem is if they happen too often, it can undermine confidence in your people, your organisation and your brand.

The Little Book of Data Protection Nuggets

When is it okay to record and transcribe meetings? Key considerations when using AI-enabled tools

It’s increasingly common for online meetings and phone calls to be recorded and/or transcribed. A plethora of AI-enabled tools have popped up to make this very easy to do. Transcriptions can be really helpful to provide a written record, a short summary of the key points, or even to automate key actions. They’re often handy for those who can’t attend or for people with certain disabilities. Some apps can combine words with recorded video or audio content for reference. However, while we rush to take advantage of these apps, we should be mindful of some privacy risks and be sure to have some measures and controls in place.

Unauthorised use and data leakage

Are people in your organisation going ahead with a ‘free trial’ and using recording or transcription services which have not been properly vetted or approved? This could result in poor controls on the outputs and data leakage to third parties. People need to know what they’re permitted to do, and what is not company policy. The safest bet is to go with an Enterprise version, so you can make sure there’s sufficient control and oversight of its use.

Does it turn on automatically?

Some apps are set to ‘on’ by default, so the settings may need editing to stop them automatically recording or transcribing when you don’t want them to.

Do you have permission?

It’s important to make sure everyone’s happy for the meeting to be recorded and/or transcribed. Good practice would be to let participants know in advance when there will be a recording and/or transcription made and ask them to let you know if they object. Also remind them at the start of the meeting, before you actually click ‘start’.

Is it accurate?

AI transcription tools can be extremely accurate, often better than humans. But even so, AI can still make mistakes. For example, AI can misinterpret certain nuances in the human voice or behaviours, or fail to grasp the context.
This could affect the accuracy of the written output, or even its meaning. What we say isn’t always what we mean! Take different forms of humour, such as sarcasm, which might not come across well in raw text. Human oversight is key – don’t assume everything you read is 100% accurate to the words or the context.

Data minimisation and retention

Do we really need both a video recording and a transcription? Depending on the nature of meetings, this could create a significant volume of personal data, or perhaps commercially sensitive data. One of the first things we should think about is deleting anything we don’t need at the earliest opportunity.

Sharing transcripts and recordings

Have we set any restrictions on who the outputs are shared with and in what form? We should take particular care to prevent unauthorised disclosure of sensitive information – whether of a personal, confidential or commercial nature.

Sensitive meetings

Just because a meeting is of a sensitive nature doesn’t necessarily mean it can’t be recorded or transcribed. We know of circumstances where both parties have been in agreement on this, for example in grievance proceedings meetings. However, in such cases all the other points above can become even more important – is it an approved app? Is the output accurate? Who should have access to it? And so on.

Can we handle privacy rights requests?

If recording and transcription tools are not set up and managed well, they may cause an unwelcome headache further down the line. Recordings and transcriptions may all be in scope if you receive a DSAR or erasure request. It’s therefore good to nail down how long materials will be kept, where they will be saved, and to make sure they are searchable.

5 Quick Tips

1. DPIA: Depending on your planned use and how sensitive the personal data captured is likely to be, consider if a DPIA is required (or advisable).

2. Internal policy / guidelines for usage: Set guidelines on when and how recording and transcription services should and should not be used. Include expected standards such as telling people in advance, giving them an opportunity to object, rules on sharing, deletion, etc.

3. Access controls: Update your access controls to make sure only authorised individuals can access recordings and transcriptions.

4. Retention: Update your data retention policy/schedule to confirm retention periods. Clearly there may be exceptions to the rule, if there is information which needs to be kept longer.

5. DSARs: Update your DSAR procedure to reflect that personal data captured in recordings and transcriptions may be within scope.
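Part of the retention tip can be automated by periodically flagging recordings and transcripts which have outlived their retention period. Below is a minimal sketch, assuming files sit in a local folder and a single blanket 90-day period – both illustrative assumptions; real schedules will have exceptions, and deletion should remain a reviewed step rather than an automatic one:

```python
# Sketch: flag meeting recordings/transcripts older than a retention period.
# The folder layout and the 90-day period are illustrative assumptions;
# real schedules need exceptions (e.g. material under legal hold).
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=90)  # hypothetical blanket retention period

def overdue_files(folder, now=None):
    """Return files whose last-modified time is past the retention period."""
    now = now or datetime.now(timezone.utc)
    overdue = []
    for f in folder.glob("**/*"):
        if not f.is_file():
            continue
        modified = datetime.fromtimestamp(f.stat().st_mtime, tz=timezone.utc)
        if now - modified > RETENTION:
            # Candidate for human review and deletion - not auto-deleted here.
            overdue.append(f)
    return overdue
```

Running something like this on a schedule turns ‘delete what we don’t need’ from a good intention into a routine review list.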

ICO fines charity for destroying personal data

We often talk about the risks of holding onto personal data for too long. The need to make sure data is destroyed when it’s no longer required, and how the impact of a data breach could be far worse if it involves personal records which shouldn’t have been kept. But now we have a case where it’s the destruction of records which caused a data breach.

The Scottish charity Birthlink has been fined £18,000 by the ICO for destroying approximately 4,800 records, some of which were irreplaceable photographs and letters. The findings make for sobering reading. A catalogue of errors: lack of accountability, lack of policies and procedures, no appropriate data protection training and a failure to report a data breach for more than two years.

Who are Birthlink and what do they do?

Birthlink has maintained the Adoption Contact Register for Scotland since 1984. This is a service for adopted people or their relatives, and for birth parents or their relatives. It enables people to register their details with the hope of being ‘linked’ and potentially reunited. Where a link is made, records are classified as “Linked Records”, and the personal data contained within such records can include sensitive documents such as:

■ Original birth certificates
■ Adoption Contact Register application form
■ Correspondence between Birthlink and service users
■ Other information relevant to the adoption
■ Irreplaceable items (e.g. handwritten letters from birth parents and birth families, photographs and other sensitive personal information)

These are physical documents relating to adopted people’s individual circumstances, which the charity held in filing cabinets.

What went wrong?

In January 2021 Birthlink was running out of space in the filing cabinets the Linked Records were stored in, so assessed whether they could destroy them.
After a board meeting it was agreed there were no barriers to the destruction of the records, that retention periods should apply and that only replaceable records should be destroyed. However, it’s evident from the enforcement notice this was very badly managed. Due to poor records management, bags of paperwork were destroyed without a full understanding of what the documents entailed. To make matters worse, despite concerns being raised at the time about shredding people’s photographs and letters, the destruction continued.

More than two years later, following an inspection by the Care Inspectorate, the Board became aware irreplaceable items had in fact been destroyed. It was only then the data breach was reported to the ICO. And the woeful tale continues. Poor record keeping means not only will the extent of what was destroyed never be fully known, but Birthlink has also been left unable to identify the people affected by the breach.

Key findings

Routinely in an article like this I’d write a bit about the key findings, but in this case I think they speak for themselves. You’ll not be surprised to learn Birthlink says there was limited knowledge of their data protection obligations at the time this breach took place.

Sally Anne Poole, Head of Investigations at the ICO, said: “It is inconceivable to think, due to the very nature of its work, that Birthlink had such a poor understanding of both its data protection responsibilities and records management process. We do however welcome the improvements the charity has subsequently put in place, not least by appointing a data protection officer to monitor compliance and raise awareness of data protection throughout the organisation.
“Whilst we acknowledge the important work charities do, they are not above the law and by issuing and publicising this proportionate fine we aim to promote compliance, remind all organisations of the requirement to take data protection seriously and ultimately deter them from making similar mistakes.”

Key learnings

It’s all too easy to see the mistakes here, and easy to pour scorn on Birthlink. However, all organisations will recognise taking a robust approach to data retention can be challenging to deliver in practice. Many organisations face a careful balance between destroying personal data they have no justification for holding on to, and making sure they continue to retain records they still need to keep. Robust records management procedures, secure storage and archiving, clear data retention periods, and clear authorisation when the time comes for destruction are crucial – especially when handling sensitive information.

Sometimes a specific law tells us how long certain records should be kept, or personal data needs to be retained to meet contractual obligations. Often we need to consider people’s reasonable expectations – would they expect us to still be holding on to their personal details or not? In the case of Birthlink, the answer was almost undoubtedly yes: people would have expected irreplaceable records to be retained, or perhaps returned to them, rather than destroyed.

I can’t stress enough that effectively tackling data retention needs shared ownership – clear accountability, with assigned roles and responsibilities across the organisation. Good data governance is the key. If this has given you an unwelcome nudge to revisit your approach to retention, see our 3 Steps to decide your data retention periods and our detailed Data Retention Guide.
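Once retention periods have been decided, the logic of applying a schedule is straightforward to express. A minimal sketch follows, where the record types and periods are purely illustrative assumptions, not recommendations – and where ‘retain indefinitely’ is the safe default for anything irreplaceable:

```python
# Sketch: work out whether a record is past its retention period.
# Record types and periods below are illustrative assumptions only.
from datetime import date, timedelta

RETENTION_SCHEDULE = {                    # record type -> retention from closure
    "recruitment": timedelta(days=180),
    "hr_file": timedelta(days=6 * 365),
    "linked_record": None,                # None = retain indefinitely
}

def destruction_due(record_type, closed_on, today):
    """True if the record's retention period has expired and destruction is due."""
    period = RETENTION_SCHEDULE[record_type]
    if period is None:
        # Never destroy without explicit, documented authorisation.
        return False
    return today > closed_on + period
```

The point of encoding the schedule this way is that destruction decisions become checkable and auditable, rather than ad hoc judgements made when the filing cabinets fill up.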

DUA Act and Legitimate Interests

The Data (Use and Access) Act (DUAA) introduces changes to the concept of legitimate interests under UK GDPR. Once the provisions take effect there will be a seventh lawful basis of recognised legitimate interests, and legal clarity on activities which may be considered a legitimate interest.

Recognised Legitimate Interests

The DUAA amends Article 6 of UK GDPR to expand the six lawful bases for processing to seven, adding recognised legitimate interests. While a necessity test will still be required, for the following recognised legitimate interests there will no longer be a requirement for an additional balancing test (Legitimate Interests Assessment):

■ Disclosures to public bodies, or bodies carrying out public tasks, where the requesting body has confirmed it needs the information to carry out its public task. This means private and third sector organisations which work in partnership with public bodies will just need confirmation the public body needs the information to carry out its public task. This is likely to give more confidence to organisations (such as housing associations and charities) when sharing information with public sector partners. Data Sharing Agreements, Records of Processing Activities (RoPAs) and privacy notices may need to be updated to reference recognised legitimate interests as the lawful basis where appropriate. Staff training may also need updating.

■ Safeguarding vulnerable individuals – this allows for the use of personal data for safeguarding purposes. There are also definitions given for the public interest condition of “safeguarding vulnerable individuals”, which the ICO has written more about here.

■ Crime – this allows use of personal information where necessary for the purposes of detecting, investigating or preventing a crime, or apprehending or prosecuting offenders.
■ National security, public security and defence – this allows use of personal information where necessary for the purposes of safeguarding national security, protecting public security or defence.

■ Emergencies – this allows use of personal information where necessary when responding to an emergency. An emergency is defined by the Civil Contingencies Act 2004 and means an event or situation which threatens serious damage to human welfare or the environment, or war or terrorism which threatens serious damage to the security of the UK.

The ICO is planning to publish guidance on recognised legitimate interests over Winter 2025/26. For a timeline of when we can anticipate other DUAA-related guidance from the ICO, see DUAA – Next Steps.

Types of processing that may be considered a legitimate interest

There are some examples of activities which may be considered a legitimate interest in the recitals of UK GDPR. As such, they provided an interpretation of the law but were not legally binding. The DUAA moves the following examples of legitimate interests from the recitals into the body of the law:

■ direct marketing
■ intra-group sharing of data for internal administrative purposes, and
■ processing to ensure network and information security.

This may give organisations more confidence when relying on the lawful basis of legitimate interests; however, unlike recognised legitimate interests, the above will still be subject to a Legitimate Interests Assessment. The core rules under the Privacy & Electronic Communications Regulations (PECR) are not changing – unless you’re a charity wishing to benefit from the ‘soft opt-in’. For direct marketing, legitimate interests will still only be an option for activities which don’t require specific and informed consent under PECR. An update to both the ICO’s Legitimate Interests Guidance and PECR guidance is expected in Winter 2025/26.

Why Data Protection Officer isn’t just a title

How misunderstanding lingers about DPOs

When GDPR came into force more than seven years ago, it made it mandatory for certain organisations – certainly not all – to appoint a Data Protection Officer (DPO). As a result there are more than 500,000 organisations with Data Protection Officers registered across Europe, according to IAPP research. But even after so long, a good deal of confusion remains about which organisations need to appoint a DPO, and what the role actually entails. The DPO isn’t just a title you can dish out to whoever you choose.

When a DPO is mandatory

The law tells us you must appoint a DPO if you’re a Controller or a Processor and any of the following apply:

■ you’re a public authority or body (except for courts acting in their judicial capacity); or
■ your core activities require large-scale, regular and systematic monitoring of individuals (for example, online behaviour tracking); or
■ your core activities consist of large-scale processing of special categories of data or data relating to criminal convictions and offences.

This raises questions about what’s meant by ‘large-scale’ and what happens if your organisation falls within the criteria above but fails to appoint a DPO. When it comes to interpreting ‘large-scale’ activities, the European Data Protection Board Guidelines on Data Protection Officers provide some useful examples. Despite the previous Conservative government’s data reform proposals including the removal of the DPO role, I should stress these requirements remain unchanged under the Data (Use and Access) Act.

What to do if it’s not mandatory to appoint a DPO

Many small to medium-sized organisations won’t fall within the set criteria for mandatory appointment of a DPO. For many organisations, their processing is neither ‘large scale’ nor particularly sensitive in nature.
The ICO tells us all organisations need to have ‘sufficient staff and resources to meet the organisation’s obligations under the UK GDPR’. So, if you assess you don’t fall under the mandatory requirement, you have a choice:

■ voluntarily appoint a DPO, or
■ appoint an individual or team to be responsible for overseeing data protection.

You can take a proportionate approach, based on the size of your organisation and the nature of the personal data you handle.

The DPO’s position

Many organisations don’t realise the law sets out the DPO’s position and their specific responsibilities. If you have a DPO, their responsibilities are not optional or up for debate. The law tells us DPOs must:

■ report directly to the highest level of management
■ be an expert in data protection
■ be involved, in a timely manner, in all issues relating to data protection
■ be given sufficient resources to be able to perform their tasks
■ be given the independence and autonomy to perform their tasks

It’s worth stressing appointing a DPO places a duty on the organisation itself (particularly senior management) to support the DPO in fulfilling their responsibilities. As you can see above, this includes providing resources, and enabling independence and autonomy.

Not just anybody can be your DPO. While they can be an internal or external appointment, and one person can represent several different organisations, steps should be taken to make sure there are no conflicts of interest. A CEO or a Head of Marketing acting as DPO are obvious examples of where a conflict could easily arise. The law sets out that the DPO must perform their role in an independent manner. Their organisation shouldn’t influence which projects they should be involved in, nor interfere with how they execute their role. A DPO therefore needs to be someone of character and resilience who can stand their ground, even in the face of potential conflict.
When it comes to being an ‘expert’, there’s a judgement call to make, as the law doesn’t specify particular credentials or qualifications. The level of experience and specialist skills can be proportionate to the type of organisation and the nature of the processing.

The tasks a DPO should perform

The formal set of tasks a DPO is required to perform is as follows:

■ Inform and advise the organisation and its employees about their obligations under GDPR and other data protection laws. This includes laws in other jurisdictions which are relevant to the organisation’s operations. It’s worth noting the DPO is an advisory role, i.e. they advise the organisation and its people. Their role is not to make decisions on the processing activities. There should be a clear separation between advisor and decision-maker roles. The organisation doesn’t need to accept the advice of their DPO, but the DPO would be wise to document when their advice is ignored. In many smaller organisations people may undoubtedly be spinning multiple plates and will need to do some (or plenty) of the ‘doing’ work.

■ Monitor the organisation’s compliance with GDPR and other data protection laws. This includes ensuring suitable data protection policies are in place, training staff (or overseeing this), managing data protection activities, conducting internal reviews & audits, and raising awareness of data protection issues & concerns so they can be tackled effectively. This doesn’t mean a DPO has to write every data protection related policy, or stand up and deliver training.

■ Advise on, and monitor, data protection impact assessments (DPIAs).

■ Be the first point of contact for individuals in relation to data protection and for liaison with the ICO.

A DPO must also be easily accessible for individuals, employees and the ICO. Their contact details should be published, e.g. in your privacy notice (this doesn’t have to include their name), and the ICO should be informed you’ve appointed a DPO.
A DPO shouldn’t be penalised for carrying out their duties. The ICO points out a DPO’s tasks cover all the organisation’s processing activities, not just those which required a DPO to be appointed – such as ‘large scale processing of special category data’. However, the ICO accepts a DPO should prioritise and focus on the riskier activities. See the ICO Data Protection Officer Guidance.

We’d always advise making sure a DPO’s responsibilities are clearly set out in a job description, to save any debate about the role. It’s also helpful to make sure the management team and key stakeholders are briefed on the DPO’s legal role. What’s clear is being a DPO requires many qualities, and a broad skill set, which we’ve written more about here: What does it take to do the job?

AI Risk, Governance and Regulation

The Artificial Intelligence landscape’s beginning to remind me of a place Indiana Jones might search for hidden treasure. The rewards are near-magical, but the path is littered with traps. Although, in the digital temple of ‘The New AI’, he’s not going to fall into a pit of snakes or be squished by a huge stone ball. No, Indy is more likely to face other traps. Leaking sensitive information. Litigation. Loss of adventuring advantage to competing explorers. A new, looming regulatory environment, one even Governments have yet to determine. And the huge stone ball? That will be when the power of the Lost AI goes awry, feeding us with incorrect information, biased outcomes and AI hallucinations.

Yes, regulation is important in such a fast-moving international arena. So is nimble decision-making, as even the European Commission considers pausing its AI Act. Nobody wants to be left behind. Yet, as China and the US vie for AI supremacy, are countries like the UK sitting on the fence?

AI has devotees and sceptics in roughly equal measure, very broadly divided along generational lines. Gen Z and Gen X are not as enamoured with AI as Millennials (those born between 1981 and 1996); a 2025 McKinsey report found Millennials to be the most active AI users. My Gen Z son says of AI, ‘I’m not asking a toaster a question.’ He also thinks AI’s insatiable thirst for energy will make it unsustainable in the longer term. Perhaps he has a point, but I think every industry will somehow be impacted, disrupted and – perhaps – subsumed by AI. And as ever with transformational new technologies, mistakes will be made as organisations balance risk versus advantage. How, in this ‘Temple of the New AI’, do organisations find treasure… without falling into a horrible trap?

How to govern your organisation’s use of AI

While compliance with regulations will be a key factor for many organisations, protecting the business and brand reputation may be an even bigger concern.
The key will be making sure AI is used in an efficient, ethical and responsible way. The most obvious solution is to approach AI risk and governance with a clear framework covering accountability, policies, ongoing monitoring, security, training and so on. Organisations already utilising AI may well have embedded robust governance. For others, here are some pointers to consider:

■ Strategy and risk appetite Senior leadership needs to establish the organisation’s approach to AI: your strategy and risk appetite. Consider the benefits alongside the potential risks associated with AI and implement measures to mitigate them.

■ AI inventory Create an inventory recording which AI systems are already in use across the business, the purposes they are used for, and why.

■ Stakeholders, accountability & responsibilities Identify which key individuals and/or departments are likely to play a role in governing how AI is developed, customised and/or used in your organisation, and put some clear guardrails in place. Determine who is responsible and accountable for each AI system, and establish clear roles and responsibilities for AI initiatives to make sure there’s accountability for all aspects of AI governance.

■ Policies and guidelines Develop appropriate policies and procedures, or update existing policies, so people understand internal standards, permitted usage and so on.

■ Training and AI literacy Provide appropriate training. Consider if this needs to be role specific, and factor in ongoing training in this rapidly evolving AI world. Remember, the EU AI Act includes a requirement for providers and deployers of AI systems to make sure their staff have sufficient levels of AI literacy. If you don’t know where to start, Use AI Securely provide a pretty sound free introductory course.

■ AI risk assessments Develop and implement a clear process for identifying potential vulnerabilities and risks associated with each AI system.
For the many organisations who are not developing AI systems themselves, this will mean a robust method for assessing the risks associated with third-party AI tools, and how you intend to use those tools. Embedding an appropriate due diligence process when looking to adopt (and perhaps also customise) third-party AI SaaS solutions is crucial. Clearly not all AI systems or tools will pose the same level of risk, so a risk-based methodology enabling you to prioritise will also prove invaluable.

■ Information security Appropriate security measures are of critical importance. Vulnerabilities in AI models can be exploited, input data can be manipulated, malicious attacks can target training datasets, and unauthorised parties may access sensitive, personal and/or confidential data. Data can also be leaked via third-party AI solutions. We need to be mindful too of how online criminals exploit AI to create ever more sophisticated and advanced malware, for example to automate phishing attacks. On this point, the UK Government has published a voluntary AI cyber security code of practice.

■ Transparency and explainability Are you being open and up front about your use of AI? Organisations need to be transparent about how AI is being used, especially when it impacts on individuals or makes decisions that affect them. A clear example here is AI tools being used for recruitment – is it clear to job seekers you’re using AI? Are they being fairly treated? Using AI Tools in Recruitment. Alongside this there’s the crucial ‘explainability’ piece – the ability to understand and interpret the decision-making processes of artificial intelligence systems.

■ Audits and monitoring Implement a method for ongoing monitoring of the AI systems and/or AI tools you are using.

■ Legal and regulatory compliance Keep up to date with the latest developments and how to comply with applicable laws and regulations in the jurisdictions relevant to your operations.
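To make the inventory and risk-assessment pointers above a little more concrete, here’s a minimal sketch in Python of how an AI inventory with simple risk-based prioritisation might look. The fields, weightings and example entries are illustrative assumptions on our part – they’re not prescribed by the GDPR, the EU AI Act or any guidance – but the shape of the exercise (record each system, score it, review the riskiest first) carries over however you implement it:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in the organisation's AI inventory (illustrative fields)."""
    name: str
    purpose: str                  # why the tool is used
    owner: str                    # accountable individual or department
    third_party: bool             # vendor SaaS tool vs built in-house
    processes_personal_data: bool
    affects_individuals: bool     # e.g. makes or shapes decisions about people

def risk_score(system: AISystem) -> int:
    """Illustrative scoring: higher = review sooner.
    Weightings are assumptions, not regulatory thresholds."""
    score = 0
    if system.processes_personal_data:
        score += 3
    if system.affects_individuals:
        score += 3  # decisions affecting people carry the most risk
    if system.third_party:
        score += 1  # less visibility of the vendor's controls
    return score

def prioritise(inventory: list) -> list:
    """Order the inventory so the riskiest systems are reviewed first."""
    return sorted(inventory, key=risk_score, reverse=True)

# Hypothetical inventory entries
inventory = [
    AISystem("CV screening tool", "shortlist job applicants", "HR",
             third_party=True, processes_personal_data=True,
             affects_individuals=True),
    AISystem("Meeting summariser", "internal note-taking", "IT",
             third_party=True, processes_personal_data=True,
             affects_individuals=False),
]

for s in prioritise(inventory):
    print(f"{s.name}: score {risk_score(s)}")
```

In practice most organisations keep this in a spreadsheet or GRC tool rather than code, but the same questions – who owns it, what data does it touch, does it affect people – drive the prioritisation either way.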
My colleague Simon and I recently completed the IAPP AI Governance Professional training, led by Oliver Patel. I’d highly recommend his Substack, which is packed with tips and detailed information on how to approach AI governance.

Current regulatory landscape

European Union

The EU AI Act entered into force in August 2024 and is coming into effect in stages. Some fear this comprehensive and strict approach will hold back innovation and leave Europe languishing behind the rest of the world, and it’s interesting the European Commission is considering pausing its entry into application, which DLA Piper has written about here. On 2nd February this year, rules came into effect in relation to AI literacy requirements, the definition of an AI system and a limited number of prohibited AI use cases which the EU determines pose an unacceptable risk. Like GDPR, the AI Act has extra-territorial scope, meaning it applies to organisations based outside the EU (as well as inside) where they place AI products on the market or put them into service in the EU, and/or where outputs produced by AI applications are used by people within the EU. We’ve already seen how EU regulation has led to organisations like Meta and Google excluding the EU from use of their new AI products for fear of enforcement under the Act. The European Commission has published guidelines alongside the prohibited practices coming into effect: Guidelines on Prohibited Practices & Guidelines on Definition of AI System.

UK

For the time being it looks unlikely the UK will adopt comprehensive EU-style regulation. A ‘principles-based framework’, for sector-specific regulators to interpret and apply, is the favoured approach, with specific legislation for those developing the most powerful AI models looking the most likely direction of travel. The Information Commissioner’s Office published a new AI and biometrics strategy on 5th June, with a focus on promoting compliance with data protection law and preventing harm, while also enabling innovation.
Further ICO activity will include:

■ Developing a statutory code of practice for organisations developing or deploying AI.

■ Reviewing the use of automated decision making (ADM) systems for recruitment purposes.

■ Conducting audits and producing guidance on the police’s use of facial recognition technology (FRT).

■ Setting clear expectations to protect people’s personal information when used to train generative AI foundation models.

■ Scrutinising emerging AI risks and trends.

The Data (Use and Access) Act will, to a degree, relax the current strict rules in relation to automated decision making which produces legal or similarly significant effects. The ICO, for its part, is committed to producing updated guidance on ADM and profiling by Autumn 2025. DUA Act: 15 key changes ahead

Other jurisdictions are also implementing or developing a regulatory approach to AI, and it’s worth checking the IAPP Global AI Regulation Tracker. AI is here. It’s transformative and far-reaching. To take the fullest advantage of AI’s possibilities, keeping abreast of developments, along with agile and effective AI governance, will be key.