AI Risk, Governance and Regulation

June 2025

The Artificial Intelligence landscape is beginning to remind me of a place Indiana Jones might search for hidden treasure. The rewards are near-magical, but the path is littered with traps. Although, in the digital temple of ‘The New AI’, he’s not going to fall into a pit of snakes or be squished by a huge stone ball. No, AI Indy is more likely to face other traps. Leaking sensitive information. Litigation. Loss of adventuring advantage to competing explorers. A new, looming regulatory environment, one even governments have yet to determine.

And the huge stone ball? That will be when the power of the Lost AI goes awry, feeding us with incorrect information, biased outcomes and AI hallucinations.

Yes, regulation is important in such a fast-moving international arena. So is nimble decision-making, as even the European Commission considers pausing its AI Act. Nobody wants to be left behind. Yet, as China and the US vie for AI supremacy, are countries like the UK sitting on the fence?

AI has an equal number of devotees and sceptics, very broadly divided along generational lines. Gen Z and Gen X are not as enamoured with AI as Millennials (those born between 1981 and 1996). A 2025 McKinsey report found Millennials to be the most active AI users. My Gen Z son says of AI, ‘I’m not asking a toaster a question.’ He also thinks AI’s insatiable thirst for energy will make it unsustainable in the longer term.

Perhaps he has a point, but I think every industry will somehow be impacted, disrupted and – perhaps – subsumed by AI. And as ever, with transformational new technologies, mistakes will be made as organisations balance risk versus advantage.

How, in this ‘Temple of the New AI,’ do organisations find treasure… without falling into a horrible trap?

How to govern your organisation’s use of AI

While compliance with regulations will be a key factor for many organisations, protecting the business and brand reputation may be an even bigger concern. The key will be making sure AI is used in an efficient, ethical and responsible way.

The most obvious solution is to approach AI risk and governance with a clear framework covering accountability, policies, ongoing monitoring, security, training and so on. Organisations already utilising AI may well have embedded robust governance. For others, here are some pointers to consider:

Strategy and risk appetite

Senior leadership needs to establish the organisation’s approach to AI: your strategy and your risk appetite. Consider the benefits alongside the potential risks associated with AI, and implement measures to mitigate those risks.

AI inventory

Create an inventory recording which AI systems are already in use across the business, what they are used for, and why.
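By way of illustration, here’s a minimal sketch (in Python, purely hypothetical) of the kind of record such an inventory might hold for each system. The field names and the example entry are my own assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One record per AI system or tool in use across the business."""
    system_name: str                # the tool or system
    vendor: str                     # who develops/provides it
    business_purpose: str           # what it is used for, and why
    data_categories: list[str] = field(default_factory=list)  # e.g. personal data
    owner: str = ""                 # accountable individual or department
    risk_tier: str = "unassessed"   # to be filled in by your risk assessment

# Hypothetical example entry
inventory = [
    AIInventoryEntry(
        system_name="CV screening assistant",
        vendor="ExampleVendor Ltd",
        business_purpose="Shortlisting job applications",
        data_categories=["personal data"],
        owner="HR",
    )
]
```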

Stakeholders, accountability & responsibilities

Identify which key individuals and/or departments are likely to play a role in governing how AI is developed, customised and/or used in your organisation, and put some clear guardrails in place. Determine who is responsible and accountable for each AI system, and establish clear roles and responsibilities for AI initiatives so there’s accountability for every aspect of AI governance.

Policies and guidelines

Develop appropriate policies and procedures, or update existing policies so people understand internal standards, permitted usage and so on.

Training and AI literacy

Provide appropriate training. Consider whether this needs to be role-specific, and factor in ongoing training in this rapidly evolving AI world.

Remember, the EU AI Act (already in effect) includes a requirement for providers and deployers of AI systems to make sure their staff have sufficient levels of AI literacy.

If you don’t know where to start, Use AI Securely provides a pretty sound, free introductory course.

AI risk assessments

Develop and implement a clear process for identifying potential vulnerabilities and risks associated with each AI system.

For many organisations that are not developing AI systems themselves, this will mean a robust method for assessing the risks associated with third-party AI tools, and with how you intend to use those tools. It also means embedding an appropriate due diligence process when looking to adopt (and perhaps customise) third-party AI SaaS solutions.

Clearly, not all AI systems or tools pose the same level of risk, so a risk-based methodology that lets you prioritise will prove invaluable.
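To sketch what ‘risk-based’ can mean in practice, a simple likelihood-times-impact score is one way to triage which tools warrant deep-dive due diligence first. The scales and thresholds below are assumptions to calibrate against your own risk appetite, not a standard.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1-5 scale; a higher product means a riskier system."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_tier(score: int) -> str:
    # Illustrative thresholds - set these to match your own risk appetite.
    if score >= 15:
        return "high: full due diligence before adoption"
    if score >= 8:
        return "medium: targeted review"
    return "low: standard checks"

# e.g. a third-party recruitment tool processing personal data
print(risk_tier(risk_score(likelihood=4, impact=5)))  # high: full due diligence...
```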

Information security

Appropriate security measures are of critical importance. Vulnerabilities in AI models can be exploited, input data can be manipulated, malicious attacks can target training datasets, and unauthorised parties may access sensitive, personal and/or confidential data. Data can also be leaked via third-party AI solutions.
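On that last point, one practical control is to screen or redact obviously sensitive strings before any text leaves your estate for an external AI service. A minimal sketch, assuming simple regex patterns; real deployments need far more robust detection than this.

```python
import re

# Illustrative patterns only - genuine PII detection needs more than regex.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace matches with a labelled placeholder before the text is
    sent to any third-party AI solution."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Summarise this note from jane.doe@example.com (NI AB123456C)."
print(redact(prompt))
# Summarise this note from [REDACTED-email] (NI [REDACTED-uk_ni_number]).
```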

We also need to be mindful of how online criminals exploit AI; for example, to create ever more sophisticated malware or to automate phishing attacks. On this point, the UK Government has published a voluntary AI cyber security code of practice.

Transparency and explainability

Are you being open and up front about your use of AI? Be transparent about how AI is being used, especially when it impacts on individuals or makes decisions that affect them. A clear example here is AI tools being used for recruitment – is it clear to job seekers you are using AI? Are they being treated fairly? See: Using AI Tools in Recruitment.

Alongside this there’s a crucial explainability piece – the ability to understand and interpret the decision-making processes of artificial intelligence models.
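To make that concrete: with a simple additive scoring model, you can return each factor’s contribution alongside the decision, so the outcome can be explained to the person it affects. A hypothetical sketch with entirely illustrative factors, weights and threshold:

```python
# Hypothetical, transparent screening score: each factor's contribution
# is visible, so the outcome can be explained to the candidate.
WEIGHTS = {"years_experience": 2.0, "skills_match": 5.0, "qualification": 3.0}
THRESHOLD = 10.0  # illustrative pass mark

def score_with_explanation(candidate: dict) -> tuple[bool, dict]:
    contributions = {k: WEIGHTS[k] * candidate.get(k, 0) for k in WEIGHTS}
    return sum(contributions.values()) >= THRESHOLD, contributions

passed, why = score_with_explanation(
    {"years_experience": 3, "skills_match": 0.8, "qualification": 1}
)
print(passed, why)
# True {'years_experience': 6.0, 'skills_match': 4.0, 'qualification': 3.0}
```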

Audits and monitoring

Implement a method for ongoing monitoring of the AI systems and/or AI tools you are using.
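As a small illustration of what ongoing monitoring can look like, here’s a sketch that flags any inventoried system whose periodic review is overdue. The 90-day interval is an assumption you would align to each system’s risk tier.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # illustrative; align to risk tier

# In practice these records would come from your AI inventory.
systems = [
    {"name": "CV screening assistant", "last_reviewed": date(2025, 1, 10)},
    {"name": "Internal chatbot", "last_reviewed": date(2025, 5, 20)},
]

def overdue_reviews(systems: list[dict], today: date) -> list[str]:
    """Return the names of systems whose periodic review is overdue."""
    return [s["name"] for s in systems
            if today - s["last_reviewed"] > REVIEW_INTERVAL]

print(overdue_reviews(systems, today=date(2025, 6, 15)))
# ['CV screening assistant']
```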

Legal and regulatory compliance

Keep up to date with the latest developments, and with how to comply with the laws and regulations in the jurisdictions relevant to your operations.

My colleague Simon and I recently completed the IAPP AI Governance Professional training, led by Oliver Patel. I’d highly recommend his Substack, which is packed with tips and detailed information on how to approach AI governance.

Current regulatory landscape

European Union

The EU AI Act entered into force in August 2024 and is coming into effect in stages. Some people fear this comprehensive and strict approach will hold back innovation and leave Europe languishing behind the rest of the world. It’s interesting that the European Commission is considering pausing its entry into application. DLA Piper has written about this here.

On 2nd February this year, rules came into effect covering AI literacy requirements, the definition of an AI system, and a limited number of prohibited AI use cases which the EU has determined pose an unacceptable risk.

Like GDPR, the AI Act has extra-territorial scope, meaning it applies to organisations based outside the EU (as well as inside) where they place AI products on the market or put them into service in the EU, and/or where outputs produced by AI applications are used by people within the EU. We’ve already seen how EU regulation has led to organisations like Meta and Google excluding the EU from use of their new AI products for fear of enforcement under the Act.

The European Commission has published guidelines to accompany the prohibited practices coming into effect: Guidelines on Prohibited Practices & Guidelines on Definition of AI System.

UK

For the time being, it looks unlikely the UK will adopt comprehensive EU-style regulation. Instead, the favoured approach is a ‘principles-based framework’ for sector-specific regulators to interpret and apply. Targeted legislation for those developing the most powerful AI models looks the most likely direction of travel.

The Information Commissioner’s Office released a new AI and biometrics strategy on 5th June 2025, focused on promoting compliance with data protection law and preventing harm, while also enabling innovation. Further ICO activity will include:

Developing a statutory code of practice for organisations developing or deploying AI.
Reviewing the use of automated decision-making (ADM) systems for recruitment purposes.
Conducting audits and producing guidance on the police’s use of facial recognition technology (FRT).
Setting clear expectations for protecting people’s personal information when it is used to train generative AI foundation models.
Scrutinising emerging AI risks and trends.

The soon-to-be-enacted Data (Use and Access) Act will, to a degree, relax the current strict rules on automated decision-making which produces legal or similarly significant effects. The ICO, for its part, is committed to producing updated guidance on ADM and profiling by Autumn 2025. See: DUA Act: 15 key changes ahead.

Other jurisdictions are also implementing or developing a regulatory approach to AI, and it’s worth checking out the IAPP Global AI Regulation Tracker.

AI is here. It’s transformative and far-reaching. To take fullest advantage of AI’s possibilities, keeping abreast of developments, along with agile and effective AI governance, will be key.