AI: Risk and Regulation

February 2025

Artificial Intelligence is an epoch-defining opportunity, the biggest game-changer since the Internet. Governments, businesses and other organisations fear losing out unless they embrace innovation and change. Yet the benefits of AI carry with them ethical challenges. There’s the risk of considerable harm to individuals, wider society and the environment. Organisations also have to navigate risks: reputational, commercial and regulatory.

To regulate, or not to regulate AI?

The AI regulatory landscape’s far from settled – in fact, it’s a new frontier. On the one hand, the first phase of the EU AI Act has come into effect – the world’s first comprehensive AI regulation. On the other, President Trump has ‘ripped up’ Joe Biden’s AI Executive Order of 2023. The new US administration wants to remove barriers it claims stifle innovation. All previous US policies, directives, regulations and orders relating to AI are under review, with the focus on making sure America is a global leader in AI technology.

In the UK, an EU-style regulation looks unlikely. For the time being, a ‘principles-based framework’ remains the favoured approach, left to sector-specific regulators to interpret and apply. Specific legislation for those developing the most powerful AI models looks the most likely direction of travel.

John Edwards, the UK Information Commissioner, has written to the Prime Minister (in response to a request from Government for key regulators to set out how they’ll support economic growth). In it he says: “regulatory uncertainty risks being a barrier to businesses investing and adopting transformative technology”. The Commissioner says his office will “develop rules for those developing and using AI products, to make it easier to innovate and invest responsibly”. Interestingly, he supports the idea of a statutory Code of Practice for AI, saying this would give regulatory certainty to businesses wanting to invest in AI in the UK.

AI regulation has supporters and critics in equal measure. The EU’s strict approach has led to fears Europe will lag behind the rest of the world. Others argue it’s crucial to enforce an ethical and responsible approach to AI – in the absence of regulation, the argument goes, AI could prove more malevolent than benign.

The divisions were crystal clear at a high-level AI Summit in Paris on 11 February, as the US and UK refused to sign President Macron’s declaration calling for open and ethical AI.

Could the UK find a sweet spot, positioning its approach between its cautious European neighbours on one side and the ‘Wild West’ on the other?

EU AI Act – first phase now applicable

The AI Act entered into force in August 2024 and is coming into effect in stages. On 2 February 2025, the first rules became applicable: AI literacy requirements, the definition of an AI system, and a limited number of prohibited AI use cases which the EU has determined pose an unacceptable risk.

Like GDPR, the AI Act has extra-territorial scope, meaning it applies to organisations based outside the EU where they place AI products on the market or put them into service in the EU, and/or where outputs produced by AI applications are used by people within the EU. We’ve already seen how EU regulation has led organisations like Meta and Google to exclude the EU from the rollout of new AI products.

In brief, the prohibited practices under the AI Act are:

Facial recognition – the use of AI systems which create or expand facial recognition databases through the untargeted scraping of images from the internet or CCTV footage.

Social scoring – AI systems which evaluate and score people on their behaviour or characteristics, where this might lead to detrimental or unfavourable treatment in an unrelated context, or to treatment which is unjustified or disproportionate.

Predictive criminal risk assessments based on profiling.

Subliminal manipulation or other deceptive techniques which distort people’s behaviour and cause them to take decisions they wouldn’t otherwise have taken, where this is likely to cause significant harm.

Exploitation of vulnerabilities – such as someone’s age, disability or social/economic disadvantage.

Inferring emotions in the workplace or educational settings.

Biometric categorisation which infers special category data.

Real-time remote biometric identification for law enforcement purposes.

The European Commission has published guidance alongside these prohibited practices coming into effect: Guidelines on Prohibited Practices and Guidelines on the Definition of an AI System.

EU AI Act – what comes next?

The rules are complex, and organisations which fall within the scope of the AI Act will need to comply with tiered requirements dependent on risk. At a very top level:

For AI systems classified as high-risk, there will be core requirements, such as mandatory Fundamental Rights Impact Assessments (FRIA), registration on a public EU database, data governance and transparency requirements, human oversight and more.
General-purpose AI (GPAI) systems, and the GPAI models they are based on, will be required to adhere to transparency requirements, including technical documentation, compliance with EU copyright law and detailed summaries of the content used to train them.
For generative AI applications, people will have to be informed when they are interacting with AI, for example a chatbot.

It’s worth bearing in mind an AI system could, for example, be both high-risk and GPAI.

Managing AI use

While compliance will be a key factor for many organisations, protecting the organisation’s reputation may be an even bigger concern. So, how do we ensure AI is used in an efficient, ethical and responsible way?

Organisations already utilising AI are likely to have embedded robust governance, enabling smart investment and innovation to take place within a clear framework to mitigate potential pitfalls. For others, here are some points to consider:

Senior leadership oversight
Establish your organisation’s approach to AI; your strategy and risk-appetite.

Key stakeholders
Identify key individuals and/or departments likely to play a role in governing how AI is developed, customised and/or used.

Roles and responsibilities
Determine who is responsible and accountable for each AI system.

Knowledge of AI use
Understand and record what AI systems are already in use across the business, and why.

Policies and procedures
Develop appropriate policies and procedures, or update existing policies so people understand internal standards and relevant regulatory requirements.

Training, awareness and AI literacy
Provide appropriate training, and consider whether this should be role-specific. Remember, a requirement for providers and deployers of AI systems to make sure their staff have sufficient levels of AI literacy is already in effect under the EU AI Act.

Risk assessments
Develop a clear process for assessing and mitigating potential AI risks. While a Data Protection Impact Assessment (DPIA) may be required, this is unlikely to be sufficient on its own.

Supplier management
Embed appropriate due diligence processes when looking to adopt (and indeed customise) third-party AI SaaS solutions.

AI security risks

Appropriate security measures are of critical importance. Vulnerabilities in AI models can be exploited, input data can be manipulated, malicious attacks can target training datasets, and unauthorised parties may access sensitive, personal and/or confidential data. Data can also be leaked via third-party AI solutions. We need to be mindful, too, of how online criminals exploit AI to create ever more sophisticated and advanced malware, and to automate phishing attacks. On this point, the UK Government has recently published a voluntary AI cyber security code of practice.

AI is here. It’s genuinely transformative and far-reaching; organisations unable or unwilling to embrace change – and properly manage the risks – will be left behind. To take the fullest advantage of AI’s possibilities, agile and effective governance is key.