EU AI Act adopted, and UK approach
The EU has adopted the world’s first Artificial Intelligence Act. The legal language has yet to be set in stone, but once the text has been finalised and published the Act will enter into force. This is expected in May/June 2024.
It’s worth noting the law will then take effect in stages: six months until the bans on prohibited AI systems apply, twelve months until the rules for ‘general-purpose’ AI systems take effect, and 36 months for organisations to meet the requirements for what the law designates as ‘high-risk’ AI systems.
As the EU pushes full steam ahead with AI legislation, the UK is for now sticking to a non-statutory principles-based approach. We take a look at both approaches.
UK approach to AI regulation
The UK Government says it’s keen not to rush in and legislate on AI. It fears specific rules introduced too swiftly could quickly become outdated or ineffective. The Government says it wants to take “a bold and considered approach that is strongly pro-innovation and pro-safety.”
For the time being, key regulators are being asked to take the lead. They’re being given funding to research and upskill, and have been asked to publish plans by the end of April 2024 setting out how they are responding to the risks and opportunities of AI in their respective domains.
These regulators include the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA) and the Medicines & Healthcare products Regulatory Agency (MHRA).
The Government has also asked the Digital Regulation Cooperation Forum (DRCF) to “conduct cross-sector risk assessment and monitoring to guard against existing and emerging AI risks”.
Alongside this, a pilot scheme for a new advisory service, the AI and Digital Hub, has been launched. This will be run by expert regulators, including Ofcom, the CMA, the FCA and the ICO.
There’s a recognition that advanced general-purpose AI may require binding rules, and the need for international cooperation on AI is also emphasised. The Government’s approach is set out in its response to the consultation on last year’s AI Regulation White Paper.
The EU AI Act
In March 2024 the European Union adopted the EU AI Act. Its aim is to ban unacceptable use of artificial intelligence and introduce specific rules for AI systems proportionate to the risk they pose. It will impose extensive requirements on those developing and deploying high-risk AI systems.
It’s likely the Act won’t just govern AI systems operating in the EU, with its scope extending to foreign entities which place AI systems on the market or put them into service in the EU.
The Act uses the definition of AI systems proposed by the OECD: “An AI system is a machine-based system that infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments.”
EU AI Act summary
1. Banned applications
There will be prohibited uses of AI which threaten democracy and people’s rights. These include, but are not limited to: biometric categorisation systems which use special category data; ‘real-time’ remote biometric identification systems (such as facial recognition); and emotion recognition in the workplace and educational institutions.
2. Law enforcement and national security exemptions
There will be a series of safeguards and narrow exemptions allowing for the use of biometric identification systems in publicly accessible spaces for law enforcement purposes. The legislation will not apply to systems which are exclusively used for defence or military applications.
3. Tiered risk-based approach
The requirements organisations will need to meet will be tiered according to the risk an AI system poses. For example:
- For AI systems classified as high-risk there will be core requirements, such as mandatory fundamental rights impact assessments, registration on a public EU database, data governance, transparency, human oversight and more.
- General-purpose AI (GPAI) systems, and the GPAI models they are based on, will need to adhere to transparency requirements, including maintaining technical documentation, complying with EU copyright law and providing detailed summaries of the content used to train these systems.
- For generative AI applications, people will have to be informed when they are interacting with AI, for example a chatbot.
4. Right to complain
People will have the right to submit complaints about AI systems and to receive explanations about decisions based on high-risk AI systems which affect their rights.
5. Higher fines than GDPR
Non-compliance with the rules could lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher. This is a notable hike from the GDPR, which caps fines at €20 million or 4% of annual worldwide turnover.
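To put the difference in concrete terms, here’s a minimal sketch in Python comparing maximum fine exposure under the two regimes, assuming in each case the cap is the higher of the fixed amount and the turnover percentage (the €2bn turnover figure is purely illustrative):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Maximum fine: the higher of the fixed cap and a percentage of turnover."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

# Illustrative example: a company with EUR 2bn global annual turnover.
turnover = 2_000_000_000

ai_act_cap = max_fine(turnover, 35_000_000, 0.07)  # EU AI Act: EUR 35m or 7%
gdpr_cap = max_fine(turnover, 20_000_000, 0.04)    # GDPR: EUR 20m or 4%

print(f"EU AI Act maximum fine: EUR {ai_act_cap:,.0f}")  # EUR 140,000,000
print(f"GDPR maximum fine:      EUR {gdpr_cap:,.0f}")    # EUR 80,000,000
```

For a business of this size, the percentage-based cap applies in both cases, and the AI Act exposure is substantially higher than the equivalent GDPR figure.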
The EU AI Act represents the world’s first comprehensive legislative framework for regulating AI. Could it become a global standard, as GDPR has for data protection? Or will other countries, for now, follow the non-statutory approach we’re seeing in the UK?
What’s clear is that organisations need to take steps now to raise awareness and upskill employees: for example, in compliance, legal, data protection, security and (by no means least) product development teams.
Decisions should be made about who needs a greater understanding of AI, how it will be internally regulated and where responsibilities for AI governance rest within the organisation.