Artificial intelligence – how should it be regulated?

Businesses are increasingly using artificial intelligence for a wide range of applications, including improving customer experience, automating processes and generating insights.

The European Commission has set out a proposal for a bold new regulation designed to establish a set of rules to govern artificial intelligence (AI).

Its aim is to “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.”

Data, including personal data, plays a vital role in the development of AI and machine learning technologies, and the proposed regulation has the potential to have a profound impact on the use of AI.

This news is of great interest to data protection and privacy practitioners, and I think it’s worthy of a little investigation.

Will a new regulation really be able to guarantee the safety and rights of people? Will it hinder innovation and mean yet more rules for businesses to abide by? Will technology run ahead too fast for any regulation to keep up with?

I’ve asked some privacy experts for their views, but first here’s a short summary of what’s proposed.

Who will the AI Regulation apply to?

Much like the GDPR, the regulation would apply to users of AI systems in the European Union, and it is also intended to have wider extraterritorial scope.

It’s proposed that providers based outside the EU will be subject to the regulation’s requirements if they make their AI systems available in the EU. In theory this would therefore include systems developed by businesses based in the UK, US and elsewhere.

A risk-based approach to AI

The Commission has adopted a risk-based approach to AI, which recognises the potential benefits of AI but also considers the potential for harm these technologies may present to the rights of individuals.

The Commission’s view is that harm can arise if AI systems are unsafe or if they involve risks to individuals’ fundamental rights – such as privacy and the right not to be discriminated against.

The proposal splits AI systems into three categories:

1.  Prohibited AI systems

These include highly invasive mass surveillance systems, as well as systems which aim to manipulate human behaviour or exploit people’s vulnerabilities.

The proposal suggests four types of AI practices will be prohibited:

  • AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes that person or another person physical or psychological harm.
  • AI systems that exploit vulnerabilities of a specific group of persons due to their age, physical or mental disability to materially distort the behaviour of a person pertaining to the group in a manner that causes that person or another person physical or psychological harm.
  • AI systems used by public authorities, or on their behalf, to evaluate or classify the trustworthiness of natural persons (‘social scoring’), where the resulting score leads to detrimental or unfavourable treatment that is either unrelated to the contexts in which the data was originally generated, or unjustified or disproportionate.
  • AI systems which use “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes. There are some broad exemptions, which require prior authorisation for each individual use (to be granted by a judicial authority or an independent administrative body) in the member state where the system is used.

2.  Heavily regulated high-risk AI systems

This category includes AI used for credit scoring, risk assessment, biometric identification and recruitment. These systems would be permitted, subject to tough certification and monitoring processes.

Providers of high-risk AI systems would be expected to implement governance and risk management controls.

3.  Less heavily regulated AI systems

For AI systems that are not deemed ‘high-risk’, the Commission has taken a more pragmatic approach. Providers will be expected to inform individuals when they are interacting with AI systems (unless it is obvious), but they will not be expected to provide detailed explanations.

The initial proposal was published on 21st April 2021 and there are likely to be updated versions as it progresses through the legislative process.

Fines of up to 6% of annual global turnover have been proposed. However, these would apply only in a limited range of circumstances, such as a violation of the data governance requirements.

Making sure data is used responsibly and ethically in AI systems

It’s proposed that data governance will form a key part of the obligations for AI providers and users, particularly with regard to high-risk AI systems.

Providers would be required to apply a range of data governance techniques to the datasets used for the training, validation and testing of machine learning and similar technologies.

This includes identifying and evaluating potential biases, checking for inaccuracies and assessing the suitability of the data.
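
To make this more concrete, here’s a minimal, illustrative sketch (in Python, using pandas) of the kind of pre-training dataset audit a provider might run – checking for gaps, group representation and skewed outcomes. The column names and file are hypothetical, and the proposal doesn’t prescribe any particular tooling or checks.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, protected: str, label: str) -> None:
    # Check for gaps and potential inaccuracies: missing values per column.
    print("Missing values per column:")
    print(df.isna().sum())

    # Check representation: how many rows per protected group?
    print("\nRows per protected group:")
    print(df[protected].value_counts())

    # Check for skewed outcomes: positive-outcome rate per protected group.
    print("\nPositive-outcome rate per protected group:")
    print(df.groupby(protected)[label].mean())

# Hypothetical dataset with an 'age_band' attribute and an 'approved' label.
df = pd.read_csv("training_data.csv")
audit_training_data(df, protected="age_band", label="approved")
```

A real assessment would of course go much further, but even simple checks like these surface under-represented groups and outcome disparities before a model is ever trained.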

Reaction to the AI Regulation proposal

The announcement has generated plenty of interest, and concerns about how it will impact technology companies that develop AI systems and the businesses that use them.

Reacting to the announcement, the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, supported the proposal. However, he expressed regret that requests for a ban on the use of remote biometric identification systems, including facial recognition, in publicly accessible spaces had not been addressed.

I asked some members of our DPN Advisory Board for their reaction.

Robert Bond, Senior Counsel at Bristows LLP:

On the one hand, the EU initiative is a welcome focus on the need to bring regulation to the rapidly developing use of AI across all sectors, and to encourage the further development of solutions for society. On the other hand, I hope that the EU does not over-regulate.

Many organisations have already taken a compliance and ethics approach to self-governance of AI and data. A good example is the work of the United Nations Global Pulse on Governance of AI and of Data with which I have been involved for a number of years.

Fedelma Good, Director at PwC UK:

The EU is certainly going to be busy when it comes to fulfilling its objective to establish a ‘Europe fit for the digital age’. The publication of the proposed AI Regulation is just one of a number of EU digital sector initiatives underway, including for example the Data Governance Act, Digital Services Act, Digital Markets Act, the anticipated Data Act and, of key concern for marketers, the long-awaited finalisation of the ePrivacy Regulation.

One cannot argue with the objective to introduce regulation and good governance principles for AI and other areas of innovation which rely on data, and personal data in particular.

But the areas under focus are complex, constantly evolving and have significant overlap. I fear that the process of debating the finer points of each, not to mention how they will work together, will leave the actual finalisation of any regulation trailing in the wake of digital innovation.

It is highly likely that privacy concerns will contribute to the debate as the AI Regulation makes its way through the legislative process. Whilst the AI text highlights the importance of maintaining privacy, privacy activists argue that the proposal does not go far enough. Of key concern is the issue of self-assessment and self-regulation – can developers really assess their own compliance and mitigation measures?

The expectation that governance can be retrofitted is contrary to the agile approach we look for from our entrepreneurs. My hope is that a pragmatic balance can be struck, enabling informed and considered developments that adopt existing best practice and legal principles, particularly those of transparency and accountability.

Charles Ping, Managing Director at Winterberry Group:

It was inevitable that some regulatory boundaries around the use of artificial intelligence would need to be defined, and it is hardly surprising that the European Commission has been early to the table with a broad proposal rather than piecemeal components.

However, there’s a long journey from an EC proposal to any sort of agreed legislation, and it would be beneficial for all businesses utilising AI if there were participation and harmonisation beyond the 27 members of the EU.

What’s happening with AI Stateside?

I also asked Chris Field, Data Protection Officer and Head of Privacy at Harte Hanks for his thoughts on recent guidance on AI from the US Federal Trade Commission (FTC). Over to Chris:

The FTC has highlighted concerns specific to AI from consumer protection, big data and predictive analytics perspectives. The guidance highlights the FTC’s enforcement authority over AI practices deemed discriminatory, unfair or deceptive.

The FTC’s guidance highlights particular concerns regarding algorithms designed to make decisions about credit, employment, housing and other benefits; especially credit-related decisions made on the basis of race, colour, religion, nationality, sex, marital status, age or the receipt of public benefits.

Whilst more information about the FTC’s guidance can be found here, businesses should consider the following when implementing AI and automated decision-making techniques.

1.  Start with the right foundation – make sure the starting data foundation is representative of all populations and protected groups. Gaps in foundational datasets can easily result in unfair or inequitable algorithms and decisions.
2.  Watch out for discriminatory outcomes – test algorithms before and after deployment, and always make sure automated decisions don’t discriminate based on race, gender or other protected classes of information (see the sketch after this list).
3.  Embrace transparency and independence – be transparent about your use of AI. Publish information about automated decisions for review and act upon bias claims related to your AI.
4.  Don’t exaggerate or make deceptive claims about AI – don’t overstate what your AI can do. Misrepresentations of automated decision making, bias related to AI, and all practices deemed to be deceptive, unfair, or discriminatory violate the law.
5.  Tell the truth about your use of data – be transparent about where the data behind automated decisions and AI comes from. Be sure the foundational data powering AI was lawfully obtained and is consistent with the privacy claims and choices you make available to individuals.
6.  Do more good than harm – make sure automated decisions don’t cause harm. AI practices that cause unavoidable injury without comparable benefits to individuals or competition are unfair and against the law.
7.  Hold yourself accountable – you are responsible for your automated decisions. Take immediate action on any claims that your AI results in credit discrimination, digital redlining, bias, misconduct or harm, and avoid the FTC doing it for you.
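
As a concrete illustration of point 2 above, here’s a minimal sketch of a disparate impact check that could be run before and after deployment. The 80% threshold is the common ‘four-fifths’ rule of thumb from US employment guidance, not an FTC requirement, and the group labels and decisions are entirely hypothetical.

```python
from typing import Sequence

def disparate_impact(outcomes_a: Sequence[int], outcomes_b: Sequence[int]) -> float:
    """Ratio of group A's positive-outcome rate to group B's (1.0 = parity)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a / rate_b

# Hypothetical model decisions (1 = approved, 0 = declined) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]   # 5/8 = 62.5% approved
group_b = [1, 1, 1, 0, 1, 1, 1, 1]   # 7/8 = 87.5% approved

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # 'four-fifths' rule of thumb, not a legal threshold
    print("Warning: possible adverse impact - investigate before relying on this model.")
```

Running this periodically against live decisions, not just pre-launch test data, is what turns point 2’s “before and after deployment” into practice.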

Thanks to all our contributors for your insights. It’ll be interesting to see how the European Commission’s proposals for AI regulation, alongside the FTC’s approach, develop over the coming months and years.

No doubt we’ll see interesting developments in this space!

 

Data protection team over-stretched? Get in touch to find out more about how we can help with no-nonsense, practical privacy advice and support.