15 June 2023

Regulation of the most advanced forms of technology developed to date is not for the faint-hearted. When the EU Commission set up a first working group on artificial intelligence back in March 2018, the challenge was clear: how to regulate a fast-developing suite of technologies in a way that protects the rights and livelihoods of individual citizens while ensuring that European companies can still compete in an ever more crowded global market?

Final approval of the AI Act by the EU Council and the Parliament is expected by the end of the year, or early 2024 at the latest. This is typically followed by a two-year grace period for companies and organisations to adapt.

The AI Act’s four categories of risk

The proposed AI Act was adopted this week by the European Parliament with an overwhelming majority, a vote that added to some of the risk categories detailed below and created an AI Office to support coordination on cross-border cases. The wide-ranging piece of legislation will apply to anyone who provides products or services that use AI. Martin Ulbrich, Policy Officer at the European Commission, joined the AI policy team in 2018 and has worked extensively on the AI Regulatory Framework.

He explains that the AI Act is designed to ensure that the rights Europeans have enjoyed in the digital age up until now will continue to be guaranteed. To this end, the AI Act classifies AI systems according to four levels of risk – from minimal through to unacceptable.

Minimal or no risk – According to the EU Commission, the “vast majority” of systems currently used in the EU fall into this category. They include AI-enabled video games and spam filters. Systems classified as having minimal risk will not be regulated.

Limited risk – These are systems with “specific transparency obligations,” such as a chatbot identifying itself as an AI. In a world of deepfakes and ever more augmented reality, this self-identification requirement becomes both more important and more complicated. Indeed, since the advent of ChatGPT in November last year, Ulbrich admits that discussions about how to regulate this and other General Purpose AI systems (GPAIS) have increased significantly, including in the US.

High risk (good AI) – High-risk AI systems are not fundamentally bad, explains Ulbrich, but they have the potential to cause real harm if misused, particularly in areas like employment, education and law enforcement, where they will therefore face strict requirements for transparency and high-quality data. The European Parliament also added migration control and the recommender systems of major social media platforms to this category. Those developing high-risk AI systems will be required to complete rigorous risk assessments, log their activities, and make data available to authorities for scrutiny. Ulbrich claims that most companies developing these technologies already fulfil most of the requirements laid out in the legislation.

However, a survey by the industry body appliedAI showed that 51% of respondents expect a slowdown of their AI development activities as a result of the AI Act.

Unacceptable risk (bad AI) – AI systems falling into the unacceptable category will be banned. They include real-time biometric identification in public spaces; social scoring by governments, as practised in China; and applications that exploit the vulnerabilities of specific groups, such as children, people with disabilities and the elderly. Ulbrich describes this category as “very small”.

Violations will result in fines of up to €30 million or 6% of global annual turnover, whichever is higher. For a company like Microsoft, which is backing ChatGPT creator OpenAI, that could mean a fine of over $10 billion if it were found to be in violation of the rules.
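To see how the “whichever is higher” rule plays out, here is a minimal Python sketch. The €30 million floor and the 6% rate come from the figures above; the turnover used in the example is only a rough, assumed order of magnitude for a company of Microsoft's size, not an official number.

```python
def max_ai_act_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the proposed AI Act:
    the greater of EUR 30 million or 6% of global annual turnover."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# Illustrative only: assume a global annual turnover of roughly EUR 185 billion.
print(f"{max_ai_act_fine_eur(185e9) / 1e9:.1f} billion EUR")  # -> 11.1 billion EUR
```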

Risk vs Reward in the AI Act

The AI Act differs from the GDPR in its approach, explains Ulbrich. It takes what he terms “a list approach”. Companies and individuals can, in theory, check the annex of high-risk and unacceptable-risk applications to find out whether they are required to take action. The reason for this is the very small number of applications that fall into these two categories. “Companies will simply have to check the list and then that’s it”, says Ulbrich.
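In practice, that list approach amounts to a simple lookup: an application either appears in the annex of listed uses or it defaults to the lighter tiers. The Python sketch below illustrates the idea; the tier names mirror the Act’s categories, but the example entries and the check_risk_tier helper are hypothetical, not an official taxonomy.

```python
# Hypothetical illustration of the AI Act's "list approach":
# look an application up against the annex of listed uses;
# anything not listed defaults to the lighter risk tiers.
ANNEX_LISTED_USES = {
    "social scoring by governments": "unacceptable",
    "real-time biometric identification in public spaces": "unacceptable",
    "cv screening for recruitment": "high",
    "exam scoring in education": "high",
}

def check_risk_tier(use_case: str) -> str:
    return ANNEX_LISTED_USES.get(use_case.lower(), "minimal or limited")

print(check_risk_tier("CV screening for recruitment"))  # -> high
print(check_risk_tier("AI-enabled video game"))          # -> minimal or limited
```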

The simplicity of such an approach is clearly attractive. And yet, as Andrea Renda, Head of the CEPS Unit on Global Governance, Regulation, Innovation and the Digital Economy (GRID), who was also a member of the EU’s High-Level Expert Group, points out, it does not take into account the possibility that a wide range of AI applications may be risky when combined with others in unforeseen ways. Indeed, the Commission initially proposed a binary approach to risk categorisation (risky or not risky), which remained in place “right up until the last minute”, according to Renda.

According to the current system of categorisation, the low-risk tier should, in theory, cover about 90% of AI products currently on the market. Ulbrich reiterates that the Commission is “very positive about AI overall” and describes their attitude as a “yes, but” approach. Yes, they are in favour of the many benefits that AI can bring to society, but they are also aware of the need to protect fundamental rights and freedoms.

EU vs US approaches to AI regulation

A recent report by the Brookings Institution comparing EU and US approaches to AI regulation acknowledges that both agree on a risk-based approach and the importance of trustworthy AI. However, their AI risk management regimes have more differences than similarities.

Professor Venkatasubramanian of Brown University in the US recently served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy, where he co-authored the Blueprint for an AI Bill of Rights. Published in 2022, it lays out nonbinding ethics- and civil rights-based principles for government and industry use of AI in the US. 

He admits that their attempts to define AI were “a real struggle, a fight” and personally believes that the EU’s attempt to define AI in such concrete terms is “a bad idea”. Indeed, Ulbrich admits that much of the deliberation on the AI Act, especially in the European Parliament, has involved arguments about which applications should be on the high-risk list and which should not.

An outcomes-based approach?

For Venkatasubramanian, it is the real-world consequences, what he terms “the harm vectors”, on which regulation should focus. “If you say, look, I don’t care what you did, it impacted someone – then it’s the impact that matters, not the definition. It could be an Excel spreadsheet.”

Venkatasubramanian advocates for the concept of deliberate ambiguity in the regulation of areas like AI, and technology more broadly, which are extremely difficult to pin down. He admits too that legacy issues – how to regulate large tech companies that have already built models based on poor-quality, questionably collected data – have yet to be adequately addressed via regulation.

Certainly not the last word

However, he is positive about the potential for cooperation between the US and the EU. “This is our move – how we believe the US should respond to AI governance. Now let’s talk about how we can harmonise what the EU is doing with what we have.” Ulbrich suggests that since ChatGPT, the US is looking “much more favourably” at Europe’s approach to AI regulation. Yet he is also quick to acknowledge that the AI Act “will certainly not be the last word” when it comes to regulating these highly advanced technologies.

For more on AI, read our articles on AI use in politics and AI biases.
