08 March 2024

DT: Let’s start with the EU’s AI Act. Can you maybe start by explaining what you think of it and what loopholes you see?

Angela Müller: First of all, the name “AI Act” makes it sound like it’s going to regulate everything that has to do with AI and every aspect that may be touched by AI. I think that’s a bit misleading, because it actually takes a very particular approach. It is a product safety law, and regards AI as a product whose safety needs to be ensured. In the end, the AI gets those CE markings that you know from child car seats and hair dryers.

It is a regulation to harmonise the digital single market, so it is also a very economically driven instrument. It's not overarching, and I think that already hints at certain tensions, because it's a product safety law. Normally, product safety law is concerned with ensuring health and safety. This time, however, it is also supposed to protect fundamental rights via product safety. Is it really possible, with this highly technical approach, to ensure that AI is not going to violate our most basic rights?

Another consequence of this approach is that the Act is highly focused on providers and what they can do at the technical level. Don’t get me wrong, that’s very important to do. But of course, you cannot make sure merely through technical measures that human rights are not going to be violated, that the systems will not produce injustices and so on. The harms that result from an AI system heavily depend on the context of use, and on the way in which these systems are used. That’s also why the idea of risk categories, where you can tell in advance which level of risk comes with a system, may have its limits.

Another issue is that the definition of AI is highly contested. There are also a lot of automated algorithmic decision-making tools that are not machine learning in the narrow sense. And we will see to what extent they can fall under the Act, depending on how broadly it will be interpreted.

That said, there are also important provisions in the AI Act that will provide more transparency, and allow for control and oversight. And that’s something we have fought for over the last three years.

DT:  Generally speaking, to what extent was civil society involved or listened to in the making of the AI Act?

Müller: On the one hand, civil society was heavily involved in the process. The link between AI and human rights is not something you find millions of experts on, so there was high demand for civil society to work on these topics. And we also had some important wins. Compared to the initial draft by the Commission, I think we’ve seen a lot of improvements.

We now have fundamental rights impact assessments for high-risk uses by public authorities. Also, there are plans for a database of AI systems. In the initial draft, this only covered AI systems that are on the market. And what civil society was fighting for – it sounds like a detail, but I think it’s crucial – is that this database should also include information on where high-risk systems are actually being used, at least by public authorities. 

We also achieved sustainability transparency requirements for very large AI models. And there were huge civil society efforts to make the bans in the AI Act meaningful. While the Parliament took these up, member states decided to water down some of the bans, introducing a number of exceptions and limits to them.

DT: You are now addressing the Council of Europe, which is about to pass a convention on AI, which fewer people know about. Could you give us a brief introduction? How does it differ from the AI Act, and who does it address?

Müller: First of all, the Council of Europe has nothing to do with the EU. It is an international organisation with 46 member states. Its mandate is to protect human rights, democracy and the rule of law.

Now, conventions drafted under the umbrella of the Council of Europe start from this mandate. So the AI Convention is very different from the AI Act because it doesn’t have the objective to harmonise the market or anything like that, but to protect human rights, democracy and the rule of law. It’s a very important perspective and offers a very promising opportunity to make sure we have safeguards. What is more, the convention is not necessarily limited to Council of Europe states, as it can also be signed by non-member states.

DT:  Would you say that to protect human rights and democracy, it could be the more important regulation? Is it strong enough for that?

Müller: It is going to be a legally binding international treaty. But it needs to be implemented at the national level by member states. Every state then has to decide how to translate it into national law and other policy measures. So such conventions typically remain at a certain level of abstraction.

And, of course, international law is not easily enforceable. It is not going to be the case that as an individual, I can go to Strasbourg and say:  “A state has violated my rights under the AI Convention.” However, when I go to Strasbourg and claim that my rights under the European Convention on Human Rights have been violated by an AI-based tool, then the court can of course consider the AI Convention in its interpretation.

DT:  In your open letter, you address very specific loopholes regarding the question of whether the rules of the convention should be applied only to public institutions, or whether they should also apply to private actors. Can you explain these loopholes?

Müller: The first loophole is exactly that. The whole world is saying that we need to regulate AI. Imagine we’re now getting the first international treaty and it says: Yes, we’re going to regulate AI, but private companies – including Big Tech – can go on doing whatever they want.

In the draft of the convention that was made publicly available a few weeks ago, you can see that there are several options on the table. From my perspective, the preferable version would be that the convention covers both public and private actors developing and using AI. But there is also the suggestion that, in principle, it is only applicable to public authorities – and only if states so choose can they declare that it should also apply to the private sector. And then there are compromise proposals on the table, which in turn open up other exceptions and loopholes.

The other loophole we’re addressing is national security. As in the AI Act, the AI Convention could end up generally excluding everything that is done in the name of ‘national security’. One problem is that national security is a very vague concept, and different states interpret it very differently.

This is highly problematic, because the national security justification is mostly invoked in contexts affecting people who are more likely to be the victims of human rights infringements: law enforcement, migration, security and so on – areas where it is really important to have good safeguards in place.

DT:  Do you have specific examples of AI applications where you worry that they would fall through this loophole?

Müller: We see a lot of technology being used for border control, to predict the risk of terrorism or to surveil public space. States could argue that they need to do that for national security purposes.

DT: Beyond national security, what kinds of government uses of AI would fall under this convention?

Müller: During the negotiations on both the AI Act and the Council of Europe convention, a lot of attention has recently gone to generative AI tools like ChatGPT. This is good, but at the same time, you should not forget about tools that are sometimes technologically simpler yet also affect individuals.

This brings us back to the question of the definition of AI. Tools used in policing, to detect social benefit fraud, to analyse workers’ performance, to scan CVs, or to assess people’s creditworthiness, do not always rely on the most complex technological approaches. Still, they really affect people’s lives – and should not go unchecked.

DT: What aspects of the topic should people in the public sector pay close attention to?

Müller: We will see to what degree both the AI Act and the AI Convention will contribute to transparency. I am also convinced that transparency alone won’t do the trick, because you can also do really bad things transparently. At the same time, transparency is the first and crucial step to enable oversight, control and accountability. All of this is in the interest of public authorities: they need to be subject to control, to have reliable oversight mechanisms, and to have clarity about who is responsible and accountable. In my view, public authorities should see transparency as an opportunity – not as a threat.
