Democracy Technologies: You are working together on a research project regarding AI in deliberative democracy more generally. Can you tell us about it?
David Mas: It’s a research program looking at the biases of LLMs used in democratic settings. With the big breakthrough in generative AI, we envision a lot of applications that will lower barriers to participation. So the first barrier we want to break down is access to complex information.
Second, we want to break down barriers to contribution. AI can help people improve their contributions and make their voices heard more easily. This will level the playing field in participation, so that anyone can take part and be heard on an equal footing.
The third point is to extend the experience of deliberation. We do not want to replace in-person, in-depth deliberation. But we are interested in the possibility of online deliberation facilitated by AI that would allow more people to interact. This would give more citizens access to the very beneficial experience of talking to people with different opinions. It could help them see that a problem is more complex than it might seem when they are alone in front of a TV or on social media. In this way, people come to appreciate that finding consensus is a complex process.
All three would be great for democracy. But we know that AI can be biased, and that it can hallucinate. We know that there is gender bias, and sometimes political bias as well. There may even be new kinds of bias that haven’t been researched yet.
For instance, when you summarise a debate – how should you summarise it? Should you take only the majority opinion, or should you include minority opinions, too? What is the best way to present a majority opinion alongside a minority opinion?
These are complex questions. We are working on them with social scientists who approach the problems theoretically, and with AI scientists from the CNRS who actually measure these biases and try to fix them. And Hélène is part of an oversight committee along with other scientists.
DT: Hélène, can you tell us more about your role?
Hélène Landemore: I’m really excited and honoured to be part of this project, because I actually think we need more competition in this space. Right now in the USA, the only person who is doing something equivalent is maybe Jim Fishkin at the Stanford Deliberative Democracy Lab. He has run some interesting mini-publics, which he calls deliberative polls. Most recently and most spectacularly, he worked with Meta on a deliberative poll with around 6,000 people across the globe. The deliberations were organised in linguistic groups – they didn’t use AI to translate. AI was simply used as a facilitator in a very basic form. But still, I think that’s exciting.
But I also think there should be some competition, because that’s just one model and one team. They have their own incentives, and they don’t share their data all that much. So I think it’s important that there are other groups trying different approaches.
I also believe it’s important that it doesn’t only happen in the USA. I’m really proud that we have teams in Europe working in partnership with a place like the CESE (Conseil économique, social et environnemental – France’s Economic, Social and Environmental Council). I think that’s really amazing.
DT: Could you explain where you see the highest potential for AI in deliberation?
Landemore: The most exciting potential for me is deploying it at the global level. There are so many issues that need to be addressed in global terms, from climate change to North-South inequalities, from the control of corporations to a global wealth tax, and so on.
It’s not going to happen with the existing institutions. They are too slow. They are co-opted and too bureaucratic. I think we need to start thinking about something like a global citizens’ assembly that would be connected to the larger public. And that cannot be done through physical meetings alone, simply because at that scale it is so costly and so complicated.
There’s also the problem of translation, but technology can solve that. We will have instantaneous translation. We will have virtual spaces that are safe where you cannot be spied on by the government, where you can say anything you want in any language you want. And we can also use virtual facilitators, as it would be too costly to hire humans to do all of that at scale.
That’s what I see as the most exciting potential if we fast-forward 70 years or so.
DT: You mentioned the examples in the US where AI is being used in experimental deliberative processes. The biggest examples come from the tech sector itself, rather than from the government. What do you think about experimentation in these different spheres?
Landemore: I was on the board of academic advisors for OpenAI’s Democratic Inputs to AI programme. I feel torn, because on the one hand, I do think it comes from a good place. The people involved in these projects really want to democratise the use of AI and put it in the service of democracy. But the reality is that these companies are a huge part of the problem to begin with. Their incentives are completely misaligned with the common good.
So it’s kind of a joke, really, to think that we’re going to trust them. They only benefit from putting it to commercial uses. I think that’s really a fundamental problem. I don’t really have the solution, but I do think we definitely can’t just count on the goodwill of these companies.
Take Meta’s Community Forums. In the first global trial they ran, they had 6,000 participants. But for the recent one on the regulation of AI, they scaled it down to 1,500 people and spent a third of the money. So I think if they don’t see that it’s profitable, they’re going to stop doing it. The profit motive is too intrinsic to what they’re doing.
A deliberative platform that serves as a space for communication across cultures and social groups is a public good. I’m not sure there’s much money to be made through pursuing that goal. So I think it’s got to be publicly funded somehow.
DT: These community processes run by Meta were presented as part of the effort to regulate AI. But in the end, the tech companies decide what to do with the results. Are they going to regulate themselves?
Landemore: We don’t know. They’re completely opaque about the outcomes. I’m relatively close to the people who run these processes, and I still don’t know what the outcomes really were, or what is going to happen with the recommendations. Unfortunately, it sounds more and more like a big PR push.
DT: You said that AI has the potential to really scale up deliberative processes at the global level. Generally, the format of citizens’ assemblies relies on people who would otherwise never have met getting to know each other in a group setting. Do you think there’s a chance of translating that to a bigger process?
Landemore: It has already happened. A group of NGOs and academics ran a global assembly on climate in autumn 2021, with only 100 people selected at random from different geographic regions of the world. They brought together a Brazilian seamstress with an Afghan shepherd – really, people from all over. Some of them couldn’t read. Some of them didn’t know how to operate an iPad. So they had to have teams on location to help. It was an intense effort. And they managed to produce a statement that was then presented at COP26.
And we know that the same bonds emerge. We are all humans at the end of the day. So it’s quite predictable. If you put people together for long enough with a task that they believe in, even in a virtual space, they will actually forge emotional bonds very quickly. The one thing I know about the Meta experiment is that the same thing happened among the 6,000 participants, who met in small groups of 8 to 12 people. I was told that many of them asked for each other’s email addresses. They wanted to stay connected.
The capacity of humans to bond across virtual space, cultures, and languages is amazing. So I think we need more of that, and I don’t think there’s any doubt that it’s feasible. It’s absolutely feasible. The question is, do we have the technology to make it happen? And even more important than the technology, in fact, is the political will.
DT: You used two examples. One involved 100 people, a group small enough that everyone could get to know each other. On the other hand, Meta’s process involved thousands of people. Does it matter how big the group is?
Landemore: That’s a good question. I don’t think we have the answer, but my guess is that it doesn’t matter. I think there must be a trade-off there. People say you can only remember 150 names or so. There’s a threshold beyond which you’re not going to know everyone. But the important thing is that you trust that the people in the group who you don’t know are similar to the ones you do know. If you believe this, then you’ll just trust them. You just trust the process.
In France’s Citizens’ Convention on End of Life, there were 185 participants initially, and 184 by the end. That’s beyond the limit, so not everyone had met everyone by the end of the process. But they still felt that they were one group.
According to Yuval Noah Harari, that’s why Homo sapiens triumphed over Neanderthals – our capacity for collective action and imagination. We need to use that capacity to overcome distance and deal with the global scale of our problems as a species.