Democracy Technologies: In your DecidimFest talk, you spoke about “Artificial Collective Intelligence.” What do you understand by that?
Xabier E. Barandiaran: First of all, all human intelligence is artificial. From the very beginning, human intelligence has been conditioned and augmented by the tools we produce and use. Anthropologists are going crazy right now about “material cognition”, the idea that intelligence goes into the matter that we fabricate. We have to acknowledge that our intelligence is embodied in artefacts.
This means there is no such thing as natural intelligence, as opposed to artificial intelligence. All human intelligence is artificial, and it’s also all collective. We humans owe our intelligence to the collective material and linguistic environments we inhabit and to which we contribute.
This is what the term “collective artificial intelligence” means, and I see the new generation of AI systems as an expression of this. Look at ChatGPT: Never in the history of capitalism was there a product that was so much the result of human collective effort. We need to move away from thinking: “Oh, this company has made an amazing product”, the idea that it is the exclusive result of their private, corporate initiative. On the contrary, it is the privatised product of a social corpus. From the training data to the mathematics behind it – it’s the result of a collective effort.
DT: Are there concrete plans for AI-driven features in Decidim?
Barandiaran: The good thing about Decidim is that it has an API [an application programming interface, which allows it to connect with other software. Ed.]. So it has everything it needs to be connected to different experiments with AI.
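[For illustration, here is a minimal sketch of how an external experiment could read data out of a Decidim instance through that API. It assumes the GraphQL endpoint commonly exposed at /api; the instance URL is hypothetical and the field names are indicative, so they may differ between Decidim versions. Ed.]

```python
# Minimal sketch: pulling participatory processes out of a Decidim instance via its
# GraphQL API so an external AI experiment can work with them. The endpoint path and
# field names are assumptions and may vary between Decidim versions and deployments.
import requests

DECIDIM_URL = "https://example-decidim.org/api"  # hypothetical instance

QUERY = """
{
  participatoryProcesses {
    id
    slug
    title { translation(locale: "en") }
  }
}
"""

def fetch_processes(url: str = DECIDIM_URL) -> list[dict]:
    """Return the list of participatory processes exposed by the instance."""
    response = requests.post(url, json={"query": QUERY}, timeout=10)
    response.raise_for_status()
    return response.json()["data"]["participatoryProcesses"]

if __name__ == "__main__":
    for process in fetch_processes():
        print(process["id"], process["title"]["translation"])
```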
I hope this experimentation will explode soon. We don’t have the capacity to lead on this right now, but as a community, we are very open to collaborating with anyone. What we need right now are experiments, crazy and democratic experiments – within a controlled environment, of course. I don’t think we should wait for somebody to come up with a superb AI solution which somebody else plugs into the city, and everything just works. We need instead a carefully crafted hybridisation of participatory architectures with AI assistance to deliver high-quality democratic solutions.
DT: What do you think the future of AI-assisted participation will look like?
Barandiaran: One thing we can expect to see soon is a change of interfaces. At the moment, the interface for Decidim is a web-platform type of interface we are all familiar with. Take the proposal creation wizard as an example. It prompts you to choose a picture, set a title, write a text, categorise it, and check for similar proposals. That’s how computer interfaces work right now – they prompt us with actions or possibilities, and we fill out forms.
With an AI interlocutor, you could just say: “Hey Decidim, I want to create a proposal to improve street lighting in the neighbourhood.” I could tell the computer what I want, and it would just do it for me.
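[As a rough sketch of such an interlocutor, the following maps a free-text request onto the same fields the wizard asks for. The ask_llm function is a hypothetical stand-in for whatever language model is used, and the field names are illustrative rather than Decidim’s actual schema. Ed.]

```python
# Sketch of a conversational front end to proposal creation: a language model turns a
# free-text request into the structured fields the wizard would otherwise ask for.
# `ask_llm` is a hypothetical stand-in for any LLM client; field names are illustrative.
import json

PROMPT_TEMPLATE = (
    "Extract a participation proposal from the user's request. "
    "Reply with JSON containing 'title', 'body' and 'category'.\n"
    "Request: {request}"
)

def ask_llm(prompt: str) -> str:
    """Hypothetical call to a language model; replace with a real client."""
    raise NotImplementedError

def draft_proposal(request: str) -> dict:
    """Turn 'Hey Decidim, I want to...' into the fields the wizard collects."""
    raw = ask_llm(PROMPT_TEMPLATE.format(request=request))
    proposal = json.loads(raw)
    # The human still reviews and confirms before anything is submitted to the platform.
    return {key: proposal.get(key, "") for key in ("title", "body", "category")}

# Example: draft_proposal("I want to create a proposal to improve street lighting "
#                         "in the neighbourhood.")
```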
This really changes everything. We’ve put a lot of care into crafting an interface that is easy to use, but not so simple that it takes too many decisions, or takes too much autonomy away from the user. And now, all of this might just be obsolete. It’s terrifying! It’s a huge change that is going to happen with interfaces in general. And we don’t know how this is going to change the experience of participation.
And this can go further still. As a first step, there’s a situation where I can just ask Decidim to come up with a proposal, and there is no need to click through it step by step. But what if I had an AI agent that automatically creates the proposal for me? Let’s say I have a car accident because there aren’t enough street lights in the area. I could have an AI agent that is connected to a biological quantification device that recognises that I’m angry, that I blamed the bad lighting, and that “understands” the context of the accident. And so it automatically creates a proposal for me.
DT: In your talk, you also spoke about open source AI products. Will they be able to keep up with the big players, and provide a serious alternative for participation platforms?
Barandiaran: Earlier this year, an internal Google report was leaked which said that the biggest enemy of corporate AI right now is open source AI. The report identifies it as an existing threat, not as some potential threat in the future. Right now.
In a desperate move to break into the duopoly fight between Microsoft and Google, Meta released its LLM, Llama, as open source. And there is a huge family of open source AI software, and the community is evolving fast. Some of the successful free and open source AI systems run on the cloud, and use the infrastructure of the big corporations. But there have been some very interesting attempts to miniaturise these “brains,” making them smaller without compromising on intelligence, so that they can run on a personal computer.
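[For readers who want to try this, here is a minimal sketch of running a quantised open-weight model on a laptop with the llama-cpp-python bindings. The model file name is a placeholder for any locally downloaded GGUF model, and sensible parameters depend on the hardware. Ed.]

```python
# Minimal sketch: running a quantised open-weight model locally with llama-cpp-python.
# The model path is a placeholder; any GGUF-format model downloaded separately will do.
from llama_cpp import Llama

llm = Llama(
    model_path="models/open-model-7b-q4.gguf",  # hypothetical local model file
    n_ctx=2048,  # context window size
)

result = llm(
    "Summarise the main arguments for better street lighting in two sentences.",
    max_tokens=128,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```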
Part of the way that AI is being marketed is to say: Only the big corporations can do this job. The big problem with open source and free software movements and projects is that, being not-for-profit, they don’t have the capacity to accumulate money for marketing and advertising. This means that people don’t know about them. There is no money to increase the visibility of these open source solutions – but they are very rich, and they are evolving very rapidly.
And many of the people in the open source community are very concerned about the future of democracy, and of humanity. There’s a race going on here, but I would say that we still have a chance. By we, I mean people who believe in democratising technology to make our social reality more democratic. We have a chance to win this battle.
DT: There have been a lot of calls for the democratic regulation of AI. What form do you think this will take?
Barandiaran: I don’t know what to think about the regulation of technology in general. I don’t think it has worked as democratic societies really need it to, except in the case of clear and crucial bans (like genetic experiments with humans, or the regulation of nuclear power). Most likely, some kind of regulations like these will be necessary to ensure the peaceful coexistence of AI and democratic societies, too.
But I also think we have the historical responsibility to move beyond this kind of regulation that imposes limits post-hoc. It’s like saying: “We have this technology to paint the future of humanity.” And the reply comes: “Okay, but you have to paint within this frame.”
My response to this is “Okay, but we as a democratic society should also decide together on the future we want to paint!” It would be much better to think about democracy not in terms of setting limits, but as a positive, creative act. We don’t currently have a collective democratic power to define the future as a society apart from big corporations. This is something that has to change, otherwise we have lost.
DT: You ran out of time earlier before you could explain your idea of “artificial democratic life.” Can you explain it to us?
Barandiaran: I’m not sure you are familiar with permaculture. Permaculture is an alternative way of growing vegetables and food for humans that doesn’t rely on industrial techniques. The whole garden is diverse and ecologically balanced. It produces oranges or beans or whatever you need. But it also takes care of other animals, so you don’t have to poison the whole garden just to get rid of the snails that were eating your lettuce, because there are birds that eat the snails. The garden is dialectically designed to take care of itself, without a violent domination of nature.
The idea behind democratic life is the same. Decidim is the garden that can be designed to attend to the activity of the participants, so that they are capable of delivering high-quality public policies while balancing with each other in a sustainable political ecology. Let’s take a simple example. It is possible to map all of the interactions on Decidim, to spot where people are not talking to each other, and simply connect them with each other (rewiring the politico-ecological interaction networks). You don’t even need AI to do this. These kinds of simple interventions in a complex system can improve the quality of a democracy in ways we haven’t explored yet. And they can be guided by digital interaction data and inspired by the way life achieves resilience, creativity and ecologically sustainable diversity.
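[A toy version of that intervention, assuming who-interacted-with-whom data can be exported from the platform. The participants and edges below are made up, and the community-detection method is just one reasonable choice. Ed.]

```python
# Toy sketch of the "rewiring" idea: build a graph of who interacts with whom on the
# platform, detect clusters that never talk to each other, and suggest one introduction
# between them. The interaction data would come from platform exports (not shown here).
import networkx as nx
from networkx.algorithms import community

def suggest_bridges(interactions: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return one suggested introduction between each pair of disconnected clusters."""
    graph = nx.Graph()
    graph.add_edges_from(interactions)

    # Detect clusters of participants who mostly talk among themselves.
    clusters = list(community.greedy_modularity_communities(graph))

    suggestions = []
    for i, cluster_a in enumerate(clusters):
        for cluster_b in clusters[i + 1:]:
            # If no edge crosses the two clusters, propose connecting two members.
            if not any(graph.has_edge(a, b) for a in cluster_a for b in cluster_b):
                suggestions.append((next(iter(cluster_a)), next(iter(cluster_b))))
    return suggestions

# Example with made-up participants: two groups that never interact with each other.
edges = [("ana", "bea"), ("bea", "carl"), ("dana", "eli"), ("eli", "fran")]
print(suggest_bridges(edges))
```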
This is an idea that has real potential, but it will take time and energy to make it work. Robert Bjarnason of the Citizens Foundation in Iceland has tried something along these lines, combining AI with a special Artificial Life technique called genetic algorithms. You seed proposals with human creations, and then artificially evolve them in interaction with humans: the machine recombines and mutates them, and humans select them, in the same way they might select the tastiest-looking fruit. The proposals grow as an “organic” thing, and human beings prune them. It’s not AI, but it is an “artificial life” technique put into service to improve the quality of democratic life.
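[A sketch of that kind of human-in-the-loop genetic algorithm follows. The recombination and mutation steps are deliberately naive, and human_pick stands in for real participant votes. Ed.]

```python
# Naive sketch of a human-in-the-loop genetic algorithm over text proposals:
# the machine recombines and mutates, humans select the survivors each generation.
# `human_pick` is a stand-in for real participant voting; mutation here is toy-level.
import random

def recombine(parent_a: str, parent_b: str) -> str:
    """Crossover: splice the first half of one proposal onto the second half of another."""
    words_a, words_b = parent_a.split(), parent_b.split()
    return " ".join(words_a[: len(words_a) // 2] + words_b[len(words_b) // 2:])

def mutate(proposal: str, vocabulary: list[str]) -> str:
    """Mutation: swap one random word for another drawn from a shared vocabulary."""
    words = proposal.split()
    words[random.randrange(len(words))] = random.choice(vocabulary)
    return " ".join(words)

def evolve(seeds: list[str], human_pick, generations: int = 3, keep: int = 4) -> list[str]:
    """Seed with human proposals, let the machine vary them, let humans prune."""
    vocabulary = [word for seed in seeds for word in seed.split()]
    population = list(seeds)
    for _ in range(generations):
        offspring = [
            mutate(recombine(*random.sample(population, 2)), vocabulary)
            for _ in range(len(population))
        ]
        # Humans act as the fitness function: they keep the proposals they like best.
        population = human_pick(population + offspring, keep)
    return population

# Example with a trivial "human" that keeps the longest proposals each generation.
seeds = ["improve street lighting near the park",
         "plant more trees along the main avenue"]
longest = lambda pool, k: sorted(pool, key=len, reverse=True)[:k]
print(evolve(seeds, human_pick=longest))
```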