In the last two decades, deliberative democratic processes have played a role in changes to the national constitutions of both Ireland and Mongolia. Iceland even attempted to use deliberation methods to design a new national constitution from scratch. But can these same methods be used to establish a constitution for an AI chatbot?
A report published last week by AI developers Anthropic argues that they can. The tech company partnered with the Collective Intelligence Project (CIP), using the Polis platform to ask 1,000 Americans to “help us pick rules for our AI chatbot.” These rules were then compiled into a “constitution,” a set of principles governing the behaviour of a new experimental version of their chatbot Claude.
“We believe that our work may be one of the first instances in which members of the public have collectively directed the behaviour of a language model via an online deliberation process,” reads the announcement on Anthropic’s blog.
While they may have been one of the first, they are far from being alone in their ambitions. As voices calling for the public to have a say in the regulation of AI grow louder, several of the biggest players in the field of AI are getting ahead of the curve.
Democratic approaches to regulating AI
Less than a week after Anthropic and CIP made their announcement, the first report on the results of one of OpenAI’s “Democratic Inputs to AI” grants was published. Meanwhile, back in June, Meta announced that they would be running a “Community Forum” on generative AI, with the goal of “bringing people together to inform decision-making on generative AI.”
These projects, all of them experiments in their early stages, come at a time when public anxiety concerning the social impact of AI is rising, and governments around the world are attempting to find ways of regulating AI without stifling domestic innovation.
As democracies began tackling the issue earlier this year, initiatives like the EU’s AI Act and the US Senate’s AI Insight Forums made headlines. But these processes involved only legislators and lobbyists from within the industry. Photos of the first US event showed the likes of Sam Altman, Elon Musk and Mark Zuckerberg in discussion with US senators. What was clear was that the broader public did not play an active role.
A growing number of people are calling for this to change. So far, it is above all the private sector, not legislators, that is responding to these demands.
Among those calling for change is Aviv Ovadya, a research fellow at the newDemocracy Foundation. He is also the founder of the AI & Democracy Foundation, due to launch later this year, and sits on the advisory committee for OpenAI’s “Democratic Inputs to AI” project.
“AI issues can have global impact — and deliberative processes actually can work globally whereas other forms of democratic processes may not,” he told Democracy Technologies. As online deliberative processes become more sophisticated and simpler to administer, the potential is there to consult with a public that far transcends national borders.
Independently run online processes
It is crucial to note that the present wave of deliberative processes being run by private AI developers has no direct relation to the legislative process. Rather, these are preliminary experiments in using public inputs to help private companies shape their own sets of rules. Furthermore, the AI developers have been transparent about the fact that, for the moment, these are merely experiments – none have committed to abiding by the results.
Nonetheless, beyond explicitly adopting the vocabulary of deliberative democracy, these processes differ significantly from customer research focus groups in at least two ways. Firstly, they employ established methods and digital tools developed and used by the deliberative democracy community. And secondly, they hand control of the processes over to external organisations, drawing on the input of experts from the field of deliberative democracy.
Anthropic’s consultation was administered by the Collective Intelligence Project (CIP), a research and development lab which aims to establish new governance models for emerging technologies. CIP co-founder Saffron Huang confirmed to Democracy Technologies that her organisation retained editorial control over the entire process, in accordance with the organisation’s partnership principles.
The process employed the Polis platform, an established tool used by governments and organisations around the world to gather and analyse input from citizens.
Meta using Deliberative Polling®
Meanwhile, for their upcoming Community Forum on Generative AI, Meta will employ a methodology already used in December 2022 for their Community Forum on “Bullying and Harassment in the Metaverse.”
For this process, Meta handed the administration of the deliberative process to the Stanford Deliberative Democracy Lab (DDL) and the Behavioural Insights Team. Here too, Meta was required to commit in advance to allowing the results to be published, irrespective of the outcome.
The process used Deliberative Polling®, a method developed by James S. Fishkin of the Stanford DDL in 1988. It has been used in over 50 countries and jurisdictions worldwide. The Metaverse project saw a group of 6,300 people from around the world deliberating simultaneously in 23 languages using Stanford’s Online Deliberation Platform, a video discussion platform featuring AI moderators and analysis.
Risks and opportunities of deliberative processes initiated by big tech
There can be no question that the tech developers involved are holding themselves to high standards, in accordance with guidance from leading experts in the field. Nonetheless, even the most established methods of deliberation are far from uncontroversial. At Democracy Technologies, we have already raised the concern that these processes could end up as a distraction from the urgent need for legislation.
Ovadya is well aware of these kinds of objections. “The biggest risk is that this becomes a permanent form of democracy washing,” he says, before pointing out that the same risk applies to similar processes led by governments or civil society organisations. He adds: “this is very new to most tech companies and they need some time to get comfortable with these processes, and build some muscle in terms of integrating the outputs.”
Asked about the recent wave of online deliberations on AI, Jane Suiter told Democracy Technologies: “I can see the optimism. I can see the potential to scale, I can see the excitement, but I also worry about who’s setting the agenda.” A political scientist and professor at Dublin City University’s Institute for Future Media, Democracy, and Society, she is also one of the original architects behind the Irish Constitutional Convention and Citizens’ Assembly.
“It reminds me a bit of the beginning of social media”, she adds. “There were a lot of real tech utopians at the time telling us how it was going to allow us to connect, it was all going to be so good and so perfect. It took quite a few years and a lot of scholarship before the downsides became apparent. I’m concerned it could be the same.”
Transparency is key
We asked Suiter what advice she would give to those involved in running these processes. “I think they need to think really hard about the issues of independence and transparency,” says Suiter. “There needs to be an independent body who is setting the agenda and determining what the questions are.”
Transparency here means not only being open about who sets the questions, but also about who decides to hold the process at all, and when. Furthermore, one of the most important lessons learned in the deliberative democracy community in recent years has been the importance of being transparent upfront about how the results will be followed up on.
Finally, it also means educating the public on how these processes work, their proper scope, and their limitations. If these experiments continue to grow, managing public expectations could be one of the biggest challenges they face.
Watch this space
For the time being, these processes remain at the experimental stage. The preliminary results of OpenAI’s “Democratic Inputs to AI” project are gradually being published online. Meta have not yet made public any further details of the Community Forum on Generative AI. And new projects continue to be announced. Watch this space.