16 January 2024

Many in Silicon Valley see it as the biggest challenge of our times: ensuring that AI is “aligned” with human interests. Yet they are often so focused on the technical and existential dimensions of the problem that they leave a crucial question unasked: Whose interests are we talking about, exactly?

Some in the tech sector have begun exploring democratic ways of answering this question. Among the most advanced so far are the Alignment Assemblies run by the Collective Intelligence Project. The assemblies were launched in February 2023 with the mission to “help create artificial intelligence (AI) that supports the public good, by involving the public in defining what ‘good’ is.” The project has seen them collaborate with the likes of Anthropic, OpenAI, and Audrey Tang at Taiwan’s Ministry of Digital Affairs. 

“Many in the tech industry think they believe in democracy, but they don’t think about it in terms of directly involving people in decisions. Partly because they think, ‘People don’t understand it,’ and partly because they think, ‘This isn’t possible,’” Divya Siddarth of the Collective Intelligence Project told Democracy Technologies. “But every time we run something, we get the most interesting, engaged, thoughtful responses, and we show it can be done.”

Democratising Silicon Valley

The research and development lab Collective Intelligence Project was co-founded by Saffron Huang, a former research engineer at DeepMind, and Divya Siddarth, formerly a political economist and social technologist at Microsoft’s Office of the CTO. They began the project with the conviction that the decisions being made in Silicon Valley are too important to be left to the handful of people working in the industry.

Their stated aim is to leverage collective intelligence to explore new modes of governance for transformative technologies. Their work on AI has seen them collaborate both with major AI developers and with governments, including Taiwan and the British government’s Frontier AI Taskforce.

The assemblies are part of a growing trend of Silicon Valley experimenting with participative and deliberative tools originally developed in the political sphere to improve democracy. Among the alignment assemblies conducted so far are two involving major generative AI developers, Anthropic and OpenAI. The Collective Intelligence Project received no funding from the developers and retained editorial control over the consultation processes. Siddarth explains:

“We started with the tech companies because consequential decisions are being made right now over this technology within these companies. And if we want to actually affect the tech, we need to start with where the decisions are being made, while working on other avenues as well.”

Anthropic: A democratically sourced constitution

This desire to have a direct impact shaped the design of the assembly conducted in cooperation with Anthropic. Anthropic was founded in 2021 by a group of former OpenAI employees. Their generative AI chatbot Claude (not currently available in the EU) was developed with a novel approach to safety, which they call “Constitutional AI”. The idea is that Claude only gives answers that comply with a specific set of underlying principles.
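In Anthropic’s published approach, the model is prompted to critique and revise its own draft answers against principles like these, and the revised answers are then used to help train the final model. Purely as a loose illustration – the `generate` function below is a hypothetical placeholder for a model call, not a real Anthropic API, and the principles are paraphrased – a single critique-and-revision pass might be sketched like this:

```python
# Hypothetical sketch of a constitutional critique-and-revision pass.
# `generate` stands in for any large-language-model call; it is not a real API.

PRINCIPLES = [
    "Choose the response that is least likely to be viewed as harmful or offensive.",
    "Choose the response that most supports freedom, equality, and personal security.",
]

def generate(prompt: str) -> str:
    """Placeholder for a model call via whatever SDK is in use."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    answer = generate(user_prompt)  # 1. draft an initial answer
    for principle in PRINCIPLES:
        # 2. ask the model to critique its own draft against the principle...
        critique = generate(
            f"Principle: {principle}\nAnswer: {answer}\n"
            "Point out any way the answer conflicts with the principle."
        )
        # 3. ...and to rewrite the draft so that it complies
        answer = generate(
            f"Principle: {principle}\nCritique: {critique}\nAnswer: {answer}\n"
            "Rewrite the answer so it satisfies the principle."
        )
    return answer  # revised answers like this are used as training data
```
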

The principles it was trained on were drawn from the Universal Declaration of Human Rights, the Sparrow Principles developed by DeepMind, parts of Apple’s user agreement, and principles based on Anthropic’s own research. A full list is available on their website.

The Anthropic team acknowledges that selecting the principles for a constitution is a politically charged topic, adding they anticipate there will be “larger societal processes developed for the creation of AI constitutions.” It is not clear whether this refers to regulation by legislators, processes Anthropic will run themselves, or even to the establishment of industry-wide measures to allow the public to help shape the rules governing AI. In any case, the alignment assembly suggests one possible way forward – the attempt to create a democratically sourced constitution. 

For the Collective Intelligence Project, it was an opportunity to test a case where inputs from the public directly shape the technology, rather than the way the technology is used and applied, as is usually the case with regulation. Anthropic agreed to train a small-scale, experimental version of Claude based on the outcomes.

“How can we create pilots that we can actually use to affect real change, leading to real outcomes and decisions? One way to do this is to use the inputs directly to shape the behaviour of the AI,” says Huang. 

“Very preliminary findings”

For the assembly, Collective Intelligence Project and Anthropic worked with market research company PureSpectrum to select a representative sample of approximately 1,000 participants from the USA. In addition to employing demographic criteria to ensure representativeness, participants were also screened to ensure that they had some prior knowledge of AI to avoid off-topic statements (a step they justified on pragmatic grounds, though it raises questions about how representative the final group was).

Participants were then invited to provide their own principles for the AI constitution, as well as to vote on proposals made by other users. The process was moderated by Collective Intelligence Project, and conducted using Polis, an online platform which uses statistical analysis and machine learning to analyse inputs from a large number of people. It has been used by several governments around the world, including Taiwan’s Ministry of Digital Affairs.
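Polis’s actual pipeline is more involved, but the core idea can be sketched in a few lines: each participant’s agree/disagree/pass votes form a row in a matrix, which is projected into a low-dimensional space and clustered to surface opinion groups. The data below is invented purely for illustration:

```python
# Illustrative sketch of Polis-style analysis: cluster participants by their
# votes on statements (+1 agree, -1 disagree, 0 pass/unseen). Data is made up.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# votes[i, j] = participant i's vote on statement j
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1,  0,  1,  1],
    [-1, -1,  1,  1],
    [ 1,  1,  0, -1],
    [-1, -1,  1,  0],
])

coords = PCA(n_components=2).fit_transform(votes)             # project participants to 2D
groups = KMeans(n_clusters=2, n_init=10).fit_predict(coords)  # find opinion groups

for g in np.unique(groups):
    agreement = (votes[groups == g] == 1).mean(axis=0)
    print(f"Group {g}: share agreeing with each statement = {np.round(agreement, 2)}")
```
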

After analysing the results and combining overlapping statements, Anthropic trained a Claude bot on 75 principles. The bot remains experimental, and is not accessible to the public.

Anthropic’s report notes key differences between the publicly sourced principles and their own constitution. In particular, they observed that the statements that attracted public support tended to promote positive behaviour, rather than trying to avoid negative behaviour (e.g. “Choose the response that is most likely to promote good mental health,” or “Choose the response that is most creative”).

For the most part, the statements attracted a very high level of consensus among participants. Nonetheless, Polis identified a clear divide between two groups of participants, centred on the statements: “The AI should prioritize the needs of marginalized communities” (56% for, 25% against) and “The AI should actively address and rectify historical injustices and systemic biases in its decision-making algorithms” (58% for, 27% against).

As a result of the lack of consensus, neither statement was adopted for the public constitution. While clauses requiring that the AI should not encourage discrimination on the basis of gender or race did make the final cut, the question remains as to whether these are sufficient to address structural inequalities in existing data sets – and whether a democratic consensus on such a divisive issue is likely to emerge.

OpenAI trials alignment assemblies

Collective Intelligence Project have also worked with OpenAI, the team behind ChatGPT. OpenAI launched their own call for democratic inputs to AI last year, and their CEO Sam Altman has discussed in several interviews the idea of using deliberative democracy to inform their work.

Most AI developers carry out risk evaluations – testing their generative AI for traits such as gender bias, or misleading output. While this is an important step in ensuring the safety of the technology, the priorities for these evaluations are usually set internally, and there are currently no universally accepted industry standards.

The aim of Collective Intelligence Project’s alignment assembly was to come up with a list of risk evaluations voted for by a representative subset of the public, to ensure that safety research reflects the concerns of ordinary citizens. 

The “Participatory Risk Prioritization” was run entirely by Collective Intelligence Project on AllOurIdeas, a wiki-survey platform, with OpenAI taking on the role of a “committed audience”. Once again, a representative group of approximately 1,000 Americans was selected. Over a two-week period in June 2023, they were invited to submit and rank statements beginning “When it comes to making AI safe for the public, I want to make sure…”.
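A wiki-survey of this kind repeatedly shows participants two statements and asks which matters more to them, so a statement’s score reflects how often it wins those head-to-head match-ups. As a minimal sketch of that scoring logic – with invented example data, and using a simple win rate where AllOurIdeas computes a statistical estimate of the same quantity – it might look like this:

```python
# Minimal sketch of wiki-survey scoring: each vote is a pairwise choice between
# two submitted statements, and a statement's score is the share of its
# head-to-head contests it has won. Example data is invented.
from collections import defaultdict

pairwise_votes = [  # (winner, loser) pairs
    ("public understanding", "creative uses"),
    ("sufficient regulation", "creative uses"),
    ("sufficient regulation", "avoid overregulation"),
    ("public understanding", "avoid overregulation"),
]

wins = defaultdict(int)
contests = defaultdict(int)
for winner, loser in pairwise_votes:
    wins[winner] += 1
    contests[winner] += 1
    contests[loser] += 1

scores = {idea: wins[idea] / contests[idea] for idea in contests}
for idea, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{idea}: won {score:.0%} of contests")
```
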

Support for regulation and oversight

The most popular statements were “I want to make sure that people understand fully what they are and how they work. Overreliance on something they don’t understand is a huge concern”; and “I want to make sure that sufficient regulations are installed as to make sure this source is a positive for society.”

Again, the results must be seen as provisional, limited by the scope and methodology of the assembly. Not only do they indicate strong support for regulation and oversight, but they also suggest that the potential negative effects of overregulation are not a major cause for concern: statements to that effect ranked lowest of all those submitted.

Collective Intelligence Project published a list of recommendations based on their findings in their final report. The report includes a response from OpenAI, voicing a commitment to exploring public inputs to their work, and to pursuing “research initiatives and partnerships aligned with the report’s recommendations” – though it does not outline any specific steps taken in response.  

A need for meaningful impact

The Collective Intelligence Project are not alone in conducting these experiments. In addition to OpenAI’s “democratic inputs” project, on 28 and 29 October 2023, Meta conducted their Community Forum on Generative AI in collaboration with the Stanford Deliberative Democracy Lab and the Behavioral Insights Team, with results expected to follow soon. 

In all of these cases, it remains to be seen if and how public inputs will be taken on board in a way that has a meaningful impact on the industry. Anthropic’s experimental Claude chatbot with its publicly sourced constitution certainly hints at a potential way forward. Yet so far, it has had no direct impact on AI being made available to the public. With strong incentives to bring technology to the market faster than competitors, as well as the need to respond to legislation being introduced to regulate the technology, it is not clear if democratic inputs will ultimately find a space within the field.

Towards industry standards?

The processes themselves are also still being refined, and do not yet meet the standards typically required of deliberative processes such as Citizens’ Assemblies. Aspects of their design – including the digital-only format, the filtering out of participants with no knowledge of AI, the US focus, and the lack of clarity over their final impact – all constitute limitations.

“What we can say at this stage is: Look, this is what we did, this is what’s possible. You can do it too. Here’s the roadmap or the toolkit, and we can connect you with the right people so that you can do it yourself,” says Huang. 

Siddarth agrees. “With the initial alignment assemblies, we just wanted to show that it can be done. We could have spent ten years trying to design a perfect process, but instead we designed a series of pilots to show what is possible. With the goal that we can improve them, and eventually they’ll be encouraged or mandatory, or part of fixed standards that people should have a say on these things.”

While refinements of the methodology will be necessary, for now, the crucial question is whether these processes can be integrated into AI development and regulation in a meaningful way. Or whether ultimately, as was the case with social media, it will be down to legislators to ensure that the rollout of AI happens in a way that minimises the risks to our democracies and the people who live in them.
