“If the users are informed, users should be able to use AI chatbots in any way they like, including romantic relationships.”
In the room on the video conferencing call I’m observing, there’s a brief pause before the first participant speaks up.
The conversation starts with opposing views. The first speaker has no problem with it, as long as users are informed. They cite the film Her, starring Joaquin Phoenix as a lonely man going through a divorce who falls in love with an AI voice assistant.
Another participant replies that they find the whole idea creepy. On consideration, though, they add that it could be a less harmful alternative to prostitution. Several hands go up, and a conversation begins.
Community Forums on Generative AI
The group is taking part in Meta’s “Community Forum on Generative AI”. Democracy Technologies was invited to observe the event, which took place over the weekend of 28 and 29 October 2023. Using an online, AI-moderated platform, over 800 participants from the USA, Germany, Spain and Brazil were consulted on a range of issues raised by the increasing prevalence of AI chatbots.
Participants attended two online sessions, dedicated respectively to the topics ‘What principles should guide how AI chatbots offer guidance and advice?’ and ‘What principles should guide how AI chatbots interact with people, and help them interact with others?’
The forums are part of a broader trend that emerged in Silicon Valley in 2023 towards exploring ways of giving the public a say in how AI is developed and regulated. Meta’s forums, run in collaboration with the Stanford Deliberative Democracy Lab and the Behavioural Insights Team, are based directly on established models of democratic deliberation, typically used by governments or civil society organisations.
“As far as I know, no company has done this before with this particular model,” Kris Rose, Head of Governance Insights at Meta, told Democracy Technologies. The results have since been analysed and are expected to be published before the end of Q1 2024.
“While we will continue to iterate and figure out how we best bring these types of democratic inputs into our system, the commitment is very real,” adds Rose. “It can only be a good thing for the outputs of our innovations to be based on a more diverse and democratic set of inputs and perspectives.”
Deliberative polling
The Community Forums were devised with the aim of collecting public inputs on pressing questions concerning Meta’s products. They are part of the firm’s broader efforts to explore new ways of bringing outside actors into their decision-making processes.
The Generative AI event was Meta’s third Community Forum, and their second in partnership with Stanford’s Deliberative Democracy Lab, following a 2022 forum on Bullying and Harassment in the Metaverse.
Both of the collaborations with the Stanford team used Deliberative Polling to consult the participants. Developed in the late 1980s by James S. Fishkin, Director of the Deliberative Democracy Lab, deliberative polling has since been used in over 50 countries and jurisdictions around the world, and has even played a role in bringing about constitutional changes in Mongolia.
Gathering reflected opinions
In a deliberative poll, a random, representative sample of the population is selected and asked to complete a poll on a set of issues. They are then invited to take part in a discussion over the course of a weekend – either in person or online. After the deliberations, the participants are polled again, allowing researchers to track how attitudes have shifted in light of the discussions.
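The before-and-after polling structure also yields a simple quantitative measure: the mean shift in responses to each question. A minimal sketch of such a calculation follows; the function name and data layout are illustrative assumptions, not the Deliberative Democracy Lab's actual analysis code.

```python
def opinion_shift(pre, post):
    """Compute the mean shift per question between pre- and
    post-deliberation polls.

    `pre` and `post` map question ids to lists of numeric responses
    (e.g. agreement on a 0-10 scale), one entry per participant.
    Hypothetical helper for illustration only.
    """
    shifts = {}
    for question in pre:
        before = sum(pre[question]) / len(pre[question])
        after = sum(post[question]) / len(post[question])
        shifts[question] = after - before
    return shifts
```

A positive value indicates that support for a proposition grew over the weekend of deliberation; researchers compare these shifts across questions to see where discussion changed minds.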
The process is designed to draw out people’s reflected opinions on a given issue, rather than simply recording their knee-jerk responses. In his 2018 book of the same title, Fishkin calls it “democracy when the people are thinking”.
Prior to the event, participants are provided with extensive background materials – in this case, a 50-page information pack explaining the idea behind the forum, the basics of how AI chatbots work, and the topics to be discussed during the forum. The sessions on Generative AI lasted approximately three hours each, beginning with small group discussions with around 8-12 participants. On both days, this was followed by a plenary Q&A session, in which a panel of six experts in the field of AI responded to questions voted for by the groups.
The collaboration with Meta is the first time a deliberative poll has been initiated by a private company to collect opinions relevant to its products. The administration of the process itself was carried out by the team at Stanford.
AI-moderated online platform
The small group discussion sessions of the forums were conducted using the Stanford Online Deliberation Platform. Though some experts believe in-person deliberation produces better results, formats using digital platforms began gaining in popularity during the pandemic. Not only do they significantly reduce costs in comparison to in-person formats; they also permit deliberation on a global scale.
Developed by the Deliberative Democracy Lab and the Stanford Crowdsourcing Democracy Team, the platform is, from a user’s perspective, similar to most popular video conferencing tools. Participants were each assigned to a room of 8-12 people. When someone wanted to speak, they clicked a button to join a queue. Contributions were limited to 45 seconds to prevent anyone from dominating the conversation.
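The turn-taking mechanics described here – a button-press queue and a 45-second cap per contribution – can be sketched in a few lines. This is a hypothetical illustration of the logic; the class, method names, and nudge threshold are assumptions of mine, not the Stanford platform’s API.

```python
from collections import deque

class DeliberationRoom:
    """Sketch of the turn-taking logic described in the article:
    participants join a queue by pressing a button, each turn is
    capped at 45 seconds, and quiet members can be flagged for a
    moderator 'nudge'. Names and thresholds are illustrative."""

    TURN_LIMIT = 45  # seconds per contribution, per the article

    def __init__(self, participants):
        self.queue = deque()
        self.speaking_time = {p: 0 for p in participants}

    def request_turn(self, participant):
        # A participant may only hold one place in the queue at a time.
        if participant not in self.queue:
            self.queue.append(participant)

    def next_turn(self, seconds_spoken):
        # Pop the next speaker; clamp their credited time to the cap.
        speaker = self.queue.popleft()
        self.speaking_time[speaker] += min(seconds_spoken, self.TURN_LIMIT)
        return speaker

    def nudge_candidates(self, threshold=10):
        # Hypothetical heuristic: flag anyone who has spoken less
        # than `threshold` seconds so the moderator can nudge them.
        return [p for p, t in self.speaking_time.items() if t < threshold]
```

A queue like this is what makes fully automated moderation plausible: fairness rules that a human facilitator would enforce by judgment are reduced to a few mechanical constraints.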
In contrast to most other online deliberative processes, there were no human moderators in the rooms. Instead, AI moderation was used to guide the discussion.
A video explaining the Stanford Online Deliberation Platform
Missing the human touch
Essentially, this meant that each phase of the discussion was introduced by a pre-recorded video, after which a disembodied voice announced the topic of discussion.
The discussion was transcribed live, visible in a console accessible to administrators and observers. The AI moderator also flagged potentially offensive language, which participants had the chance to review. With administrator access, I could see full transcripts of the conversation, as well as a graphic indicating how long each participant had spoken, and when the quieter members of the group had been “nudged” by the moderator, encouraging them to contribute.
Not everything went smoothly. Above all, many participants didn’t realise that they could give way to the next speaker if they did not want to use their full 45 seconds – resulting in long stretches of silence where a human moderator could have intervened. And despite repeated nudges, some participants left their cameras switched off throughout the time I was observing, contributing nothing to the discussion.
Nonetheless, there were enough substantive discussions to suggest that AI-moderated platforms are likely to be sticking around. Even if improvements are necessary, Stanford’s platform points towards a potential future in which such events can be hosted at very low cost and with only a small team monitoring them.
Democratising big tech?
The Community Forums on Generative AI came at the end of a year which saw multiple calls for the development of AI to be democratised, both from the broader public and from within the tech community.
In his blog post announcing the Community Forum on Generative AI, Meta’s President of Global Affairs Nick Clegg discusses the “need to find ways to bring people into the process as they develop the guardrails that AI systems operate within.” With this in mind, he announced that Meta would be running “a Community Forum designed to generate feedback on the governing principles people want to see reflected in new AI technologies.”
AI may be a hot topic at the moment, but Meta has already been pursuing what it calls “new models of governance” for some time. In the same blog post, Clegg cites the Oversight Board as an example. Founded in 2020, the board acts as a kind of “supreme court” for content moderation. Funded via an irrevocable trust paid into by Meta, it is tasked with reaching independent rulings on controversial decisions about when content should be removed from Meta’s platforms, and when users should be banned or suspended.
“New models of governance”
The board’s members have included human rights lawyers, journalists, and a former prime minister. In some cases, a decision is handed over to the Oversight Board entirely. Meta has pledged to act upon these decisions, unless doing so would violate the law. In other cases, Meta can also ask the board for a recommendation, but is not bound by its decision.
The board has not been without critics. Last year, Meta drew criticism after failing to act on the board’s recommendation to suspend the account of Hun Sen, at the time Prime Minister of Cambodia, after he posted videos which included threats of violence against his opponents during an election.
In any case, it amounts to a significant experiment in the transfer of power, driven not by regulation but by the tech industry itself.
“Given Meta’s sizable impact on the intersection of technology and society, we started to think about how we might better innovate the way we relate to users. Part of that meant divesting power away from the company in the form of accountability through the oversight board,” explains Rose, who previously worked on the Oversight Board.
The Community Forums can be seen as a further experiment in opening up decision-making processes within Meta. Yet as with the Oversight Board, key questions remain unanswered: how the results will be acted upon, and who gets to set the agenda.
What happens next?
Extensive details of the feedback from participants in the Community Forums on AI are expected to be released by the end of Q1 2024, together with the overall report. In the small number of rooms I was able to observe, participants were largely enthusiastic about the experience. But there was also evidence of confusion among some as to why exactly they were being consulted, and what would happen with the results of the exercise. Can participants expect the results to have a direct impact on Meta’s own AI model, Llama 2, or otherwise to shape the company’s strategy?
“What happens next?” is a problematic issue even when it comes to public-sector deliberative processes. Usually, there is a formal agreement with the participants about how the results will be followed up on. In many cases, the government agrees to “consider” the recommendations made by the deliberators. This is no guarantee of any concrete action, and several processes have seen little or no follow-up. There is, however, the potential for the broader public to apply political pressure, and for a government to be held accountable for its failure to act.
In Meta’s case, the Community Forums remain in an experimental phase. As such, there was no prior agreement on any specific follow up. The company is currently reviewing the results internally before deciding on a course of action.
Experimental phase
As long as these processes remain experimental, they cannot be expected to meet all best-practice standards in the field of deliberative democracy. Nonetheless, the question remains as to how this democratic gap can be closed in future, especially since private companies are not accountable to the public in the way governments are.
“I think it’s a fair question,” says Rose. “A company stepping into this space is a new actor, and a lot of people value the promise that deliberative democracy has to improve our democratic institutions. So it’s totally understandable that they would want to see that a company is truly committed to using the results.”
One way of addressing the issue would be to commit to a specific response in advance of any future Community Forums. This would leave Meta facing the same balancing act as governments: reaching an agreement significant enough to be meaningful, without handing over an unacceptable degree of power to the participants.
Who asks the questions? Agenda setting
Along with the question of follow-up, one of the major challenges facing deliberative projects in general is that of agenda setting. Put simply: In a democratic discussion, it matters who formulates the questions, and who gets to decide if they get asked at all.
Canning Malkin is an independent researcher and founding member of the Global Citizens’ Assembly Network (GloCAN). One of the network’s recent research projects focused on agenda-setting in transnational and global citizens’ assemblies. She notes that the issue of agenda setting is a contentious one among projects in the public sector.
“Governmental bodies rarely ever approach deliberative processes as neutral or willing to broadly accept the proposals of the citizens; deliberation must fit into the political contexts and landscapes of the moment,” she told Democracy Technologies. “In this way, a ceiling has been set for the breadth and width of deliberation before experts and participants are even selected.”
These issues are made even more complex by the involvement of private firms. In the processes run by Meta, as well as by their competitors OpenAI and Anthropic, it matters a great deal what issues they decide to consult the public on, and when they do it. In Meta’s case, the discussion topics for the forums were set collectively by Meta’s generative AI team, along with the governance team and the team at Stanford.
“For me, agenda-setting is ‘problematic’ because it necessitates a discussion of who ultimately has power, and how those people want that power to be exerted and perceived within what is meant to be an inclusive, equitable, and innovative political process,” Malkin concludes.
Deliberation and the future of AI
It is unclear at this stage how Meta will act on the results of the Community Forums, or whether they intend to hold future events dedicated to the topic of AI. Other experiments in the area, including the alignment assemblies run by the Collective Intelligence Project, were also completed in 2023, though they too remain in an experimental phase. In any case, OpenAI’s recent announcement of a new team for democratic inputs to AI suggests that the topic is not going away, and could even become a new arena for competition between AI developers.
Observing the Forum made one thing abundantly clear: People want to have a say on the future of AI. They are concerned about its potential impact, and value the opportunity to discuss it. What remains to be seen is whether any actor in the field – be it a private firm, a civil society actor, or a government – can find a way for public inputs to meaningfully shape the development and regulation of generative AI.