12 January 2023

Democracy is about more than just making decisions. It also requires us to create and maintain a space for informed, constructive dialogue. And when it does come time to actually choose, how that choice is framed can make all the difference in the world.

Ensuring that sincere discussion and reliable information make it through the surrounding noise is vitally important.

Take Brexit, for example: a seemingly simple ‘Leave’ or ‘Remain’ choice. Yet the deliberation leading up to that point had been so loaded with misleading or downright false information that the actual decision reflected a reality almost entirely manufactured by the preceding discourse.

‘Kosmo’ AI Moderation Assistant

This is the problem being addressed by the team at Kosmo — a collaborative effort between Liquid Democracy in Berlin (creators of the ‘Adhocracy’ platform), the German Institute for Participatory Design and Heinrich Heine University in Düsseldorf.

“What we found in our platforms was a need for moderation,” says Marie-Kathrin Siemer, Managing Director of Liquid Democracy. “There is a need for something to detect quality contributions, because so often it’s just about not having bad comments, but the absence of bad comments doesn’t mean it’s a good discussion.”

What the partners share is an appreciation for the value and complexity of a healthy deliberative process, as well as the rapidly scaling difficulty of making sure it happens in larger projects.

“It’s about the recognition of quality content, because oftentimes in online discussions that high quality content goes unrecognised and perishes in the flood of other comments,” says Dr Marc Ziegele, Junior Professor for Political Online Communication at Heinrich Heine University, and one of the developers of Kosmo.

“There’s a lot of fear from people who would like to do online deliberation, but they are afraid of simply managing all the comments,” adds Siemer.

Criteria for Quality Deliberation

The approach taken by Kosmo has been to identify four criteria which can be used to evaluate online discussion.

The first criterion is ‘rationality’ — whether comments or contributions provide clear arguments, additional knowledge or coherent solution proposals. The second is ‘reciprocity’ — whether a commenter responds to others, takes up their ideas and considers them in their own contributions. The third is ‘civility’ — whether a comment shows respect, or at least refrains from saying uncivil things. Finally, the fourth criterion, sometimes subsumed under the first, is ‘constructiveness’ — whether the contribution helps move the discussion towards a compromise or a decision.
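As a thumbnail of how such a rubric might be represented in software, here is a minimal illustrative sketch. The 0-to-1 scale, the threshold and all the names are assumptions for the sake of the example, not Kosmo's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class DeliberationScores:
    """Illustrative only: one 0.0-1.0 score per criterion (the scale is an
    assumption; Kosmo's internal representation is not public)."""
    rationality: float       # clear arguments, added knowledge, coherent proposals
    reciprocity: float       # engages with and builds on others' contributions
    civility: float          # respectful, or at least free of uncivil language
    constructiveness: float  # moves the discussion towards compromise or decision

def is_quality_contribution(s: DeliberationScores, threshold: float = 0.6) -> bool:
    """A comment counts as 'quality' only if every criterion clears a
    (hypothetical) threshold; mere civility is not enough."""
    return min(s.rationality, s.reciprocity, s.civility, s.constructiveness) >= threshold

# A civil but shallow comment: civility alone does not make a good discussion.
print(is_quality_contribution(
    DeliberationScores(rationality=0.2, reciprocity=0.1, civility=0.9, constructiveness=0.3)
))  # False
```

The point of the minimum, rather than an average, is the one Siemer makes above: the absence of bad comments does not by itself make a good discussion.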

“Detecting these criteria with an algorithm is very difficult,” says Dr Ziegele. “We have to deal with written language, and written language is very complex, with a variety of ways things can be expressed.”

In the end, the team opted to focus on two criteria first — rationality and civility — and to use machine learning, together with large volumes of training data and lots of hard work from project assistants, to build out their prototype.

“We have over 20,000 comments from different cooperating and industry partners, which we then annotate manually. So we have the training data, and then our colleagues train machine learning algorithms to try and detect patterns,” says Dr Ziegele.
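In outline, this is a standard supervised text-classification workflow: annotate examples, extract features, fit a model. A minimal sketch of that general approach, with toy data standing in for the 20,000-comment corpus (Kosmo's actual models, features and annotation scheme are not public), might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the manually annotated corpus; the real project works
# with over 20,000 annotated comments from cooperating partners.
comments = [
    "Here is a source for my claim, plus a concrete proposal.",
    "You are all idiots and this whole process is a joke.",
    "I disagree, but your point about the costs is worth considering.",
    "Whatever. Nobody cares what you think.",
]
civility_labels = [1, 0, 1, 0]  # 1 = civil, 0 = uncivil (annotation scheme assumed)

# TF-IDF features plus logistic regression: a common text-classification baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, civility_labels)

print(model.predict(["Thanks for the data, that changes my view a bit."]))
```

Written language being as varied as Dr Ziegele describes, the hard part in practice is not the pipeline but the annotation effort and the long tail of expressions a simple baseline like this will miss.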

“It’s taken two and a half years to implement these two categories,” Dr Ziegele continues. “And they’re good, but far from perfect. There are follow-up research projects in which we want to implement stance detection and other automatic systems, so that users will be exposed to comments with a different stance. But what we are really interested in doing first is detecting constructive and rational comments. This would help moderators focus on less emotionally exhausting and more constructive tasks.”
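Stance detection, the follow-up Dr Ziegele mentions, is itself a classification task: predicting whether a comment argues for or against a given position. Purely as a hypothetical sketch of how such labels could diversify what participants see (the feature is only planned, and none of these names come from Kosmo):

```python
# Hypothetical use of stance labels to diversify a comment feed. Assumes an
# upstream classifier has already tagged each comment 'pro', 'contra' or
# 'neutral' with respect to the proposal under discussion.
comments = [
    {"id": 1, "text": "Great idea, let's fund it.", "stance": "pro"},
    {"id": 2, "text": "The costs are underestimated.", "stance": "contra"},
    {"id": 3, "text": "We need more data first.", "stance": "neutral"},
]

def counter_stance_feed(comments: list[dict], user_stance: str, k: int = 5) -> list[dict]:
    """Return up to k comments whose stance differs from the user's own,
    so readers encounter the other side of the debate."""
    return [c for c in comments if c["stance"] != user_stance][:k]

for c in counter_stance_feed(comments, user_stance="pro"):
    print(c["id"], c["text"])  # surfaces the 'contra' and 'neutral' comments
```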

Creating a Productive Environment

For Ilja Maiber of the Institute for Participatory Design, the project touches on an underappreciated dynamic in democracy — how people feel about the process.

“Whether analogue or digital, we pay a lot of attention to the process because depending on how things are going in the room, the effect will be different. Like if you had a discussion that felt good, you’re leaving the room differently than if you had a discussion where people were just fighting each other.

“We’re focused on the deep intrinsic motivation of people and the relationship between moderators and participants — how they can support each other and how we can create an environment that is conducive to good relationships.”

Overcoming those emotional bumps in the road seems to be a consistent challenge, whether it’s public frustration with deliberative processes, the stress of content moderation, or confronting stigma around artificial intelligence.

“I would say the biggest challenge is that there’s a lot of scepticism towards AI, particularly when it comes to social projects and democracy itself,” says Ricardo Lanari, Project Manager with Liquid Democracy.

“But we’ve put so much emphasis on transparency, and I think that has provided a good counterbalance.”

Ultimately, however, the real test will be putting it to work.

“Right now, we are starting to search for a project — to find out who could use Kosmo to test and provide a practical use case. And yeah, it’s challenging, partly I guess out of fear. There are still many anxieties around AI and debates around the moderation of participatory processes,” says Siemer.

In other words, before we can start implementing AI solutions on a large scale, there’s a need for public discussion, and a process of clarification. “Maybe we should test Kosmo with that,” adds Siemer.
