24 May 2023

The Democracy Technologies Convention in Warsaw on Thursday May 12 brought together two experts on AI from different corners to discuss the question of AI in politics.

Dagmar Monett is Professor of Computer Science (Artificial Intelligence, Software Engineering), head of the Computer Science Department and Co-Director of the M.Sc. programme on Digital Transformation at the Berlin School of Economics and Law (HWR Berlin).

Tim Gordon is a Co-Founder and Partner at Best Practice AI, a London-based advisory working with executives in corporates, startups, governmental and non-profit organisations to “accelerate the adoption of ethical, sustainable and value-building AI.”


AI and power

Both panellists brought a different set of expertise and background experience to the table, yet there was a surprising degree of consensus in their message.

In short – be careful.

“I think AI is one of the biggest threats to democracy we face in our lifetimes,” said Gordon, courting a bit of controversy. “AI is essentially a concentration of power in capital, and increasingly, with the state.”

The message was not that AI itself is a threat, but rather how we use it.

“If you think about the constant challenge – the battle between liberal democracy and autocracy – democracy has typically won because it’s delivered better economic and better outcomes for people in their lives,” he said. “In the age of AI however, if you take away privacy and provide more data to the system, it is very possible that an autocratic centralised system may deliver better outcomes for citizens than a liberal democracy.”

For Monett, there is another issue that plays into the power dynamics afforded by AI – one of dependence.

“When we use these models to generate anything we want, we are de-skilling humans,” she says. “When we get an answer from technology, that means a human doesn’t need to have that skill anymore.”

AI might represent a means to empower digital authoritarianism. However, that doesn’t mean it can’t empower democratic governance as well, provided appropriate protections are in place.

“We ration access to government services and public goods, essentially through time, legal expertise and knowledge,” Gordon adds. “If we can start thinking about how we can use AI to reverse that flow – to build tools and enable citizens to access the resources they need when they need them – then that is one very quick way I think we can massively increase outcomes for people.”

AI and people

Gordon identifies a range of areas where AI tools can be of assistance, not just for citizens, but also politicians themselves.

“If you go through what a politician does, there’s a whole series of tasks. They’re speakers, they’re debaters, they’re social workers, they’re strategists, they’re writers, they’re arbitrators, they’re fundraisers, they’re salespeople, they’re lawyers, they’re negotiators, they’re deliverers of leaflets,” says Gordon.

“For every single one of these tasks, there is a way in which AI can help make that job faster and more efficient.”

Yet while artificial intelligence has certainly demonstrated a capacity to streamline services for citizens and government officials alike, democratic politics is about more than simply service provision.

It’s about reflecting people’s opinions and values, and manifesting them in governance. In that regard, according to Monett, AI comes up well short.

“Machines only work with mathematical models with numbers,” she says. “We cannot implement values, ethical values, social values in machines because we need mathematical models to do so and there are no models for these things.”

“It is not a question of zeros and ones, it is much more complex than that.”

Those shortcomings don’t just affect the expression of citizens’ beliefs either. They also hinder the use of AI in the kinds of political communication mentioned above – communication which requires nuance and complex, contemporary understanding.

“A large language model that can talk about, or answer questions about what we are doing now, will be totally unusable because the data it was trained with is data from the past, from other situations and has nothing to do with the context,” says Monett.

Managing expectations

The key to all of this would seem to be simple – don’t buy into the hype, and always keep one hand on the wheel.

“The thing I really hate,” says Gordon, “is when you look at a picture in the media about AI, there is a little robot man. Some little robot thing doing something, often with a little finger pointing to the human and power is passing between the two.”

“It’s anthropomorphisation, and it’s an absolute travesty because it totally misinforms about what AI is. AI is not a replacement for a human being. It is a task, it’s a tool.”

Or put another way – “It is a lawnmower, it is not a gardener.”

Monett feels much the same.

“We don’t have models that model human intentions in order to put that data in the machine, so we cannot build a machine that models intentions,” she says. “It follows that computers do not have intentions and cannot be part of the complex human interactions we have in reality.”

Asked what she would recommend to politicians considering the use of AI-based tools, she warns, “Politicians, be very careful.”

“You will never have a tool to which you can delegate what you should do or what you could do with citizens in your daily job.”

Keeping informed

There is clearly a lot of buzz at the moment. And it can be difficult to see past it. Ultimately, they agreed that people in politics – and indeed anyone whose work is being impacted by AI – have an obligation to be informed. 

“Learn more about how AI works,” she says, “so that you understand how those tools work and so that you can, in a new situation, tell whether or not they can help you in doing your tasks.”

“But don’t substitute them for yourself. Because that way, they will always be either mediocre or incomplete.”

And, as always, ask questions.

“I’d obviously start with ‘what is it precisely that your tool is doing?’” says Gordon. 

“And then follow through on that – what data has it been trained on to do that? And where did that data come from?”
