13 July 2023

Disinformation has always been with us, but the conversation around it has been on the rise globally since early in the 2016 US election cycle, when then-candidate Trump began using “fake news” as an attack on legitimate media.

Ironically, those attacks (and the explosion of disinformation that came with them) were themselves ‘fake news’ and have forced a reckoning of sorts for political discourse and the media.

Now, with the advent of publicly and easily accessible AI-generated content, the struggle to combat disinformation, already spreading like wildfire, has been kicked into another gear. And with it, the threat to the integrity of democratic elections, already profound, has similarly escalated.

Flooding the information ecosystem

When it comes to text-based content, the main difference AI makes is one of volume. While Trump may have been striking at the wrong targets, ‘fake news’ certainly does exist. Misleading or outright false articles have been clogging up the internet and pouring through social media for as long as both have existed.

LLM (large language model) based generative AI applications like ChatGPT have simply reduced the time required to produce that content from ‘very little’ to ‘virtually zero’.

It should come as no surprise, then, that this technology has fed into the information warfare strategy of flooding an ecosystem with false or misleading content. That was already the modus operandi of adversarial intelligence agencies. Now we can add an ever-increasing number of private-sector actors the world over – AI content generation in the hands of organisations like Team Jorge.

This has made the question of how to fight ‘fake news’, already a subject of critical importance, a key pillar in the defence of democracy. Proposed solutions range from industry-approved quality marks and unprecedented transparency to relentless fact-checking and even AI itself.

As yet, no one answer seems to have caught on or addressed the issue in its entirety, and questions remain as to whether any ever will.

And ‘fake news’ now goes far beyond mass-produced articles and social media posts. A barrage of ‘deep fake’ images, audio and video, growing in intensity and sophistication, has been making the rounds, and the impact is already being felt.

Fake social media accounts and misleading footage in the Turkish election

Turkey’s recent election will be remembered for a number of reasons. For starters, it has far-reaching geopolitical consequences. Turkey is a key regional actor, and its leader, President Recep Tayyip Erdoğan, plays a divisive role – ostensibly a strategic partner, while showing a predisposition for authoritarianism and an often frustrating tendency to play the adversary.

With an ongoing conflict in Syria and a war in Ukraine now in its second year, it’s easy to see why the leadership contest in Turkey (a NATO member) was so closely watched. However, beyond geopolitics (or perhaps because of it), the election will also be remembered for the role played by tech-powered disinformation.

Not only was one of the main opposition leaders, Muharrem İnce, forced out of the race days before the election due to a video released online, purportedly showing him engaging in a compromising act with a fellow opposition candidate, but questionable videos were making the rounds all through the campaign.

In the realm of deep fakes, the alleged İnce fabrication stands as just one example among many. Teyit, a prominent fact-checking platform, has debunked numerous examples, most notably one wielded by President Erdoğan himself during a rally.

The video in question showcased a modified piece of campaign footage from the leader of the main opposition party, Kemal Kılıçdaroğlu, manipulated to portray members of the banned PKK (classified as a terrorist organisation by the US, Turkey and the EU) singing the opposition party’s anthem.

While not strictly a deep fake, Erdoğan was also seen using another video featuring Kılıçdaroğlu at a rally, this one cut together from a clip of the opposition leader encouraging citizens to vote and another featuring senior PKK commander Murat Karayılan.

There is no evidence of the two ever sharing a platform, suggesting the video was simply edited to mislead.

While Russia has repeatedly denied any involvement in disinformation campaigns in Turkey, reports claim that in the lead-up to the election, more than 12,000 Russian- and Hungarian-language Twitter accounts were reactivated as Turkish-language accounts and began following Turkish leaders and engaging in Turkish political discourse.

Just how significant an impact this all had on the outcome is unclear, but given how close the Turkish election ultimately was – with only a few percentage points in it – there’s a chance it could have made all the difference in the world.

Increasingly complex ‘deep fakes’ in the US presidential elections

Over in the US, the election race for 2024 has already begun – years in advance, as is usually the case.

Written or televised disinformation has been a key topic for the last several elections. Yet while in 2020 the concern around ‘deep fakes’ turned out to be less than justified, that was before the enormous boom in consumer AI tech – one which started with image generators and has since spread to audio and video.

Now the concern is fresh and somewhat more justified.

Let’s take a look at a few examples that have already affected the political landscape early in the season.

Back in February, a video began circulating supposedly showing President Biden giving a speech virulently hateful of transgender people. The video was of an actual speech; the audio was AI-generated.

In May, former President Trump shared a video through his Truth Social account that made similar use of fake audio to show CNN host Anderson Cooper saying that Trump had just finished “ripping” the network “a new a**hole”.

In June, the campaign team for GOP candidate Ron DeSantis shared a video featuring fake images of Donald Trump hugging former White House advisor Dr. Anthony Fauci, in an apparent move to discredit the former President amongst his voter base.

Not all of the generated material is intended to be taken as real. Sometimes it’s plainly a joke – such as when Donald Trump Jr. shared a video with candidate DeSantis deep-faked as Michael Scott from the TV show ‘The Office’.

Nevertheless, it has increased the complexity and potential impact of political humour, sometimes with unintended consequences – such as when journalist Eliot Higgins used an AI generator to create images of Donald Trump apparently being held down by New York police. The images were circulated among supporters of the former President.

The lines begin to blur, even more than they already had. And while we might all feel secure in our own ability to separate fact from fiction, that security is usually overestimated.

As presidential primaries progress and shift into full-blown election season come 2024, the role of fake articles, images, audio and video in tilting public opinion becomes an ever more pressing concern.

An ongoing test

Unfortunately, the ability of AI-driven tools to create increasingly sophisticated fakes is constantly improving, and at a remarkable pace. For example, AI image generators are rapidly overcoming their difficulty with rendering hands – a key detail that marked the images of Trump purportedly being arrested as fake.

For now, the best thing we can do is check with trusted sources, double check with multiple sources where possible and try to stay up to date.

This is not a problem that’s going away. And so of course it’s a problem that lots of people are working on. Adobe, for example, is leading the ‘Content Authenticity Initiative’ to establish an international open standard for image verification and the tracking of changes.

MIT also runs a research initiative called ‘Detect Fakes’, which allows anyone to participate and test their ability to spot false images, audio and video.

Perhaps you want to give it a go and see for yourself whether you can tell truth from fiction. 
