
Digital Security

As artificial images, videos and audio clips of real people come to the fore, the prospect of a firehose of AI-powered disinformation is a cause for growing concern.

Deepfakes in the global election year of 2024: A weapon of mass deception?

Fake news has dominated election headlines ever since it became a big story during the race for the White House back in 2016. But eight years later, there is an arguably bigger threat: a combination of disinformation and deepfakes that can fool even experts. Chances are that recent examples of election-themed, AI-generated content – including the images and videos that circulated in the run-up to Argentina’s presidential election, and doctored audio of US President Joe Biden – are a sign of what is likely to come on a larger scale.

With around a quarter of the world’s population heading to the polls in 2024, concerns are rising that bad actors could use disinformation and AI-powered fakery to influence the results, with many experts fearing the consequences of deepfakes going mainstream.

The deepfake disinformation threat

As mentioned, no fewer than two billion people are set to head to their local polling stations this year to vote for their preferred representatives and heads of state. With major elections scheduled in countries including the US, the UK and India (as well as for the European Parliament), this has the potential to reshape the political landscape and the direction of geopolitics for years to come – and beyond.

At the same time, however, misinformation and disinformation were recently ranked by the World Economic Forum (WEF) as the number one global risk of the next two years.

The challenge with deepfakes is that AI-powered technology is now cheap, accessible and powerful enough to cause harm on a massive scale. It democratizes the ability of cybercriminals, state actors and hacktivists to launch convincing disinformation campaigns and more ad hoc, one-off scams. This is part of the reason why the WEF recently ranked misinformation/disinformation as the biggest global risk of the next two years, and the number two current risk, after extreme weather. That’s according to 1,490 experts from academia, business, government, the international community and civil society consulted by the WEF.

The report warns: “Synthetic content will manipulate individuals, disrupt economies and overturn societies in many ways over the next two years…”


(Deep)faking it

The challenge is that tools like ChatGPT and freely accessible generative AI (GenAI) make it possible for a broader range of individuals to engage in the creation of deepfake-driven disinformation campaigns. With much of the hard work done for them, malicious actors have more time to work on their messaging and on amplification efforts to ensure their fake content is seen and heard.

In an electoral context, deepfakes can obviously be used to undermine voter confidence in a particular candidate. After all, it is easier to convince someone not to do something than the other way around. If supporters of a political party or candidate can be suitably swayed by fake audio or video, that is a sure win for rival groups. In some situations, rogue states may seek to undermine faith in the entire democratic process, so that whoever wins will find it difficult to govern with legitimacy.

At the heart of the challenge lies a simple fact: when people process information, they tend to value quantity and ease of understanding. That means the more content we see with a similar message, and the easier it is to understand, the higher the chance we will believe it. This is why marketing campaigns tend to consist of short, repetitive messages. Add to this the fact that deepfakes are becoming harder and harder to tell apart from real content, and you have a potential recipe for democratic disaster.

From theory to practice

Worryingly, deepfakes are likely to have an impact on voter sentiment. Take this recent example: in January 2024, a deepfake audio message of US President Joe Biden was circulated via robocall to an unknown number of New Hampshire primary voters. In the message, he apparently told them not to turn out, and instead to “save your vote for the November election.” The caller ID number displayed was also spoofed to make it appear as if the automated message had been sent from the personal number of Kathy Sullivan, a former state Democratic Party chairwoman who now runs a pro-Biden super PAC.

It is not difficult to see how such calls could be used to dissuade voters from turning out for their preferred candidate ahead of the presidential election in November. The risk is particularly acute in closely contested elections, where the shift of a small number of voters from one side to the other determines the outcome. With just tens of thousands of voters in a handful of swing states likely to decide the election, a targeted campaign like this could do untold damage. And to add insult to injury, since in the case above it spread via robocalls rather than social media, it is even harder to track or measure the impact.

What are tech companies doing about it?

YouTube and Facebook are said to have been slow to respond to some deepfakes intended to influence a recent election. That’s despite a new EU law (the Digital Services Act) that requires social media companies to clamp down on attempts to manipulate elections.

For its part, OpenAI said it will implement the Coalition for Content Provenance and Authenticity (C2PA) digital credentials for images created by DALL-E 3. The cryptographic watermarking technology – which Meta and Google are also testing – is designed to make it harder to fake images.
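To give a rough sense of how content provenance works, the sketch below binds a signed "manifest" to the exact bytes of an image, so any later edit breaks verification. This is not the actual C2PA design (which uses X.509 certificate chains and manifests embedded in the file itself); it is a minimal conceptual illustration using a shared secret and Python's standard library, with all names (`SIGNING_KEY`, `make_manifest`, `verify_manifest`) invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; real C2PA uses public-key certificates instead.
SIGNING_KEY = b"publisher-demo-key"

def make_manifest(image_bytes: bytes, creator: str) -> dict:
    """Create a provenance claim tied to the exact image bytes."""
    claim = {
        "creator": creator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the image is unmodified."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(image_bytes).hexdigest() == claim["sha256"])

image = b"\x89PNG...placeholder image bytes"
manifest = make_manifest(image, "example-generator")
print(verify_manifest(image, manifest))           # True: untouched image
print(verify_manifest(image + b"edit", manifest)) # False: pixels changed
```

The key point the sketch illustrates is that provenance credentials travel with the content and fail loudly when the content is altered; the hard, unsolved parts in practice are key distribution and the fact that metadata can simply be stripped.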

However, these are still baby steps, and there are reasonable concerns that the technological response to the threat will be too little, too late as election fever grips the world. Especially when fake audio or video spreads through relatively closed networks such as WhatsApp groups, or via robocalls, it is difficult to track and debunk quickly.

The theory of “anchoring bias” suggests that the first piece of information people hear is the one that sticks in their minds, even if it turns out to be false. If deepfakers reach swing voters first, all bets are off as to who the ultimate winner will be. In the age of social media and AI-powered disinformation, Jonathan Swift’s adage that “falsehood flies, and the truth comes limping after it” takes on a whole new meaning.
