
AI Bots and State-Sponsored Disinfo Threaten Online Democracy

AI bots outpace platform defenses, mimicking human activity to manipulate public opinion. Microsoft's 2024 findings highlight the urgent need to strengthen enforcement against automated manipulation.


Russia has been accused of leading disinformation campaigns targeting US political movements and European elections. These operations use bot farms to spread false narratives, as Microsoft revealed in 2024. Meanwhile, AI-driven bots continue to outpace platform defenses, threatening genuine online conversation and democratic processes.

In 2024, Microsoft uncovered a Russian disinformation campaign that spread a false story about US Vice President Kamala Harris. This is just one example of state-sponsored bot farms mimicking human activity on social media platforms. Such operations, often attributed to Russia, China, or Iran, manipulate public opinion and seek to influence elections.

AI bots can create the illusion of consensus, drowning out genuine conversations with sheer volume. They also fuel the 'liar's dividend': as fakes proliferate online, people begin to doubt authentic content as well. In 2024, automated bot traffic made up 51% of all web traffic, surpassing human activity online for the first time in a decade. In an experiment on Mastodon, participants misidentified AI bots in political discussions 58% of the time. Before the UK general election, 45 bot-like accounts on X spread divisive political content, reaching over 3 billion views.

The rise of AI-driven bots and state-sponsored disinformation campaigns poses a serious threat to democratic societies. As these operations grow more sophisticated, platforms and governments must strengthen enforcement against automated manipulation. Otherwise, trust in institutions and the integrity of online conversations will continue to erode.
