OpenAI says its tools are increasingly being used by cyber actors attempting to influence democratic elections across the globe.

In a 54-page report published Wednesday, the ChatGPT creator said that it’s disrupted “more than 20 operations and deceptive networks from around the world that attempted to use our models.” The threats ranged from AI-generated website articles to social media posts by fake accounts.

The company said its update on “influence and cyber operations” was intended to provide a “snapshot” of what it’s seeing and to identify “an initial set of trends that we believe can inform debate on how AI fits into the broader threat landscape.”

OpenAI’s report lands less than a month before the U.S. presidential election. Beyond the U.S., it’s a significant year for elections worldwide, with contests taking place that affect upward of 4 billion people in more than 40 countries. The rise of AI-generated content has led to serious election-related misinformation concerns, with the number of deepfakes that have been created increasing 900% year over year, according to data from Clarity, a machine learning firm.

Misinformation in elections is not a new phenomenon. It’s been a major problem dating back to the 2016 U.S. presidential campaign, when Russian actors found cheap and easy ways to spread false content across social platforms. In 2020, social networks were inundated with misinformation on Covid vaccines and election fraud.

Lawmakers’ concerns today are more focused on the rise in generative AI, which took off in late 2022 with the launch of ChatGPT and is now being adopted by companies of all sizes.

OpenAI wrote in its report that election-related uses of AI “ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts.” The social media content related mostly to elections in the U.S. and Rwanda, and to a lesser extent, elections in India and the EU, OpenAI said.

In late August, an Iranian operation used OpenAI’s products to generate “long-form articles” and social media comments about the U.S. election, among other topics, but the company said the majority of identified posts received few or no likes, shares or comments. In July, the company banned ChatGPT accounts in Rwanda that were posting election-related comments on X. And in May, an Israeli company used ChatGPT to generate social media comments about elections in India. OpenAI wrote that it was able to address the case in less than 24 hours.

In June, OpenAI addressed a covert operation that used its products to generate comments about the European Parliament elections in France, and politics in the U.S., Germany, Italy and Poland. The company said that while most social media posts it identified received few likes or shares, some real people did reply to the AI-generated posts.

None of the election-related operations were able to attract “viral engagement” or build “sustained audiences” via the use of ChatGPT and OpenAI’s other tools, the company wrote.
