Disclaimer: This story was originally published on May 31, 2024.
While artificial intelligence has brought gains in efficiency and productivity, AI tools can also be used to manipulate information and shape public opinion, an influence that risks creating echo chambers and polarizing societies.
Over the last three months, ChatGPT maker OpenAI identified such influence operations run by groups from Russia, China, Iran, and Israel that used its models for “deceptive activity” across the internet, the company said in a May 30 blog post.
The AI firm said the threat actors used its models to generate short comments and longer articles in multiple languages, as well as made-up names and bios for social media accounts.
The campaigns focused on different issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments.
OpenAI said such operations “attempt to manipulate public opinion or influence political outcomes” anonymously.
“So far, these operations do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services,” the Microsoft-backed AI company said.
It found that not all of the flagged operations relied exclusively on AI; some mixed AI-generated content with manually written texts or memes copied from across the internet.
The report comes a few days after OpenAI formed a committee to make recommendations to the board on safety and security decisions for the company’s projects and operations. CEO Sam Altman is among the board members leading the new group.