On Friday, OpenAI reported that it had removed accounts linked to an Iranian group known as Storm-2035 for misusing its ChatGPT chatbot in a campaign to influence the US presidential election and other issues. The operation generated content on a range of topics, including the US election candidates, the conflict in Gaza, and Israel's participation in the Olympic Games, and disseminated it through social media and websites to sway public opinion.
The investigation by OpenAI, which is backed by Microsoft, revealed that Storm-2035 used ChatGPT to produce both long-form articles and shorter social media posts. Despite these efforts, OpenAI found that the operation's content failed to gain significant audience engagement: most of the social media posts received little interaction, and the web articles were not widely shared.
OpenAI responded by banning the implicated accounts from using its services and stated that it would continue to monitor for any further policy violations. This move is part of the company’s broader efforts to prevent misuse of its AI tools for deceptive activities.
In early August, a Microsoft threat-intelligence report found that Storm-2035 operated through four websites posing as news outlets, which targeted US voter groups with polarizing content on topics such as the presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.
Previously, in May, OpenAI had disrupted five other covert influence operations that were attempting to use its models for deceptive purposes online. The company’s ongoing vigilance aims to prevent similar misuse of its AI technologies.