OpenAI shut down an Iranian influence op that used ChatGPT to generate bogus news articles

OpenAI said on Friday that it thwarted an Iranian influence campaign that used ChatGPT to generate fake news stories and social posts aimed at Americans. The company said it identified and banned accounts that generated content for five websites, in English and Spanish, posing as news outlets and spreading “polarizing messages” on issues like the US presidential campaign, LGBTQ+ rights and the war in Gaza.

The operation was identified as “Storm-2035,” part of a series of influence campaigns Microsoft identified last week as “connected with the Iranian government.” In addition to the news sites, OpenAI found “a dozen accounts on X and one on Instagram” connected to the operation. The company said the op didn’t appear to have gained any meaningful traction. “The majority of social media posts that we identified received few or no likes, shares, or comments,” it wrote.

OpenAI also noted that on the Brookings Institution’s Breakout Scale, which rates the impact of influence operations, the operation rated only a Category 2 (on a scale of one to six). That means it showed “activity on multiple platforms, but no evidence that real people picked up or widely shared their content.”

OpenAI described the operation as creating content for faux conservative and progressive news outlets, targeting audiences on both sides of the political divide. Bloomberg said the content suggested Donald Trump was “being censored on social media and was prepared to declare himself king of the US.” Another post framed Kamala Harris’ choice of Tim Walz as her running mate as a “calculated choice for unity.”

OpenAI added that the operation also created content about Israel’s presence at the Olympics and (to a lesser degree) Venezuelan politics, the rights of Latin American communities and Scottish independence. In addition, the campaign peppered the heavy stuff with comments about fashion and beauty, “possibly to appear more authentic or in an attempt to build a following.”

“The operation tried to play both sides but it didn’t look like it got engagement from either,” Ben Nimmo, an investigator on OpenAI’s Intelligence and Investigations team, told Bloomberg.

The busted dud of an influence op follows the disclosure earlier this week that Iranian hackers had targeted both Harris’ and Trump’s campaigns. The FBI said informal Trump adviser Roger Stone fell victim to phishing emails. The Iranian hackers then took control of his account and sent messages with phishing links to others. The FBI found no evidence that anyone in the Harris campaign fell for the scheme.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/openai-shut-down-an-iranian-influence-op-that-used-chatgpt-to-generate-bogus-news-articles-202526662.html?src=rss

