Meta says AI-generated content was less than 1 percent of election misinformation


Dec 3, 2024 - 19:30

AI-generated content played a much smaller role in global election misinformation than many officials and researchers had feared, according to a new analysis from Meta. In an update on its efforts to safeguard dozens of elections in 2024, the company said that AI content made up only a fraction of the election-related misinformation caught and labeled by its fact checkers.

“During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation,” the company shared in a blog post, referring to elections in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU’s Parliamentary elections.

The update comes after numerous government officials and researchers for months raised the alarm about the role generative AI could play in supercharging election misinformation in a year when more than 2 billion people were expected to go to the polls. But those fears largely did not play out — at least on Meta’s platforms — according to the company’s President of Global Affairs, Nick Clegg.

“People were understandably concerned about the potential impact that generative AI would have on the forthcoming elections during the course of this year, and there were all sorts of warnings about the potential risks of things like widespread deepfakes and AI-enabled disinformation campaigns,” Clegg said during a briefing with reporters. “From what we've monitored across our services, it seems these risks did not materialize in a significant way, and that any such impact was modest and limited in scope.”

Meta didn’t elaborate on just how much election-related AI content its fact checkers caught in the run-up to major elections. The company sees billions of pieces of content every day, so even small percentages can add up to a large number of posts. Clegg did, however, credit Meta’s policies, including its expansion of AI labeling earlier this year, following criticism from the Oversight Board. He noted that Meta’s own AI image generator blocked 590,000 requests to create images of Donald Trump, Joe Biden, Kamala Harris, JD Vance and Tim Walz in the month leading up to election day in the US.  

At the same time, Meta has increasingly taken steps to distance itself from politics altogether, as well as some past efforts to police misinformation. The company changed users’ default settings on Instagram and Threads to stop recommending political content, and has de-prioritized news on Facebook. Mark Zuckerberg has said he regrets the way the company handled some of its misinformation policies during the pandemic. 

Looking ahead, Clegg said Meta is still trying to strike the right balance between enforcing its rules and enabling free expression. “We know that when enforcing our policies, our error rates are still too high, which gets in the way of free expression,” he said. “I think we also now want to really redouble our efforts to improve the precision and accuracy with which we act.”

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-says-ai-generated-content-was-less-than-1-precent-of-election-misinformation-130042422.html?src=rss

