Meta needs updated rules for sexually explicit deepfakes, Oversight Board says
Meta’s Oversight Board is urging the company to update its rules around sexually explicit deepfakes. The board made the recommendations as part of its decision in two cases involving AI-generated images of public figures.
The cases stem from two user appeals over AI-generated images of public figures, though the board declined to name the individuals. One post, which originated on Instagram, depicted a nude Indian woman. The post was reported to Meta but the report was automatically closed after 48 hours, as was a subsequent user appeal. The company eventually removed the post after attention from the Oversight Board, which nonetheless overturned Meta’s original decision to leave the image up.
The second post, which was shared to a Facebook group dedicated to AI art, showed “an AI-generated image of a nude woman with a man groping her breast.” Meta automatically removed the post because it had been added to an internal system that can identify images that have been previously reported to the company. The Oversight Board found that Meta was correct to have taken the post down.
In both cases, the Oversight Board said the AI deepfakes violated the company’s rules barring “derogatory sexualized photoshop” images. But in its recommendations to Meta, the Oversight Board said the current language used in these rules is outdated and may make it more difficult for users to report AI-made explicit images.
Instead, the board says Meta should update its policies to make clear that it prohibits non-consensual explicit images that are AI-generated or manipulated. “Much of the non-consensual sexualized imagery spread online today is created with generative AI models that either automatically edit existing images or create entirely new ones,” the board writes. “Meta should ensure that its prohibition on derogatory sexualized content covers this broader array of editing techniques, in a way that is clear to both users and the company’s moderators.”
The board also called out Meta’s practice of automatically closing user appeals, which it said could have “significant human rights impacts” on users. However, the board said it didn’t have “sufficient information” about the practice to make a recommendation.
The spread of explicit AI images has become an increasingly prominent issue as “deepfake porn” has emerged as a more widespread form of online harassment in recent years. The board’s decision comes one day after the US Senate unanimously passed a bill cracking down on explicit deepfakes. If signed into law, the measure would allow victims to sue the creators of such images for as much as $250,000.
The cases aren’t the first time the Oversight Board has pushed Meta to update its rules for AI-generated content. In another high-profile case, the board investigated a maliciously edited video of President Joe Biden. The case ultimately resulted in Meta revamping its policies around how AI-generated content is labeled.

This article originally appeared on Engadget at https://www.engadget.com/meta-needs-updated-rules-for-sexually-explicit-deepfakes-oversight-board-says-100005969.html?src=rss