The DOJ makes its first known arrest for AI-generated CSAM


May 22, 2024 - 02:30
The US Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). As far as we know, this is the first case of its kind as the DOJ looks to establish a judicial precedent that exploitative materials are still illegal even when no children were used to create them. “Put simply, CSAM generated by AI is still CSAM,” Deputy Attorney General Lisa Monaco wrote in a press release.

The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of “producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16.”

The government says Anderegg’s images showed “nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men.” The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, an older version with fewer safeguards. Stability AI told the publication that the fork was produced by Runway ML.

According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of “minors lasciviously displaying their genitals.” To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement.

Anderegg could face five to 70 years in prison if convicted on all four counts. He’s currently in federal custody before a hearing scheduled for May 22.

The case will challenge the notion some may hold that CSAM’s illegal nature rests exclusively on the exploitation of real children in its creation. Although AI-generated digital CSAM doesn’t involve any live humans (other than the one entering the prompts), it could still normalize and encourage the material, or be used to lure children into predatory situations. This appears to be something the feds want to clarify as the technology rapidly advances and grows in popularity.

“Technology may change, but our commitment to protecting children will not,” Deputy AG Monaco wrote. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”

This article originally appeared on Engadget at https://www.engadget.com/the-doj-makes-its-first-known-arrest-for-ai-generated-csam-201740996.html?src=rss

