OpenAI says it can detect images made by its own software… mostly
We all think we’re pretty good at identifying images made by AI. It’s the weird alien text in the background. It’s the bizarre inaccuracies that seem to break the laws of physics. Most of all, it’s those gruesome hands and fingers. However, the technology is constantly evolving, and it won’t be long before we can’t tell what’s real and what isn’t. Industry leader OpenAI is trying to get ahead of the problem by creating a toolset that detects images created by its own DALL-E 3 generator. The results are a mixed bag.
The company says it can accurately detect pictures whipped up by DALL-E 3 98 percent of the time, which is great. There are, though, some fairly big caveats. First of all, the image has to be created by DALL-E and, well, it’s not the only image generator on the block. The internet overfloweth with them. According to data provided by OpenAI, the system only managed to successfully classify five to ten percent of images made by other AI models.
It also runs into trouble if the image has been modified. This didn’t seem to be a huge deal in the case of minor modifications, like cropping, compression and changes in saturation. In those cases, the success rate was lower but still within an acceptable range, at around 95 to 97 percent. Adjusting the hue, however, dropped the success rate down to 82 percent.
Now here’s where things get really sticky. The toolset struggled when used to classify images that underwent more extensive changes. OpenAI didn’t even publish the success rate in these cases, stating simply that “other modifications, however, can reduce performance.”
This is a bummer because, well, it’s an election year and the vast majority of AI-generated images are going to be modified after the fact so as to better enrage people. In other words, the tool will likely recognize an image of Joe Biden asleep in the Oval Office surrounded by baggies of white powder, but not after the creator slaps on a bunch of angry text and Photoshops in a crying bald eagle or whatever.
At least OpenAI is being transparent regarding the limitations of its detection technology. It’s also giving external testers access to the aforementioned tools to help fix these issues, as reported by The Wall Street Journal. The company, along with bestie Microsoft, has poured $2 million into something called the Societal Resilience Fund, which hopes to expand AI education and literacy.
Unfortunately, the idea of AI mucking up an election is not some faraway concept. It’s happening right now. There have already been AI-generated election ads and disingenuous images used this cycle, and there’s likely much more to come as we slowly, slowly, slowly (slowly) crawl toward November.