iOS 18.2 has a child safety feature that can blur nude content and report it to Apple
In iOS 18.2, Apple is adding a new feature that resurrects some of the intent behind its halted CSAM scanning plans — this time, without breaking end-to-end encryption or providing government backdoors. Rolling out first in Australia, the company’s expansion of its Communication Safety feature uses on-device machine learning to detect and blur nude content, adding warnings and requiring confirmation before users can proceed. If the child is under 13, they can’t continue without entering the device’s Screen Time passcode.
If the device’s onboard machine learning detects nude content, the feature automatically blurs the photo or video, displays a warning that the content may be sensitive and offers ways to get help. The choices include leaving the conversation or group thread, blocking the person and accessing online safety resources.
The feature also displays a message reassuring the child that it’s okay not to view the content and that they can leave the chat. There’s also an option to message a parent or guardian. If the child is 13 or older, they can still confirm they want to continue after receiving those warnings, along with a repeat of the reminders that it’s okay to opt out and that further help is available. According to The Guardian, it also includes an option to report the images and videos to Apple.
The feature analyzes photos and videos on iPhone and iPad in Messages, AirDrop, Contact Posters (in the Phone or Contacts app) and FaceTime video messages. In addition, it will scan “some third-party apps” if the child selects a photo or video to share through them.
The supported apps vary slightly on other devices. On Mac, it scans Messages and some third-party apps if users choose content to share through them. On the Apple Watch, it covers Messages, Contact Posters and FaceTime video messages. Finally, on Vision Pro, the feature scans Messages, AirDrop and some third-party apps (under the same conditions mentioned above).
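Apple doesn’t say exactly how those third-party apps plug in, but the most likely route is the SensitiveContentAnalysis framework it opened to developers in iOS 17. Treat the snippet below as an unofficial sketch rather than Apple’s actual implementation: the shouldBlur helper is hypothetical, and it assumes an app holds the framework’s entitlement and runs the analyzer on a photo the child has picked to share.

```swift
import Foundation
import SensitiveContentAnalysis

// Hypothetical helper: decide whether a picked image should be blurred
// behind a warning before it appears in a sharing flow. Analysis runs
// entirely on-device; nothing is uploaded.
func shouldBlur(imageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // If neither Communication Safety nor Sensitive Content Warning is
    // enabled on this device, the framework reports .disabled and the
    // app should skip analysis entirely.
    guard analyzer.analysisPolicy != .disabled else { return false }

    do {
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive
    } catch {
        // How to handle failures is a product decision; this sketch
        // simply shows the image unblurred if analysis can't run.
        return false
    }
}
```

The blur-and-warn interface itself is up to the app; the framework only answers whether the content looks sensitive.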
The feature requires iOS 18, iPadOS 18, macOS Sequoia or visionOS 2.
The Guardian reports that Apple plans to expand it globally after the Australia trial. The company likely chose the land Down Under for a specific reason: The country is set to roll out new regulations that require Big Tech to police child abuse and terror content. As part of the new rules, Australia agreed to a clause stating that compliance is only mandated “where technically feasible,” dropping any requirement to break end-to-end encryption and compromise security. Companies will need to comply by the end of the year.
User privacy and security were at the heart of the controversy over Apple’s infamous attempt to police CSAM. In 2021, it prepared to adopt a system that would scan for images of online sexual abuse, which would then be sent to human reviewers. (It came as something of a shock after Apple’s history of standing up to the FBI over its attempts to unlock an iPhone belonging to a terrorist.) Privacy and security experts argued that the feature would open a backdoor for authoritarian regimes to spy on their citizens, even in cases involving no exploitative material. The following year, Apple abandoned the feature, leading (indirectly) to the more balanced child-safety feature announced today.
Once it rolls out globally, you can activate the feature under Settings > Screen Time > Communication Safety and toggle the option on. The setting has been enabled by default since iOS 17.
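For developers who want to know whether that toggle is on for a given user, the same framework (again, our assumption about how it maps to the new feature) exposes a policy value that reflects the device’s settings. A minimal sketch:

```swift
import SensitiveContentAnalysis

// Minimal sketch: read the current analysis policy, which mirrors the
// device's Screen Time / Sensitive Content Warning settings.
let policy = SCSensitivityAnalyzer().analysisPolicy

if policy == .disabled {
    print("Sensitive-content analysis is off")
} else if policy == .simpleInterventions {
    // Typically the adult-facing Sensitive Content Warning setting.
    print("Show a brief warning before revealing flagged content")
} else if policy == .descriptiveInterventions {
    // Typically Communication Safety on a child's account.
    print("Show the fuller, child-oriented warning and help options")
}
```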