Apple is stepping up its efforts to combat the spread of child sexual abuse imagery. The company has confirmed to multiple media outlets that a report from the Financial Times is accurate: it will soon begin scanning your photos for such material.
As part of iOS 15, iPadOS 15, macOS Monterey, and watchOS 8, Apple will automatically scan pictures uploaded to your iCloud account and compare them against a large database of child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The scan is only initiated when your photos are uploaded to iCloud, so presumably, if you don't have iCloud backups turned on, it won't apply to you.
The scan is powered by a system that looks for known CSAM image hashes. If it finds hashes that match those stored in NCMEC's database, Apple will flag the photo and alert the authorities. According to Apple, these scans happen on-device, and the results are completely unreadable unless an alert is triggered. The company provides more insight into how the system works on a web page dedicated to its new child safety efforts.
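For anyone curious what hash matching looks like in practice, here's a rough sketch. To be clear, this is not Apple's implementation: Apple uses a perceptual hash (NeuralHash) plus a cryptographic matching protocol, while the plain SHA-256 digest, placeholder hash list, and file path below are assumptions purely to illustrate the matching step.

```swift
import Foundation
import CryptoKit

// Hypothetical set of known image hashes. Apple's real database comes from
// NCMEC and is stored in blinded form; these placeholder strings are not real.
let knownHashes: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    "b1946ac92492d2347c6235b4d2611184a9f1a1d3f7d1b1c4e7c1d1e1f1a1b1c1"
]

// Hex digest of the raw bytes -- a stand-in for a perceptual hash, which would
// also match resized or re-encoded copies of the same picture.
func imageHash(_ data: Data) -> String {
    SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

// An image is flagged only if its hash appears in the known-hash set;
// nothing else about its content is inspected.
func matchesKnownHash(_ data: Data) -> Bool {
    knownHashes.contains(imageHash(data))
}

// Example: check a photo before it would be uploaded (path is illustrative).
if let photo = FileManager.default.contents(atPath: "photo.jpg") {
    print(matchesKnownHash(photo) ? "hash match - flag for review" : "no match")
}
```

The key point the sketch shows is that matching compares fingerprints against a fixed list of known images; it isn't trying to judge what a new, never-before-seen photo depicts.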
Since an alert can ultimately bring in the police, the obvious question is false positives. Apple says a single erroneous match won't trigger an alert, which should help the company reach its stated goal of one false alarm per trillion users per year. That's an extremely small chance the cops come knocking on your door by accident. As an extra safeguard, alerts are also reviewed by Apple and the NCMEC before law enforcement learns anything.
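To see why requiring more than one match matters for that figure, here's some back-of-the-envelope arithmetic. The per-image error rate, the 30-match threshold, and the photo count are all assumptions for illustration; Apple hasn't published exact parameters in this announcement.

```swift
import Foundation

// Back-of-the-envelope numbers -- all assumptions, not Apple's published figures.
let perImageFalseMatchRate = 1e-6        // assumed chance one photo wrongly matches
let photosUploadedPerYear = 10_000.0     // assumed library size for one account
let threshold = 30                       // assumed number of matches before review

// Expected number of wrongly matched photos for one account in a year.
let expectedFalseMatches = perImageFalseMatchRate * photosUploadedPerYear

// Poisson estimate of the chance an account racks up `threshold` or more
// false matches -- i.e., the chance of a false alarm on the whole account.
func poissonTail(mean: Double, atLeast k: Int) -> Double {
    var term = exp(-mean)          // P(exactly 0 matches)
    var below = term
    for i in 1..<k {
        term *= mean / Double(i)   // P(exactly i matches)
        below += term
    }
    return max(0, 1 - below)       // guard against floating-point round-off
}

print("expected false matches per year:", expectedFalseMatches)
print("chance of crossing the threshold:", poissonTail(mean: expectedFalseMatches, atLeast: threshold))
```

With a single-match rule, the false-alarm rate would just be the per-image error rate; requiring dozens of independent matches drives it down to something vanishingly small, which is where a one-in-a-trillion style figure can come from.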
However, another major concern is privacy. Security experts have warned that Apple's auto-scanning technology could be misused, specifically by authoritarian governments such as China. Matthew Green, a cryptography professor at Johns Hopkins, raised this concern in a tweet.
Because of its sheer power over billions of devices, Apple could in theory push this feature well beyond its stated purpose, or let entities bigger than itself control it. However, this isn't the first time the company has had a feature like this: it has scanned iCloud email for child sexual abuse material for some time, similar to what Google does with Gmail. So unless Apple does overstep, you're more or less forced to trust it in the meantime.
As part of its child safety initiatives, Apple is also introducing a couple of new features to curb the spread of child sexual abuse material, starting with iMessage. If an Apple ID in your family group is set up for a child, iMessage will blur any image it detects as sexually exploitative, whether it's being shared or received, and ask whether the user really wants to view or send it. Opening the image will also send a notification to the parent on the account.
In addition, Apple is building safeguards into Siri and Search that warn users when what they're searching for is illegal or problematic, then point them to resources for easily filing a report or getting help with "at-risk thoughts."
All of these features will begin rolling out this fall with the next major updates to the iPhone, iPad, Mac, and Apple Watch. I'm definitely curious to hear everyone's opinions on them. Obviously, scanning your images without consent is a serious breach of trust, but it seems Apple will limit the feature strictly to known CSAM. Of course, with such powerful technology on so many devices, it only makes sense to ask whether Apple intends to keep its capabilities this limited in the future.