Apple defends its CSAM monitoring

Last week, Apple announced a major initiative to combat the spread of child abuse imagery. Beginning with the company’s next round of major software updates, Apple says devices from your iPhone to your Apple Watch will automatically scan your photos as they are uploaded to iCloud to see whether they match any hashes stored in a database of CSAM (child sexual abuse material). If a hash matches, Apple is notified and authorities are contacted.
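To make that matching step concrete, here is a minimal Swift sketch of the general idea. It is not Apple's implementation: Apple uses a perceptual "NeuralHash" and private set intersection, while this sketch uses a plain SHA-256 lookup, and the hash set is a hypothetical stand-in for the NCMEC-supplied database.

```swift
import Foundation
import CryptoKit

// Conceptual sketch only, not Apple's implementation. Apple's system uses a
// perceptual "NeuralHash" plus private set intersection; SHA-256 is used here
// purely as a stand-in so the matching step is concrete and runnable.

// Hypothetical stand-in for the hash database supplied by NCMEC and other
// child safety organizations (empty here; it would be populated on-device).
let knownCSAMHashes: Set<String> = []

/// Hashes a photo's raw bytes. A real perceptual hash tolerates resizing and
/// re-encoding of the image; a cryptographic hash like SHA-256 does not.
func hash(of photoData: Data) -> String {
    SHA256.hash(data: photoData)
        .map { String(format: "%02x", $0) }
        .joined()
}

/// Returns true if the photo's hash appears in the known-CSAM hash set.
func matchesKnownHash(_ photoData: Data) -> Bool {
    knownCSAMHashes.contains(hash(of: photoData))
}
```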

In the context of child porn, this is obviously a good thing. In the context of just about anything else, though, it sounds like a feature that could be weaponized by authoritarian governments that don’t want their citizens to share certain content. Many industry analysts have suggested that governments such as China could pressure Apple to expand the feature to cover content that doesn’t fit their agendas. That’s clearly a huge concern, but luckily, Apple appears to be taking a firm stance on the issue.

In a new FAQ document, Apple confirms it will refuse to accede to “any government’s request to expand [CSAM monitoring].” The full passage explains the company’s position in more detail.

Could governments force Apple to add non-CSAM images to the hash list?

Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

Apple also defends its new child protection feature in Messages, which can automatically blur sexually explicit photos on accounts belonging to children. Children 12 and younger can still choose to view such a photo, but an alert can be sent to their parents that they opened it. Those aged 13 to 17 can share and receive explicit photos after seeing a warning first, and their parents won’t be notified.

Does this mean Messages will share information with Apple or law enforcement?

No. Apple never gains access to communications as a result of this feature in Messages. This feature does not share any information with Apple, NCMEC or law enforcement. The communications safety feature in Messages is separate from CSAM detection for iCloud Photos — see below for more information about that feature.

Does this break end-to-end encryption in Messages?

No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.

This is an incredibly firm stance, one that Apple undoubtedly hopes will instill some confidence in those who are skeptical of the feature.

When news broke that Apple would be scanning your photos, it made sense to me that people would get upset. Who wants a Big Tech company snooping on the photos you take? That said, child pornography and its distribution are a serious problem, especially now that there are so many ways to encrypt and conceal it.

Apple’s documentation makes it clear that the company has no interest in learning what kinds of photos you take, in accessing other data on your phone, or in ever using this technology for anything beyond CSAM detection. The on-device hash matching is also fully encrypted, and Apple can’t see the results unless a photo’s hash matches one in the database provided by NCMEC and other child safety organizations.
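To make that gating concrete, here is a small, hypothetical sketch of the flow Apple’s FAQ describes: a hash match only flags a photo, and a human review step sits between any flag and a report to NCMEC. The types and function names below are illustrative assumptions, not Apple’s API.

```swift
import Foundation

// Hypothetical sketch of the review gate described in Apple's FAQ, not
// Apple's code. A hash match only flags a photo; human review sits between
// any flag and a report, and non-matching photos result in no report at all.

struct FlaggedPhoto {
    let id: UUID
    let matchedKnownHash: Bool
}

enum ReviewOutcome {
    case noAction        // no match, or the flag didn't survive human review
    case reportToNCMEC   // confirmed match after human review
}

func review(_ photo: FlaggedPhoto, humanConfirmsMatch: Bool) -> ReviewOutcome {
    guard photo.matchedKnownHash, humanConfirmsMatch else { return .noAction }
    return .reportToNCMEC
}
```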

This kind of language, paired with Apple’s strong emphasis on user privacy, makes it seem like there’s not much to worry about when it comes to Apple scanning your photos as they get uploaded to iCloud. Then again, this is a very fine line Apple is trying to walk. It’s applying “spying on you through your phone without your consent” to such an extreme case that you almost can’t argue with the intention behind it. Problems would only arise if the practice ever expanded to other kinds of content.

Whether that ever happens remains to be seen. Apple’s FAQ makes it sound like it never will, but nobody has a crystal ball. All we can do is sit back and see what happens.

I’ll continue to follow this story as more developments emerge.