In August 2021, Apple announced plans to add several features to its iPhone operating system (iOS) to help prevent the possession and dissemination of child sexual abuse material (CSAM). Among the proposals was a feature to be deployed on children’s iMessage accounts that would use a machine learning algorithm to scan all incoming and outgoing photos in a child’s messages for nudity. This feature came to be branded “Communication Safety” and was implemented in the United States as part of a routine iOS update in December 2021.
The public reaction to Communication Safety has been relatively subdued, in stark contrast to the outcry from privacy advocates and information security experts in response to Apple’s proposed “client-side scanning” feature. This Note argues, however, that despite this muted reception, Communication Safety also presents a meaningful risk to user privacy and security, constituting the early architecture of a backdoor into iMessage’s encryption, one that could theoretically be expanded with only a few technical modifications.
This Note discusses how U.S. law enforcement could attempt to use existing legal authorities to compel Apple to modify Communication Safety to search or surveil a suspect’s encrypted messages that otherwise would be beyond the government’s reach. While it is uncertain whether a court would ultimately issue such an order, Apple’s introduction of Communication Safety strengthens the government’s legal arguments in its longstanding effort to compel the company to assist with decrypting its users’ communications.