Facebook’s Former Security Chief Discusses Controversy Around Apple’s Planned Child Safety Features

Amid the ongoing controversy around Apple’s plans to implement new child safety features that would involve scanning messages and users’ photo libraries, Facebook’s former security chief, Alex Stamos, has weighed in on the debate with criticisms of multiple parties involved and suggestions for the future.

In an extensive Twitter thread, Stamos said that there are “no easy answers” in the debate around child protection versus personal privacy.

Stamos expressed his frustration with the way in which Apple handled the announcement of the new features and criticized the company for not engaging in wider industry discussions around the safety and privacy aspects of end-to-end encryption in recent years.

Apple was invited but declined to participate in these discussions, and with this announcement they just busted into the balancing debate and pushed everybody into the furthest corners with no public consultation or debate.

Likewise, Stamos said that he was disappointed in various NGOs, such as the Electronic Frontier Foundation (EFF) and the National Center for Missing & Exploited Children (NCMEC), for leaving little room for discussion in their public statements. The NCMEC, for example, called Apple employees who questioned the privacy implications of the new features “the screeching voices of the minority.” “Apple’s public move has pushed them to advocate for their equities to the extreme,” Stamos explained.

Stamos urged security researchers and campaigners who were surprised at Apple’s announcement to pay closer attention to the global regulatory environment, and speculated that the UK’s Online Safety Bill and the EU’s Digital Services Act were instrumental in Apple’s move to implement the new child safety features.

One of the basic problems with Apple’s approach is that they seem desperate to avoid building a real trust and safety function for their communications products. There is no mechanism to report spam, death threats, hate speech, NCII, or any other kinds of abuse on iMessage.

Expanding on this point, Stamos encouraged Apple to build a reporting system into iMessage, roll out client-side ML to prompt users to report abusive content, and staff a child safety team to investigate the most serious reports.

Instead, we get an ML system that is only targeted at 13 year-olds (not the largest group of sextortion/grooming targets in my experience), that gives kids a choice they aren’t equipped to make, and notifies parents instead of Apple T&S.

Stamos said that he does not understand why Apple is scanning for CSAM on-device unless encrypted iCloud backups are in the works, and warned that Apple may have “poisoned” opinion against client-side classifiers.

I also don’t understand why Apple is pushing the CSAM scanning for iCloud into the device, unless it is in preparation for real encryption of iCloud backups. A reasonable target should be scanning shared iCloud albums, which could be implemented server-side.

In any case, coming out of the gate with non-consensual scanning of local photos, and creating client-side ML that won’t provide a lot of real harm prevention, means that Apple might have just poisoned the well against any use of client-side classifiers to protect users.

Nevertheless, Stamos highlighted that by scanning for images with known CSAM matches, Facebook caught 4.5 million users posting child abuse images, and that this is likely only a fraction of the total number of offenders.
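The known-match scanning Stamos refers to works by comparing a fingerprint of each uploaded image against a database of fingerprints of previously verified abuse imagery. As a rough illustration only (this is not Apple’s or Facebook’s actual pipeline, the hash set is hypothetical, and a plain cryptographic hash stands in for the perceptual hashes such systems rely on to survive resizing and re-encoding), a minimal sketch in Swift might look like this:

```swift
import Foundation
import CryptoKit

// Hypothetical database of fingerprints for known, previously verified images.
// Real systems use perceptual hashes (e.g. PhotoDNA-style) rather than SHA-256,
// so that near-duplicates still match; SHA-256 is used here only to keep the
// sketch self-contained.
let knownImageHashes: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
]

/// Returns true if the uploaded image data matches a known fingerprint.
func matchesKnownImage(_ imageData: Data) -> Bool {
    let digest = SHA256.hash(data: imageData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return knownImageHashes.contains(hex)
}

// Example: check an upload before it is stored, and escalate matches for review.
let upload = Data("example image bytes".utf8)
if matchesKnownImage(upload) {
    print("Match against known database — escalate to trust & safety review.")
} else {
    print("No match.")
}
```

In the server-side deployments Stamos describes, this comparison happens after upload; the dispute around Apple’s approach is precisely that the matching would run on the user’s device instead.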
