Craig Federighi Tries to Clarify Apple’s Upcoming Child Safety Features wsj.com

Joanna Stern of the Wall Street Journal interviewed Craig Federighi about the two new child safety features Apple announced last week. You can watch the interview on YouTube.

Stern and Tim Higgins, Wall Street Journal:

Craig Federighi, Apple’s senior vice president of software engineering, in an interview emphasized that the new system will be auditable. He conceded that the tech giant stumbled in last week’s unveiling of two new tools. One is aimed at identifying known sexually explicit images of children stored in the company’s cloud storage service and the second will allow parents to better monitor what images are being shared with and by their children through text messages.

“It’s really clear a lot of messages got jumbled pretty badly in terms of how things were understood,” Mr. Federighi said. “We wish that this would’ve come out a little more clearly for everyone because we feel very positive and strongly about what we’re doing.”

I am not sure how much Federighi’s explanations are clarifying for those who conflate these features or do not understand their limitations. For example, in the context of saying that Apple distributes the same version of iOS everywhere so there would not be region-specific targeting — more on that later — he said (at 7:25 in the interview) that “it’s a single image across all countries”. I understand what a disk image is, but I think that phrasing is muddling for a general audience trying to understand how this CSAM detection technology scans their pictures.
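For what it is worth, the scanning in question is fingerprint matching against a fixed database, not analysis of what a photo depicts. Here is a minimal sketch of that idea; the hash value is made up, and an ordinary cryptographic hash stands in for Apple’s NeuralHash, which, unlike this stand-in, is designed to survive resizing and small edits. Apple’s real protocol also blinds the database and defers the verdict to the server, so treat this as illustration only:

```python
import hashlib

# Hypothetical fingerprint of a known image, as a child-safety
# organization might supply it. Real entries come from NeuralHash.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in only: SHA-256 changes completely if a single pixel
    # changes, whereas a perceptual hash like NeuralHash is designed
    # to stay stable across minor edits.
    return hashlib.sha256(image_bytes).hexdigest()

def matches_database(image_bytes: bytes) -> bool:
    # The check is plain set membership; nothing here interprets
    # what the photo actually shows.
    return fingerprint(image_bytes) in KNOWN_HASHES
```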

It is also striking how difficult it is for even a media-trained executive to clearly articulate these features. There are several moments in Stern’s interview where she pauses to explain, in layperson’s terms and with the help of some effective graphics, what is actually happening. I appreciate Stern’s clarifications and I understand them to be accurate, but I wish those words came from Apple’s own representative without needing interpretation. I think Apple’s representatives are still using too much jargon.

I am reassured by one of Federighi’s explanations, however. For background, here’s an interview with Apple’s privacy head Erik Neuenschwander by Matthew Panzarino of TechCrunch earlier this week:

One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well first, that is launching only for U.S. iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the U.S. when they speak in that way, and therefore it seems to be the case that people agree U.S. law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled. […]

I doubt that a singular global operating system means there cannot be country-specific hashes. Even excluding iOS’ spotty international feature availability, there are region-specific concessions made for political reasons. Many countries in the Middle East have blocked VoIP services, though some of those bans were eased in March last year. Russia requires that Apple prompt users to install locally-developed apps and, when I linked to that, I noted several other regional adjustments in China and elsewhere.

But even if it were impossible to target a hash list by country, a singular global operating system would still be concerning. If this feature were rolled out to more oppressive countries that required Apple to include hashes of non-CSAM images in its global database, that would mean accounts could be flagged in any region for including them. Yes, I know there is a human review step as well, but it is still unclear what that looks like and whether there is a possibility of coercion.
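To make that worry concrete, here is a toy sketch of the flagging step as I understand it. Apple has said human review happens only after some number of matches, but the threshold value and everything else specific below are assumptions:

```python
MATCH_THRESHOLD = 30  # hypothetical value, not a figure confirmed here

def exceeds_review_threshold(library_hashes: set[str],
                             shipped_database: set[str]) -> bool:
    # Every region receives the same database, so a non-CSAM hash
    # inserted into it could count toward matches for users anywhere.
    return len(library_hashes & shipped_database) >= MATCH_THRESHOLD
```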

Anyway, background of my worries aside, here’s what Federighi explained in the Journal’s interview that reassured me:

Critics have said the database of images could be corrupted, such as political material being inserted. Apple has pushed back against that idea. During the interview, Mr. Federighi said the database of images is constructed through the intersection of images from multiple child-safety organizations — not just the National Center for Missing and Exploited Children. He added that at least two “are in distinct jurisdictions.” Such groups and an independent auditor will be able to verify that the database consists only of images provided by those entities, he said.
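If that description holds, the shipped database is effectively a set intersection: a hash survives only if every participating organization independently supplies it. A toy sketch, with placeholder organization names and hash values:

```python
# Placeholder hash lists; the names and values are hypothetical.
org_us = {"hash_a", "hash_b", "hash_c"}
org_abroad = {"hash_b", "hash_c", "hash_d"}

# Only entries vouched for by both organizations ship in the OS.
shipped_database = org_us & org_abroad

assert "hash_a" not in shipped_database  # supplied by only one source
```

The appeal of that design is that a government leaning on a single organization cannot, by itself, insert a politically motivated image; it would have to compromise sources in distinct jurisdictions at once.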

It really does seem like Apple is doing all it can to keep this system’s scope narrow. I appreciate the risks inherent in the capability of local file scanning — even if it is only active on files being uploaded to iCloud — but I feel more assured that these databases really will contain only CSAM and nothing more.

These concerns could apply to all cloud storage providers, since all of the major ones check for CSAM-matching images, but it is interesting to me how much concern Apple’s approach has generated because of its on-device component. If this were an entirely cloud-based feature, I do not think there would be nearly as much anxiety, even though the systems are identical in their results. But because Apple is so focused on using on-device features for privacy reasons, it is requiring iCloud Photos users in the U.S. to sacrifice some control. I do not think it anticipated so much skepticism:

“It’s really clear a lot of messages got jumbled pretty badly in terms of how things were understood,” Mr. Federighi said. “We wish that this would’ve come out a little more clearly for everyone because we feel very positive and strongly about what we’re doing.”

Agreed.