"Messages got jumbled" —

Apple defends iPhone photo scanning, calls it an “advancement” in privacy

Discussing child-abuse detection, Federighi said Apple is still "leading on privacy."

Apple executive Craig Federighi speaks during the 2018 Apple Worldwide Developer Conference (WWDC) in San Jose, California. Credit: Getty Images | Justin Sullivan

Apple's decision to have iPhones and other Apple devices scan photos for child sexual abuse material (CSAM) has sparked criticism from security experts and privacy advocates—and from some Apple employees. But Apple believes its new system is an advancement in privacy that will "enabl[e] a more private world," according to Craig Federighi, the company's senior VP of software engineering.

Federighi defended the new system in an interview with The Wall Street Journal, saying that Apple is aiming to detect child sexual abuse photos in a way that protects user privacy more than other, more invasive scanning systems. The Journal wrote today:

While Apple's new efforts have drawn praise from some, the company has also received criticism. An executive at Facebook Inc.'s WhatsApp messaging service and others, including Edward Snowden, have called Apple's approach bad for privacy. The overarching concern is whether Apple can use software that identifies illegal material without the system being taken advantage of by others, such as governments, pushing for more private information—a suggestion Apple strongly denies and Mr. Federighi said will be protected against by "multiple levels of auditability."

"We, who consider ourselves absolutely leading on privacy, see what we are doing here as an advancement of the state of the art in privacy, as enabling a more private world," Mr. Federighi said.

In a video of the interview, Federighi said, "[W]hat we're doing is we're finding illegal images of child pornography stored in iCloud. If you look at any other cloud service, they currently are scanning photos by looking at every single photo in the cloud and analyzing it. We wanted to be able to spot such photos in the cloud without looking at people's photos and came up with an architecture to do this." The Apple system is not a "backdoor" that breaks encryption and is "much more private than anything that's been done in this area before," he said. Apple developed the architecture for identifying photos "in the most privacy-protecting way we can imagine and in the most auditable and verifiable way possible," he said.

The system has divided employees. "Apple employees have flooded an Apple internal Slack channel with more than 800 messages on the plan announced a week ago, workers who asked not to be identified told Reuters," the Reuters news organization wrote yesterday. "Many expressed worries that the feature could be exploited by repressive governments looking to find other material for censorship or arrests, according to workers who saw the days-long thread."

While some employees "worried that Apple is damaging its leading reputation for protecting privacy," Apple's "[c]ore security employees did not appear to be major complainants in the posts, and some of them said that they thought Apple's solution was a reasonable response to pressure to crack down on illegal material," Reuters wrote.

Phones to scan photos before uploading to iCloud

Apple announced last week that devices with iCloud Photos enabled will scan images before they are uploaded to iCloud. An iPhone uploads every photo to iCloud almost immediately after it is taken, so the scanning would also happen almost immediately if a user has previously turned iCloud Photos on.

As we reported, Apple said its technology "analyzes an image and converts it to a unique number specific to that image." The system flags a photo when its hash is identical or nearly identical to the hash of any image in a database of known CSAM. Apple said its server "learns nothing about non-matching images," and even the user devices "learn [nothing] about the result of the match because that requires knowledge of the server-side blinding secret."

Apple also said its system's design "ensures less than a one in one trillion chance per year of incorrectly flagging a given account" and that the system prevents Apple from learning the result "unless the iCloud Photos account crosses a threshold of known CSAM content." That threshold is approximately 30 known CSAM photos, Federighi told the Journal.
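To make the threshold mechanism concrete, here is a minimal sketch of threshold-based hash matching. It is not Apple's actual system, which uses NeuralHash perceptual hashes and a private set intersection protocol so that neither the device nor the server learns individual match results before the threshold is crossed; the hash type, database, and function names below are illustrative assumptions, with only the rough threshold of 30 taken from Federighi's comments.

```swift
import Foundation

// Illustrative sketch only: Apple's real system uses perceptual hashes and
// cryptographic blinding, so individual match results stay hidden until the
// threshold is crossed. Names and types here are assumptions.
let matchThreshold = 30  // approximate figure Federighi gave the Journal

/// Counts how many of an account's photo hashes appear in a database of
/// known CSAM hashes.
func countMatches(photoHashes: [UInt64], knownHashes: Set<UInt64>) -> Int {
    photoHashes.filter { knownHashes.contains($0) }.count
}

/// The account would be surfaced for review only once the number of
/// matching photos reaches the threshold, per Apple's description.
func accountCrossesThreshold(photoHashes: [UInt64], knownHashes: Set<UInt64>) -> Bool {
    countMatches(photoHashes: photoHashes, knownHashes: knownHashes) >= matchThreshold
}
```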

While Apple could alter the system to scan for other types of content, the company on Monday said it will refuse any government demands to expand beyond the current plan of using the technology only to detect CSAM.

Apple is separately adding on-device machine learning to the Messages application to power a tool that parents can optionally enable for their children. Apple says the Messages technology will "analyze image attachments and determine if a photo is sexually explicit" without giving Apple access to the messages. The system will "warn children and their parents when receiving or sending sexually explicit photos."
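As a rough illustration of what an on-device check like this could look like, the sketch below runs an image through a Core ML classifier via Apple's Vision framework. The model (loaded from a hypothetical modelURL), the "explicit" label, and the confidence cutoff are assumptions; Apple has not published the details of the Messages classifier.

```swift
import Foundation
import CoreML
import Vision

// Sketch of an on-device image check, assuming a hypothetical compiled
// Core ML classifier at `modelURL`. This is not Apple's actual Messages
// implementation; its model, labels, and thresholds are not public.
func isLikelyExplicit(_ image: CGImage, modelURL: URL) throws -> Bool {
    let model = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
    let request = VNCoreMLRequest(model: model)
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    guard let top = (request.results as? [VNClassificationObservation])?.first else {
        return false
    }
    // Treat the attachment as explicit only when the classifier is confident
    // about the assumed "explicit" label.
    return top.identifier == "explicit" && top.confidence > 0.9
}
```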

Apple said the changes will roll out later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey and that the new system will be implemented in the US only at first and come to other countries later.

Federighi: “Messages got jumbled pretty badly”

Federighi said that critics have misunderstood what Apple is doing. "It's really clear a lot of messages got jumbled pretty badly in terms of how things were understood," Federighi told The Wall Street Journal. "We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing."

The simultaneous announcement of two systems—one that scans photos for CSAM and another that scans Messages attachments for sexually explicit material—led some people on social media to say they are "worried... about having family photos of their babies in the bath being flagged by Apple as pornography," the Journal noted.

"In hindsight, introducing these two features at the same time was a recipe for this kind of confusion," Federighi said. "By releasing them at the same time, people technically connected them and got very scared: What's happening with my messages? The answer is... nothing is happening with your messages."
