Apple Inc announced on Thursday that it would implement a system to check photos on iPhones in the United States for matches with known images of child sexual abuse before they are uploaded to its iCloud storage service.
If enough matches with child abuse images are detected among a user's uploads, Apple will initiate a human review and report the user to law enforcement officials, the company said. Apple said the new system is designed to reduce false positives to one in one trillion.
With the new system, Apple is trying to address two imperatives: requests from law enforcement to help stem child sexual abuse, and the privacy and security practices that the company has made a core tenet of its brand. Other companies such as Facebook Inc use similar technology to detect and report child sexual abuse.
How does the system work? Law enforcement officials maintain a database of known child abuse images and translate those images into hashes - numerical codes that positively identify the image but cannot be used to reconstruct it.
Apple has implemented its own version of that database using a technology called NeuralHash, which is designed to also catch edited images that are similar to the originals. That database will be stored on iPhones.
When a user uploads an image to Apple's iCloud storage service, the iPhone will create a hash of the image to be uploaded and compare it against the database. Photos stored only on the phone are not checked, Apple said.
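To illustrate the general idea, here is a minimal sketch of hash-based matching. This is not Apple's NeuralHash, whose internals are not described in this article; the simple "average hash" below is a standard perceptual-hashing technique used here purely as an illustrative stand-in, and all function names and parameters are assumptions. The key property being illustrated is that similar images produce hashes that differ in only a few bits, so matching can tolerate small edits.

```python
# Illustrative sketch only - NOT Apple's NeuralHash. A simple perceptual
# "average hash": each bit records whether a pixel is brighter than the
# image mean, so small edits flip few bits and the hash stays close.

def average_hash(pixels):
    """Hash an 8x8 grayscale image (list of 64 ints, 0-255) into 64 bits."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_database(image_hash, known_hashes, threshold=4):
    """Flag an upload if its hash is within `threshold` bits of any entry."""
    return any(hamming_distance(image_hash, h) <= threshold
               for h in known_hashes)

# A slightly brightened copy of an image hashes close to the original,
# so it still matches the database entry.
original = [i % 256 for i in range(64)]
edited = [min(255, p + 10) for p in original]
db = {average_hash(original)}
print(matches_database(average_hash(edited), db))  # → True
```

In this toy scheme the database holds only hashes, matching the article's point that the numerical codes identify an image without allowing it to be reconstructed; the comparison itself can run entirely on the device before upload.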
The Financial Times highlighted some aspects of the program earlier.
One key aspect of the system is that Apple checks photos stored on phones before they are uploaded, rather than checking the pictures after they arrive on Apple's servers.
On Twitter, privacy and security experts expressed concerns that the system could potentially be expanded to scan phones more generally for prohibited content or political speech.
"Regardless of what Apple's long term plans are, they've sent a very clear signal. In their opinion, it is safe to build systems that scan users' phones for prohibited content," Matthew Green, a security researcher at Johns Hopkins University, wrote in response to the earlier reports. "Whether they turn out to be right or wrong on that point hardly matters. This will break the dam - governments will demand it from everyone."