Apple to monitor sexually explicit images on iPhone and iPad

Apple Inc. said it would launch new software later this year that will analyze photos stored in a user's iCloud Photos account for sexually explicit images of children and report instances to the relevant authorities. As part of the new child-safety measures, the company also announced a feature that will analyze photos sent and received in the Messages app to determine whether they are explicit. Apple is also adding features to its Siri digital assistant to intervene when users search for related abusive material. The Cupertino, California-based technology giant said on Thursday that the three new features would go into use later in 2021.

If Apple detects a threshold number of sexually explicit photos of children in a user's account, the instances will be automatically reported to the National Center for Missing and Exploited Children, or NCMEC, which works with law enforcement agencies. Apple said images are analyzed on the user's iPhone or iPad in the US before they are uploaded to the cloud.
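As a rough illustration of that threshold flow, the sketch below counts matches and only surfaces an account once a limit is crossed. The names (SafetyVoucherLedger, reportingThreshold) and the threshold value are invented for this example; Apple has not published its actual implementation or parameters.

```swift
import Foundation

// Hypothetical sketch of the account-level threshold described above.
// Names and the threshold value are illustrative, not Apple's API.
struct SafetyVoucherLedger {
    let reportingThreshold: Int        // matches required before any review
    private(set) var matchCount = 0

    // Record one photo that matched the known-CSAM hash database.
    mutating func recordMatch() {
        matchCount += 1
    }

    // Only once the threshold is crossed would the account be surfaced
    // for human review and a report to NCMEC.
    var shouldReport: Bool {
        matchCount >= reportingThreshold
    }
}

var ledger = SafetyVoucherLedger(reportingThreshold: 10)
ledger.recordMatch()
print(ledger.shouldReport)   // false until the threshold is reached
```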

Apple said it would detect child abuse material by comparing photos with a database of known Child Sexual Abuse Material (CSAM) supplied by the NCMEC. The company is using a technology called NeuralHash that analyzes images and converts them to a unique key, or hash number. The key is then compared against the database using cryptography. Apple said the process ensures it cannot learn about images that do not match the database.
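A greatly simplified sketch of hash-based matching follows. NeuralHash is a perceptual hash produced by a neural network, and Apple says the comparison happens under a cryptographic protocol so the device never sees the database in the clear; here a plain SHA-256 digest and an in-memory set stand in for both, purely to illustrate the idea of matching fingerprints rather than images.

```swift
import Foundation
import CryptoKit

// Compute a hex fingerprint of an image's bytes. A real perceptual hash
// (like NeuralHash) is robust to resizing and re-encoding; SHA-256 is not,
// and is used here only as a stand-in.
func fingerprint(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

// Hashes of known CSAM would be supplied by NCMEC; this value is fake.
let knownHashes: Set<String> = ["not-a-real-hash"]

// The device compares fingerprints, not the photos themselves.
func matchesKnownDatabase(_ imageData: Data) -> Bool {
    knownHashes.contains(fingerprint(of: imageData))
}
```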

The Electronic Frontier Foundation says that with the new tools, Apple is building a backdoor into its heavily advertised privacy protections for users.

'It's impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children,' the EFF said in a post on its website. 'As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger's encryption itself and open the door to broader abuses.'

Other researchers are likewise worried. 'Regardless of what Apple's long-term plans are, they have sent a very clear signal. In their opinion, it is safe to build systems that scan users' phones for prohibited content,' wrote Matthew Green, a cryptography professor at Johns Hopkins University, on Twitter.

Critics said the moves don't align with Apple's 'What happens on your iPhone, stays on your iPhone' advertising campaign. 'This completely betrays the company's pious privacy assurances,' wrote journalist Dan Gillmor. 'This is just the beginning of what governments will demand everywhere. If you think otherwise, you're naive.'

Apple said its detection system has an error rate of 'less than one in 1 trillion' per year and that it protects user privacy. Apple only learns about users' photos if they have a collection of known CSAM in their iCloud Photos account, the company said in a statement. Even in these cases, Apple learns only about the images that match known CSAM.
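The 'one in 1 trillion' figure is an account-level claim. A back-of-the-envelope calculation, using a per-image false-match probability p and a reporting threshold t that are assumptions rather than Apple's published parameters, shows why requiring several independent matches drives the account-level error rate down sharply:

\[
P(\text{account flagged in error}) \;=\; \sum_{k \ge t} \binom{n}{k} p^{k} (1-p)^{n-k} \;\approx\; \binom{n}{t}\, p^{t} \quad \text{for small } p,
\]

where n is the number of photos a user uploads in a year. For instance, with p = 10^-6, n = 10,000 and t = 5, the right-hand side is roughly 8 x 10^-13, under one in a trillion; these numbers are illustrative only.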

Any user who believes their account was flagged by mistake can file an appeal, the company said. To address privacy concerns about the feature, Apple published a white paper detailing the technology, as well as third-party analyses of the protocol from three researchers.

John Clark, President and Chief Executive Officer of NCMEC, praised Apple for the new features. 'These new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are circulated in child sexual abuse material,' Clark said in a statement provided by Apple.

The feature in Messages is optional and can be enabled by parents on devices used by their children. The system checks photos sent and received by children for sexually explicit material. If a child receives an image with sexual content, it will be blurred and the child will have to tap an extra button to view it. If they view the image, their parent will be notified. Likewise, if a child tries to send an explicit image, they will be warned and their parent will receive a notification.
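A minimal sketch of that opt-in flow is below, assuming an on-device classifier verdict is already available; the type and function names (ChildSafetySettings, handleIncomingImage) are invented for illustration and are not Apple's actual API.

```swift
import Foundation

// What to do with a received image on a child's device.
enum IncomingImageAction {
    case showNormally
    case blurAndWarn(notifyParent: Bool)
}

struct ChildSafetySettings {
    let enabledByParent: Bool      // the feature is opt-in, per child device
    let notifyParentOnView: Bool
}

// Decide the action from an on-device classifier verdict; message contents
// never leave the device for this check.
func handleIncomingImage(isExplicit: Bool,
                         settings: ChildSafetySettings) -> IncomingImageAction {
    guard settings.enabledByParent, isExplicit else {
        return .showNormally
    }
    // The image is blurred; the child must tap through to view it,
    // which may trigger a notification to the parent.
    return .blurAndWarn(notifyParent: settings.notifyParentOnView)
}
```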

Apple said the Messages feature uses on-device analysis and that it cannot view message contents. The feature applies to Apple's iMessage service and other protocols such as Multimedia Messaging Service (MMS).

The company is also rolling out two similar features for Siri and search. The systems will be able to respond to questions about reporting child abuse and abusive images and provide information on how users can file a report. The second feature warns users who search for material that is abusive to children. The Siri features are coming to the iPhone, iPad, Mac and Apple Watch, the company said.
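The search-side warning can be thought of as a simple on-device check against a list of flagged queries, as in the sketch below; the term list and names are invented for illustration, and Apple has not described how its matching works.

```swift
import Foundation

enum SearchIntervention {
    case none
    case warnAndOfferHelp   // explain the harm and point to reporting resources
}

// Purely illustrative list; Apple has not published its criteria.
let flaggedSearchTerms: Set<String> = ["example-flagged-term"]

func intervention(forQuery query: String) -> SearchIntervention {
    let normalized = query.lowercased()
    let isFlagged = flaggedSearchTerms.contains { normalized.contains($0) }
    return isFlagged ? .warnAndOfferHelp : .none
}
```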