Apple Indefinitely Pauses Controversial Photo-Scanning Tech For ‘Improvements’
By Mikelle Leow, 04 Sep 2021
Apple had planned a strict and highly controversial approach: scanning users’ devices for Child Sexual Abuse Material (CSAM). After much backlash, the company says it will put the feature on hold. The detection tool is still coming, though.
On Friday, the tech giant sent out a statement to news outlets that read: “Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material [CSAM]. Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
This comes after heated concerns about privacy invasion, voiced most audibly by 90 policy groups around the world, which banded together and penned an open letter to CEO Tim Cook expressing worries about potential government intervention and, ultimately, manipulation. “Once this capability is built into Apple products, the company and its competitors will face enormous pressure—and potentially legal requirements—from governments around the world to scan photos not just for CSAM, but also for other images a government finds objectionable,” they wrote.
Privacy and security advocates feared that, while Apple stressed the feature would apply only to CSAM, the company might eventually bow to government pressure to ban imagery of “human rights abuses, political protests, images companies have tagged as ‘terrorist’ or violent extremist content, or even unflattering images of the very politicians” in power.
“And that pressure could extend to all images stored on the device, not just those uploaded to iCloud,” the policy groups added. “Thus, Apple will have laid the foundation for censorship, surveillance and persecution on a global basis.”
The measure was intended for rollout as part of updates in iOS 15, iPadOS 15, watchOS 8, and macOS Monterey this year to help parents take on “a more informed role in helping their children navigate communication online,” but it will now be delayed indefinitely.
Apple did not elaborate on how it would “improve” the image detection tool, or where it would further “collect input” from.
The company previously explained that an image would only be flagged as abusive if its hash matched overlapping entries in the databases of two or more child protection agencies. It also clarified that, to prevent unfair and manipulative censorship, the organizations supplying these databases would not be controlled by the same government. An added layer of verification by human reviewers would prevent photos mistakenly flagged by the automated matching from being reported.
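The overlap requirement described above can be illustrated with a short sketch. This is not Apple’s implementation (the function name and data layout here are hypothetical); it simply shows the stated rule: a hash only enters the match set if it appears in the databases of at least two agencies under different governments.

```python
from itertools import combinations

def build_blocklist(agencies):
    """Hypothetical sketch of the overlap rule Apple described.

    agencies: list of (jurisdiction, hash_set) pairs, one per
    child-protection agency. A hash qualifies only when it appears
    in the databases of two agencies in different jurisdictions.
    """
    blocklist = set()
    for (jur_a, hashes_a), (jur_b, hashes_b) in combinations(agencies, 2):
        if jur_a != jur_b:                  # not the same government
            blocklist |= hashes_a & hashes_b  # hash present in both
    return blocklist

# Example: "h2" is shared by US and UK agencies, so it qualifies;
# "h1" appears in only one database, so it does not.
agencies = [
    ("US", {"h1", "h2"}),
    ("UK", {"h2", "h3"}),
]
print(build_blocklist(agencies))  # {'h2'}
```

Under this rule, no single agency (and no single government) can unilaterally insert a hash into the match set, which is the safeguard the company pointed to.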
[via Ars Technica, cover image via Framesira / Shutterstock.com]