Furthermore, the Electronic Frontier Foundation insists that it has already seen this mission creep in action: “one of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of ‘terrorist’ content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society.”
Experts have also pointed to fundamental design flaws in Apple’s proposed approach, noting that “Apple can trivially use different media fingerprinting datasets for each user. For one user it could be child abuse, for another it could be a much broader category”, thereby enabling selective content tracking aimed at targeted users.
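The mechanics of that concern can be sketched in a few lines. This is an illustrative toy, not Apple’s implementation: a cryptographic hash stands in for a perceptual fingerprint, and the user names and dataset contents are hypothetical. The point it demonstrates is that the client only ever sees opaque hashes, so nothing stops the server from handing each user a different, broader dataset.

```python
import hashlib

# Illustrative stand-in: real systems use a perceptual hash (e.g. NeuralHash),
# which tolerates small image edits. A cryptographic hash here just shows the
# matching flow, not the fingerprinting itself.
def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical server-side datasets: the client receives only opaque hashes,
# so it cannot tell whether a set targets abuse imagery or something broader.
DATASETS = {
    "user_a": {fingerprint(b"known-abuse-sample")},
    "user_b": {fingerprint(b"known-abuse-sample"),
               fingerprint(b"political-leaflet")},  # broader, targeted set
}

def scan(user: str, photos: list[bytes]) -> list[int]:
    """Return indices of photos whose fingerprints match the user's dataset."""
    targets = DATASETS[user]
    return [i for i, p in enumerate(photos) if fingerprint(p) in targets]

photos = [b"holiday-snapshot", b"political-leaflet"]
print(scan("user_a", photos))  # -> []
print(scan("user_b", photos))  # -> [1]: same photo, flagged only for user_b
```

Because the matching logic is identical for every user, swapping the dataset is invisible at the client: the same photo library produces different flags depending solely on which hash set the server supplies.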
The type of technology that Apple is proposing for its child protection measures depends on an expandable infrastructure that cannot be monitored or technically limited. Experts have repeatedly warned that the problem is not just privacy, but also the lack of accountability, the absence of technical barriers to expansion, and the failure to analyze or even acknowledge the potential for errors and false positives.
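The false-positive risk follows from how perceptual fingerprints work. The toy “average hash” below is a drastically simplified stand-in for real perceptual hashes such as NeuralHash or PhotoDNA, but it shares their defining property: distinct inputs can produce identical fingerprints. The pixel values are invented for illustration.

```python
# Toy "average hash": threshold each pixel against the image mean.
# Real perceptual hashes are far more sophisticated, but they likewise map
# visually distinct images to the same fingerprint in collision cases.
def avg_hash(pixels: list[int]) -> str:
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

a = [10, 200, 15, 220, 12, 210, 18, 205]  # one hypothetical "image"
b = [90, 140, 70, 160, 60, 150, 80, 155]  # a different one
print(avg_hash(a) == avg_hash(b))  # -> True: both hash to "01010101"
```

A collision like this is exactly an error or false positive: an innocuous image matching a flagged fingerprint. How often that happens at scale, and what happens to the user when it does, is the analysis critics say is missing.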