Matt Green breaks down (ahem) the popular misrepresentation of “homomorphic encryption” (i.e. cleartext) CSAM detection that Thorn espouses; HT Douwe Korff for the tl;dr

This short thread is a blinder; it is linked as best I can below, because NewTwitter no longer permits serious unrolls.

For those unfamiliar, Thorn is a noted, Ashton-Kutcher-founded critic of platforms that enable people to hold secure, private messenger conversations, on the basis that permitting people to have privacy leads to child abuse with, presumably, no positive outcomes.

They pitch their “solution” thusly:

Thorn’s CSAM Classifier is an incredible machine learning-based tool that can find new or unknown CSAM in both images and videos. When potential CSAM is flagged for moderator review and the moderator confirms if it is or is not CSAM, the classifier learns. It continually improves from this feedback loop so it can get even smarter at detecting new material.

https://www.thorn.org/blog/how-thorns-csam-classifier-uses-artificial-intelligence-to-build-a-safer-internet/

Short version (the long version is in this posting):

  • They are proposing to oblige/tell Signal how to write software (see diagram)
  • They are talking about using (ostensibly privacy-preserving) homomorphic encryption, but not actually using it in any way that would preserve privacy
  • Doing so is not exactly honest
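To see why the second point matters, here is a minimal sketch (my own toy illustration, not Thorn’s or Signal’s actual design) using the Paillier additively-homomorphic scheme. The crux of Matt’s critique: homomorphic encryption only preserves privacy if the party doing the computation cannot decrypt. If the scanning service holds the secret key, it reads the result in cleartext and the “encryption” buys the user nothing.

```python
# Toy Paillier cryptosystem -- illustration only, wholly insecure parameters.
from math import gcd
import secrets

def lcm(a, b):
    return a * b // gcd(a, b)

# Tiny demo primes (a real deployment would use ~2048-bit primes)
p, q = 1789, 1861
n = p * q
n2 = n * n
lam = lcm(p - 1, q - 1)        # secret key component
mu = pow(lam, -1, n)           # valid because we fix g = n + 1

def encrypt(m):
    # Enc(m) = (n+1)^m * r^n mod n^2, for random r coprime to n
    while True:
        r = secrets.randbelow(n - 1) + 1
        if gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# The homomorphic property: Enc(a) * Enc(b) decrypts to a + b
a, b = 42, 17
assert decrypt(encrypt(a) * encrypt(b) % n2) == a + b

# The privacy question is WHO holds (lam, mu). A server that both runs
# the homomorphic computation AND holds the decryption key learns the
# outcome in cleartext -- i.e. the scheme is "homomorphic" but not
# privacy-preserving in any sense that protects the user.
```

The design choice being glossed over in the pitch, in other words, is key custody, not the mathematics.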

Quoth Matt
