Every piece of CSAM shared online is another fire.
Let’s prove we can put one out.
Project Extinguish
THE PROBLEM
How can we disrupt the sharing and viewing of the worst known CSAM?
Especially in a world of end-to-end encryption?
Previous attempts to legislate for tech companies have stalled on hot-button issues that may not even be relevant here:
- We must not weaken encryption.
- We don’t want excessive surveillance.
- We cannot invade privacy.
THE IDEA
Take the cryptographic hash of a single, widely shared image of known CSAM that is illegal everywhere, bake it into the tools that open, edit, or view media on devices, and prevent that image from being opened.
HOW?
Identify common choke points in applications that handle images and ask developers to insert a check for that hash which refuses to open a matching file (a minimal sketch follows below).
- Open-source JPEG libraries, image editors, JavaScript components, whatever.
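To make this concrete, here is a minimal sketch of such a check, written as a Node-style TypeScript component. Everything in it is an assumption for illustration: the all-zero SHA-256 placeholder, the isBlocked and openImage names, and the idea that the check runs at a library’s open/decode entry point.

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hypothetical blocklist entry: the SHA-256 digest of the single
// consented image. This all-zero placeholder matches nothing real.
const BLOCKED_SHA256 =
  "0000000000000000000000000000000000000000000000000000000000000000";

// Exact-match check: hash the raw file bytes and compare.
function isBlocked(bytes: Buffer): boolean {
  return createHash("sha256").update(bytes).digest("hex") === BLOCKED_SHA256;
}

// A library's open/decode entry point would run the check before any
// decoding work; a matching file is simply refused.
export function openImage(path: string): Buffer {
  const bytes = readFileSync(path);
  if (isBlocked(bytes)) {
    throw new Error("This file cannot be opened.");
  }
  return bytes; // ...hand off to the real decoder here
}
```

A real integration would live inside the decoder itself rather than in a wrapper, but the shape is the same: hash, compare, refuse.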
Why only one image?
- Simple: let’s just test the idea. MVP.
- Consent: we must only use an image of a known survivor, with their informed consent.
- Fast: the lightest possible implementation.
- Testable: measure the efficacy of this idea.
Why use a cryptographic hash?
- There is no argument about invading privacy or snooping; it works more like a virus scanner.
- Unlike a perceptual hash, an exact cryptographic hash carries no risk of falsely blocking a legal image that merely looks similar (see the sketch below).
- It is quick to calculate and verify, and may already be calculated in the app.
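To illustrate the no-false-positives point above, here is a small sketch in the same hypothetical TypeScript setup: changing even one byte of the input yields a completely different SHA-256 digest, so only the exact file can ever match the blocklist.

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: Buffer): string =>
  createHash("sha256").update(data).digest("hex");

// Inputs differing by a single byte produce unrelated digests, so a
// resized, re-encoded, or merely similar image can never match;
// only the exact original file does.
const original = Buffer.from("example image bytes");
const tweaked = Buffer.from("example image byteZ");

console.log(sha256(original) === sha256(tweaked)); // false
```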
Why seek consent?
- Risk: what if trying to block this image causes it to gain more attention? The survivor must grant informed consent knowing the risks.
- Proactive: let’s do something now to prevent the spread. An image being online does not mean it’s there forever.
Why not report to the police?
(or alert the user, offer support, etc.)
- Fewer reasons for developers not to do it.
- Reduced code complexity.
- The goal is simply to prove we can disrupt the spread.
- Uncontroversial.