
OK. This is quite technical. But TL;DR: the NeuralHash system Apple uses for its CSAM detection has been confronted with its first possible collision, found by some good hackers. This dog might be marked by the system as suspicious. Ouch. Issue 1 at github.com/AsuharietYgvar/Appl

cc @aral FYI. This dog would be classified as possible CSAM by the Apple NeuralHash system. Do NOT save this on your iDevice.

@aral Now if we can find, say, 50 more absolutely non-CSAM pictures that trigger the system and put those in albums on many iDevices, the whole thing becomes rather useless.

@jwildeboer @aral as I wrote: "a kind of". I know it's not a DDoS in the true sense of the word.

@jwildeboer @aral as far as I understand the issue, only the hashes of these two pictures in the issue collide; there is no information on how they would be classified against the CSAM database.
According to Reddit, the hashes (of every picture on the device) are checked on Apple's servers only, so there is no way to know whether a hash is actually in the DB.
But yes, once hashes are public, one could generate colliding pictures quite easily.
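
To make "generate colliding pictures" concrete: once the model can be extracted (which is what the linked repo is about), a collision can be searched for by gradient descent on the input image. A rough sketch, assuming a differentiable model whose output logits are binarised by sign; the loss, bit handling and model loading are my own illustration, not the exact method from the issue:

```python
# Minimal sketch, NOT the exact method from the GitHub issue: search for an
# image whose perceptual hash matches a target bit pattern by gradient
# descent, assuming the hash model is differentiable and its output logits
# are binarised by sign. Model loading and bit width are left abstract.
import torch

def hash_bits(model: torch.nn.Module, img: torch.Tensor) -> torch.Tensor:
    """Binarise the model output into hash bits (1 where the logit is positive)."""
    return (model(img) > 0).float()

def find_collision(model, target_bits, start_img, steps=2000, lr=0.01):
    """Perturb start_img until its hash bits equal target_bits, or give up."""
    img = start_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        logits = model(img)
        # Push each logit towards the sign demanded by the target bit:
        # target bit 1 wants a positive logit, target bit 0 a negative one.
        loss = torch.nn.functional.softplus(-(2 * target_bits - 1) * logits).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            img.clamp_(0, 1)  # keep pixel values in a valid range
        if torch.equal(hash_bits(model, img), target_bits):
            return img.detach()  # collision found
    return None  # no collision within the step budget
```

Whether this converges depends on the step budget and how gently the image is constrained, but the point stands: it needs only the hash model and a target hash value, not access to the CSAM database itself.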

@jwildeboer @aral also, I always wondered why Apple said they will only alert if there are several matching files... which makes sense if they already knew that there would be hash collisions.
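
Rough back-of-envelope maths for why a threshold helps: under a simple binomial model, requiring several independent matches makes an accidental report far less likely than a single false match. The library size, false-match rate and thresholds below are invented for illustration, not Apple's numbers:

```python
# Back-of-envelope sketch: probability of crossing a match threshold under a
# simple binomial model. n, p and the thresholds are made-up illustrations,
# not Apple's actual figures.
from math import comb

def prob_at_least(n: int, p: float, t: int) -> float:
    """P(at least t false matches among n photos) = 1 - P(fewer than t)."""
    return 1.0 - sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k)) for k in range(t))

n, p = 10_000, 1e-6              # hypothetical photo-library size and false-match rate
print(prob_at_least(n, p, 1))    # roughly 1%: one accidental match is plausible
print(prob_at_least(n, p, 5))    # on the order of 1e-12: several at once are not
```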

@jwildeboer If we argue against technological shortcomings, we're fighting a losing battle. Tech will probably get better over time.

We need to fight this from a human rights perspective. Privacy, personhood, who owns your device. This kind of stuff.

@claudius Absolutely. But that is not an either/or question. You need both, IMHO: the technical arguments against this implementation AND the human-rights arguments about the complete erosion of privacy it introduces.

You have to agree it has a suspicious look in its eyes :-D
@caranmegil And it might be younger than 18 too. I can see why Apple would tag this :-D

@jwildeboer from its facial expression, I know what this 🐕 thinks of the system

[Image description] 

@jwildeboer a white and brown dog with a weird expression on his face.

@jwildeboer this means collisions are rather easy. The thing is, if you can then get the DB and generate innocuous pics that match the same hashes, you can flood a target's phone without having to actually use illegal content, with potentially interesting consequences. Denial of Privacy attacks... @aral
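
Sketched out, that chain would look roughly like this; every name and the threshold are hypothetical, and search_collision stands in for whatever collision generator one uses (for example the gradient search sketched earlier in the thread):

```python
# Hypothetical sketch of the flooding ("Denial of Privacy") chain described
# above. Every name and number here is made up; search_collision stands in
# for whatever collision search is available.
def plant_colliding_images(target_hashes, search_collision, seed_images, threshold=30):
    """Pair each known target hash with a harmless seed image, try to perturb
    the seed until its hash collides, and stop once more images than the
    assumed reporting threshold have been produced."""
    planted = []
    for target, seed in zip(target_hashes, seed_images):
        candidate = search_collision(target, seed)  # hypothetical collision search
        if candidate is not None:
            planted.append(candidate)
        if len(planted) > threshold:
            break
    return planted
```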

@jwildeboer Credit where it's due, though: the AI appears correct that the dog _is_ indeed suspicious .. probably they're looking straight at an Apple "privacy" advert...

@Jan Wildeboer @Matthias Eberl
Even worse: millions of hashes are not listed in the CSAM system, because a suspicious pic has a different hash when it differs in resolution, quality, file format etc.
It may contain only 2% of the abusive pics that exist in the wild and produce a lot of false negative matches.
All in all: it's just an argument for surveillance.
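
That claim is something one can at least probe with any perceptual hash: compare the hash of an image with the hash of a rescaled, recompressed copy. A toy sketch using a simple average hash (not Apple's NeuralHash, so it only illustrates the general behaviour; "dog.jpg" is a placeholder path):

```python
# Toy experiment, using a simple average hash rather than Apple's NeuralHash,
# to check how much a perceptual hash changes after rescaling and JPEG
# recompression. "dog.jpg" is a placeholder path.
import io
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Downscale to size x size grey pixels; each bit = pixel brighter than the mean."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

original = Image.open("dog.jpg")
buf = io.BytesIO()
original.resize((original.width // 2, original.height // 2)).save(buf, "JPEG", quality=60)
buf.seek(0)
reencoded = Image.open(buf)

# A small Hamming distance means the hash survived the re-encoding;
# a large one would support the false-negative argument above.
print(hamming(average_hash(original), average_hash(reencoded)))
```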