In internal memo, Apple addresses concerns around new Photo scanning features | Hacker News
Apple's mistake is that they seemingly believe there is pushback because people misunderstand how it works. The reality is more nuanced: people understand exactly how it works, and how it works is turn-key onboard spyware that Apple pinky-swears isn't being misused today.
For example, if the scope/mission expands (e.g. foreign governments), suddenly you've created a dragnet for whatever "badness" today's moral panic is about (e.g. terrorism after 9/11). Plus perceptual hashing, by its very design, is less precise than traditional cryptographic hashing.
A cryptographic hash + file size combo is unlikely to have a false positive within our lifetime (and it has been used successfully by multiple companies to combat CP). The interesting thing about a perceptual hash is that the closer the source material is to the banned material in terms of actual content (e.g. nudity), the more likely a false positive becomes.
Therefore, if Apple does mess up via a false positive and manually reviews your material, that material is more likely to be sensitive and private (involving consenting adults, not CP), because that is what the perceptual hashes are looking for similarities to.
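To make the contrast concrete, here is a minimal sketch of the two matching styles (hypothetical code, not Apple's implementation; the function names, the SHA-256 choice, and the Hamming-distance threshold are all assumptions for illustration):

    import hashlib

    def exact_match(image_bytes, known):
        # Cryptographic matching: flag only byte-identical files.
        # Flipping a single bit yields an unrelated digest, so accidental
        # false positives are effectively impossible.
        # known: set of (hex digest, file size) pairs
        digest = hashlib.sha256(image_bytes).hexdigest()
        return (digest, len(image_bytes)) in known

    def hamming(a, b):
        # Number of differing bits between two perceptual hashes.
        return bin(a ^ b).count("1")

    def perceptual_match(phash, known_phashes, max_distance=8):
        # Perceptual matching: the hash is derived from what the image
        # looks like, so similar-looking images produce nearby hashes.
        # Anything within max_distance bits counts as a match, which is
        # exactly where collisions with legal-but-similar content come from.
        return any(hamming(phash, k) <= max_distance for k in known_phashes)

Widening the distance threshold makes matching more robust to crops and re-encodes, and at the same time pulls more innocent-but-visually-similar images inside the match radius.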
> The server then uses the decryption key to decrypt the inner encryption layer and extract the NeuralHash and visual derivatives for the CSAM matches.
This "visual derivative" term shows up repeatedly. To me, the implication seems to be that Apple doesn't look at the actual suspected image before deciding whether to proceed with a report. Instead, I infer that they only verify whether (as the device reports) the image's neuralhash is indeed present in the NCMEC database. If my understanding is correct, their "manual review" process actually provides no protection at all against collisions or erroneous database entries.
Further supporting this, on page 4:
> Apple reviews each report to confirm there is a match
That refers only to confirming the match, not to whether the image appears to be illegal.
This makes perfect sense from Apple's perspective (who would want to be in the business of reviewing reports of probably-illegal images?), but it means the references to a manual review safeguard would seem to be false reassurance. Maybe I'm misunderstanding the paper.
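If that reading is right, the review step reduces to something like the sketch below (purely a guess at the logic implied by the paper's wording; none of these names or structures come from Apple). The reviewer confirms that the reported hash really is in the NCMEC-derived database, which guards against a misbehaving device but not against a perceptual-hash collision, since an innocent image that collides with a database entry passes the same check:

    def review_report(reported_neuralhash, csam_hash_db):
        # Confirm the device's claim: is this hash really in the known database?
        # An innocent image whose NeuralHash collides with a database entry
        # passes this check just as easily as actual CSAM would.
        return reported_neuralhash in csam_hash_db

    def handle_report(reported_neuralhash, csam_hash_db):
        # Note that nothing here evaluates whether the image itself is illegal.
        if review_report(reported_neuralhash, csam_hash_db):
            return "file report"
        return "discard"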