We don't have (and I doubt we will ever have) tools for distinguishing between real and AI-generated images with guaranteed 100% accuracy (and 0% false negative and false positive rates).
Given that, I don't see how you can allow AI-generated CSAM without effectively making "real" CSAM images unprosecutable.
So you think that currently, until this law is implemented, CSAM is effectively unprosecutable because people can just claim they generated the image with AI?
I think that there is a >0% probability that an individual case can be unprosecutable (or at least have the image evidence be much less useful) if the person in question actively starts generating CSAM using AI for the purpose of casting doubt on the legitimacy of any individual real image that the prosecutor wants to use as evidence.
The standard is beyond reasonable doubt, and I think that's going to become an increasingly difficult bar to clear if AI-generated versions (made either for their own case or as decoys) are allowed to remain legal.
Well they could ask the child in the photo...