Europe takes first step to banning AI-generated child sexual abuse images
https://www.reuters.com/business/europe-takes-first-step-banning-ai-generated-child-sexual-abuse-images-2026-03-13/
As long as you don't need actual CSAM material in the training data and the generated images are different enough from a real person (both of which seem technologically feasible), that seems to be a good thing.
Or is there any indication that availability of CSAM material actually increases the likelihood that people act on it later?
Given that, I don't see how you can allow AI-generated CSAM without effectively making "real" CSAM images unprosecutable.
The standard of proof is beyond a reasonable doubt, and I think that bar will become increasingly difficult to clear if AI-generated versions (whether made in their own right or as decoys) are allowed to remain legal.
(You need to sign both the models and the programs to make sure there's no img2img.)
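To make that signing idea concrete, here is a minimal sketch of what verification could look like, assuming detached Ed25519 signatures and the third-party Python "cryptography" package; the file names and the out-of-band key distribution are hypothetical:

    from pathlib import Path

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


    def verify_artifact(trusted_key: Ed25519PublicKey, artifact: Path, sig: Path) -> bool:
        """Return True iff sig holds a valid signature over the artifact's bytes."""
        try:
            trusted_key.verify(sig.read_bytes(), artifact.read_bytes())
            return True
        except InvalidSignature:
            return False


    # Hypothetical usage: refuse to generate unless both the model weights and
    # the inference program carry valid signatures from a trusted authority.
    # trusted_key = Ed25519PublicKey.from_public_bytes(raw_key)  # distributed out of band
    # allowed = (verify_artifact(trusted_key, Path("model.safetensors"), Path("model.sig"))
    #            and verify_artifact(trusted_key, Path("inference.py"), Path("inference.sig")))

Note that a signature like this only attests to the provenance of the artifacts at rest; it says nothing about what inputs were fed at generation time unless the signed program itself refuses them, which is exactly why both the model and the program would need signing to rule out img2img.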
You got a study showing that's how it works?
> Or is there any indication that availability of CSAM material actually increases the likelihood that people act on it later?
There's a decent amount of research showing that pornography consumption can escalate over time.
https://fightthenewdrug.org/how-porn-can-become-an-escalatin...
You can follow the references in this article.
Surely it's better to ask why people are looking for any kind of child abuse material, generated or not, and find ways to help them.
As much as it's reasonable to worry about moral panic, one might also worry about moral complacency.
That being said, I don't know whether the availability of CSAM would increase or decrease real-world abuse.