As long as you don't need actual CSAM in the training data and the generated images are sufficiently different from any real person (both of which seem technically feasible), that seems like a good thing.
Or is there any indication that the availability of CSAM actually increases the likelihood that people later act on it?
Given that, I don't see how you can allow AI-generated CSAM without effectively making "real" CSAM images unprosecutable.
The standard is beyond a reasonable doubt, and I think that bar will become increasingly difficult to clear if AI-generated versions (made either for their own sake or as decoys) remain legal.
(You would need to sign both the models and the programs to prove there was no img2img step.)
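A minimal sketch of what that attestation check might look like, assuming a hash-based manifest stands in for real cryptographic signatures; all names and the manifest format here are illustrative assumptions, not any existing scheme:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a model or program artifact."""
    return hashlib.sha256(data).hexdigest()

def verify_artifacts(manifest: dict[str, str], artifacts: dict[str, bytes]) -> bool:
    """Return True only if every artifact's digest matches the pinned manifest.

    In a real system the manifest itself would carry a cryptographic
    signature; here the digests alone stand in for that.
    """
    return all(
        sha256_of(artifacts.get(name, b"")) == digest
        for name, digest in manifest.items()
    )

# Hypothetical artifacts: the model weights and the generator binary.
model = b"model-weights"
program = b"generator-binary"
manifest = {"model": sha256_of(model), "program": sha256_of(program)}

# Unmodified artifacts pass; a tampered model (e.g. one patched to
# accept img2img inputs) fails the check.
print(verify_artifacts(manifest, {"model": model, "program": program}))      # True
print(verify_artifacts(manifest, {"model": b"patched", "program": program})) # False
```

The point of pinning both artifacts is that signing the model alone is not enough: an unmodified model driven by a patched program could still be fed a real photograph as an img2img input.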