The "bad actor" can now be an "ignorant employee running AI agents on their laptop".
Threats from incompetence or ignorance will be multiplied by 'X' over 'Y' years as AI proliferates. Unsupervised AI agents and context poisoning will let things spiral out of control in any environment.
I'm interested in how this plays out for AI-generated/assisted documentation, and in that documentation being recycled, alongside the source code, back into the models.
Almost like defense in depth is key to good security. GP is ignoring that a truffle defense is only good until the first person is tricked.