I wrote about that recently: [1] One of the ways code will be valued in the AI era is the extent to which it has had contact with the real world. It doesn't matter how smart the AI is; the real world is always more perverse and complicated, and until its code has been tested against the real world you can't really trust it. (Even if we get superhuman AIs in the future, those same superhuman AIs will be producing superhuman amounts of new code that your AI has to interact with, and no single AI will be able to overpower all the superhuman output in that world without testing.)
In practice, even with much better AIs, this would still be a pretty big risk. The testing you'd need would be extensive.
[1]: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/
Absolutely true, but there is a silver lining:
When people who rewrite open source libs with a bot come crying to maintainers that their rewrites have bugs, and would like someone to fix said bugs for free, absolutely no one will feel obligated to help them out.
Eh, I think part of the joke is that LLMs have gobbled up the original source code, and if you guide them enough (identical type signatures and specs), they will output the same code. It's the copyright laundering problem.
Let's not spam HN with AI slop, please.