I wrote about that recently: [1] One of the ways code will be valued in the AI era is the extent to which it has contact with the real world. It doesn't matter how smart the AI is; the real world is always more perverse and complicated, and until its code has been tested against the real world you can't really trust it. (Even if we get superhuman AIs in the future, those same superhuman AIs will be producing superhuman amounts of new code that your AI has to interact with, and a single AI won't be able to overpower all the superhuman output in that world without testing.)

In practice, even with much better AIs, this would still be a pretty big risk. The testing you'd need would be extensive.

[1]: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/