This is not correct. They can say "sorry", which makes them as accountable as an ordinary developer.
Ordinary developers get fired for poor performance *all the time*.
LLM-based solutions don't need to stay dry and warm at night, with a full belly, possibly with the sexual partner with whom they have a drive to procreate.
I've found recent versions of Claude and Codex to be reluctant in this regard. They will recognise the problem they created a few minutes ago, but often behave as if someone else did it. In many ways that's true, though, I suppose.
Does it do this for really cut-and-dried problems? I've noticed that ChatGPT will put a lot of effort into (retroactively) "discovering" a basically-valid alternative interpretation of something it said previously, if you object on good grounds. It's as if it's trying to evade admitting that it made a mistake, while also finding some way to satisfy your objection. Fair enough, if slightly annoying.
But I have also caught it on straightforward matters of fact, and it'll apologize. Sometimes in an over-the-top fashion…