No, the tools work exactly as they were designed to work. The problem is that the design itself is flawed.

Ultimately, every single one of these decisions should be approved by a human, who should be responsible for the fuck-up no matter what the consequences are.
> _Some_ of the blame lies on the UX here. It must.
No, the blame lies with the person or group who approved the usage of these tools without understanding their shortcomings.
> No, the blame lies with the person or group who approved the usage of these tools without understanding their shortcomings.
The person who approved the tools might've understood them, but that doesn't mean the user does. _Some_ of the reason the user doesn't understand the tool's shortcomings might be misleading UX.
LLM-based AIs are all supremely confident in every assertion, no matter how wrong they are.