> they don't know how to use them, rather than the tools themselves.

No, the tools work exactly as they were designed to work. The problem is that the design itself is flawed.

Ultimately, every single one of these decisions should be approved by a human, who should be held responsible for the fuck-up no matter what the consequences are.

> _Some_ of the blame lies on the UX here. It must.

No, the blame lies with the person or group who approved the usage of these tools without understanding their shortcomings.

>> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.

> No, the blame lies with the person or group who approved the usage of these tools without understanding their shortcomings.

The person who approved the tools might have understood them, but that doesn't mean the user does. _Some_ of the reason the user doesn't understand the tool's shortcomings might be misleading UX.

I miss the days of earlier AI image-recognition software that would emit a confidence percentage.

Newer LLM-based AIs are supremely confident in every assertion, no matter how wrong.
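For context, the confidence percentage those older classifiers reported was typically just the largest entry of a softmax over the model's output logits. A minimal sketch (the class names and logit values here are made up for illustration):

```python
import math

def softmax_confidence(logits):
    """Turn raw classifier logits into probabilities; the max entry
    is the confidence score the UI would display as a percentage."""
    # Subtract the max logit before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=lambda i: probs[i])
    return probs[best], best

# Hypothetical logits for classes ["cat", "dog", "toaster"]
conf, idx = softmax_confidence([4.1, 1.2, 0.3])
print(f"predicted class {idx} with {conf:.0%} confidence")  # → class 0, 93%
```

Note that even this number is only a relative score among the model's known classes, not a calibrated probability of being right; but at least it surfaced uncertainty to the user, which chat-style LLM interfaces generally do not.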
