The current way of doing AI cannot be trusted.

That doesn’t mean the future won’t herald a way of using what a transformer is good at, interfacing with humans, to translate to and interact with something that can be a lot more sound and objective.

You're falling into the extrapolation fallacy; there is no reason to think the future won't have the same hallucination issues as today.

And even if they were solved, how would that even work? The world is not sound and objective.

It’s a thought experiment. I am not saying I believe it will happen.

But right now there are lots of domains where the currently lauded success comes from treating something objective, like code, as tokens for an LLM.

We could instead explore using transformers to translate human language into a symbolic representation that can be reasoned about and applied, e.g. to code.
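A minimal sketch of that split, with the transformer step stubbed out as a hypothetical `translate()` lookup (everything here is illustrative, not a real pipeline): the point is that its output is a formal constraint a small deterministic checker can evaluate soundly, instead of free-form tokens we have to trust.

```python
import ast

# Hypothetical stand-in for the transformer: natural language -> formal constraint.
# A real system would call a model here; the key property is that the output
# lands in a formal language, not that this lookup is how it would be produced.
def translate(nl_requirement: str) -> str:
    lookup = {
        "the result must be positive and below ten": "0 < x and x < 10",
    }
    return lookup[nl_requirement]

# The symbolic side: a tiny whitelisted grammar of boolean comparisons.
_ALLOWED = (ast.Expression, ast.BoolOp, ast.And, ast.Or, ast.Compare,
            ast.Lt, ast.Gt, ast.LtE, ast.GtE, ast.Eq, ast.Name,
            ast.Load, ast.Constant, ast.UnaryOp, ast.Not)

def check(formal: str, x: int) -> bool:
    """Reject anything outside the whitelisted grammar, then evaluate
    the constraint deterministically -- no model in the loop here."""
    tree = ast.parse(formal, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, _ALLOWED):
            raise ValueError(f"disallowed construct: {type(node).__name__}")
    return bool(eval(compile(tree, "<constraint>", "eval"),
                     {"__builtins__": {}}, {"x": x}))

formal = translate("the result must be positive and below ten")
print(check(formal, 5))    # True
print(check(formal, 42))   # False
```

Whatever the translation step hallucinates, the checker either evaluates it soundly or rejects it outright, which is the asymmetry the comment above is gesturing at.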

It’s the talk of conferences. But whether it works better than what we have today, or whether it aligns with the incentives of the big players, is another matter.