They're not "muted". You just got used to them and figured out that they don't actually generete knew knowledge or information, they only give a statistically average summary of the top Google query. (I.e., they are super bland, boring and predictable.)
LLMs are pretty bland but they don’t just summarize the top Google result. They can generate correct SQL queries to answer complex questions about novel datasets. Summarizing a search engine result does not get you anywhere close to that.
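To make that concrete, here is an illustrative sketch (the schema and question are hypothetical, not from the thread): answering even a simple analytical question over a schema the model has never seen verbatim requires composing a join and an aggregate, which no cached summary of a search result can supply.

```python
import sqlite3

# Hypothetical "novel dataset": a tiny orders schema invented for this example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders VALUES (1, 1, 40.0), (2, 2, 90.0), (3, 1, 35.0);
""")

# The kind of query an LLM can plausibly compose for the question
# "which customer has the highest total spend?" -- it must join two
# tables and aggregate, not retrieve any pre-existing answer.
query = """
SELECT c.name, SUM(o.total) AS spend
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.id
ORDER BY spend DESC
LIMIT 1;
"""
name, spend = conn.execute(query).fetchone()
print(name, spend)  # Grace 90.0
```

The point is not that the query is hard for a programmer, but that producing it correctly for an arbitrary unseen schema is interpolation over structure, not point lookup plus summarization.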

It may be fair to characterize what they’re doing as interpolative retrieval, but there’s no reason to deny that the “interpolative” part carries a lot of the weight.

P.S. Yes, reliability is a major problem for many potential LLM applications, but that is immaterial to the question of whether they're doing something qualitatively different from point lookups followed by summarization.
