Hacker News
The actual quotes are the best part: https://www.anthropic.com/features/81k-interviews#quotes

Some quotes that stuck out to me:

"I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany

"I live in a war zone... AI can not only give practical advice, but also emotionally calm me down during panic attacks. It can calm someone during a missile attack in one chat, and laugh with me about something silly in another. That’s what makes it not fragmented into a therapist/teacher/friend, but something whole." Ukraine

"If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA

"The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than it's supposed to be."

> "The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than it's supposed to be."

I can see this kind of survivorship-bias story distorting reality. Having millions of people ask for "specific tests" because an AI told them to seems problematic. One in a million will discover something, and that one story will be enough to create the belief that it is "worth doing the test the AI suggests," just in case. But...

> which came back 6 times higher than it's supposed to be.

It is well established that mass testing for low-prevalence conditions produces many false positives.

This happened during covid: https://www.bmj.com/content/373/bmj.n1411/rr

Tests may not be as reliable as thought, but they are good enough when other symptoms are accounted for. Randomly testing people based on AI hallucinations can increase the number of unnecessary medications or even interventions.
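The false-positive point above is simple base-rate arithmetic (Bayes' rule). A minimal sketch, using illustrative sensitivity, specificity, and prevalence figures that are assumptions for the example, not numbers for any specific test:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result reflects true disease (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A fairly good test (90% sensitive, 95% specific) applied indiscriminately
# to a rare condition (1 in 1,000): most positives are still false alarms.
ppv = positive_predictive_value(0.90, 0.95, 0.001)
print(f"PPV: {ppv:.1%}")  # roughly 1.8% — about 98 of every 100 positives are false
```

This is why testing only people whose symptoms already raise the prior (higher prevalence in the tested group) makes the same test far more trustworthy, and why testing "just in case" at population scale mostly generates noise.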

> I can see this kind of survivorship-bias story distorting reality. Having millions of people ask for "specific tests" because an AI told them to seems problematic. One in a million will discover something, and that one story will be enough to create the belief that it is "worth doing the test the AI suggests," just in case. But...

This is a competition of public and private interests. A sick individual is going to lobby for tests until they discover the cause. From a public perspective, it might be cheaper to just let them die. AI is an advocate for the individual.

For the record, ChatGPT helped me diagnose a lifelong illness. I'm a new man now thanks to AI. Literally life-changing. I had spent decades pleading for tests because no one could figure out the cause.

I think a likely outcome here is not necessarily 10,000x more tests performed, but a similar or even lower number, because the diagnosis success rate with AI is higher. It's not subject to bias. People tend to be more honest and reflective with their AI than they are with doctors. They get 5 minutes to present the entire case to a doctor; with an AI they can spend weeks debating and reflecting. This builds a case history far more detailed and accurate than anything we have in modern medicine today, amplified by an order of magnitude because the AI can extract meaningful insights from the discussion.

In the very near future our AI will contact our GP for us. Soon after that, our GP will be our AI.

I don't know about survivorship bias. LLMs are well suited to this task: taking in a cloud of soft data, like a description of symptoms, and spitting out a potential diagnosis.

They're good at acting as a "reverse dictionary" like this, where you give them a description of something and they know the word for it. They have approximate knowledge of many things.

> "I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany

There's always something off about claims like this. I'm not claiming that AI can't speed up your processes, but I question the person's expertise when they claim months or years of work turn into days or weeks. It just doesn't make sense to me.

"My output is like 25x what it used to be. I’ve built over 20 backend server tools, 7 major projects in the last 6 months—my work output this year is greater than the last five combined. I can typically finish a significant project in a day or two."

I love how many of these comments have em dashes in them and how many are just outright trolling.
"AI is sort of like money... it just makes you more of what you already are."