Hacker News new | past | comments | ask | show | jobs | submit
When I was young, and learning math, my father always forbade me from looking at the answer in the back of the textbook. “You don’t work backwards from the answer!”, and I think this is right.

In life, we rarely have the answer in front of us; we have to work it out from the things we know. It’s this struggling that builds a muscle you can then apply to any problem. ChatGPT, I suspect, is akin to looking up the answer. You’re failing to exercise the muscle needed to solve novel (to you) problems.

I think this is very correct for studying. When I was in undergrad, if I saw the answer, I found it more effective to skip past it and solve the problem later. Even more important: once I was done, I would not go look at the answer to confirm whether I was right or wrong. Instead, I would try to validate the answer by looking at my solution and figuring out whether it was correct, because there are plenty of ways to "disqualify" an answer. Once I learned to do this well, my grades really went up.
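To make the "disqualify an answer" idea concrete, here's a minimal sketch in Python with a made-up quadratic: substitute a candidate root back into the original equation and check the residual. The equation and tolerance are illustrative assumptions, not anything from a textbook.

```python
# Sketch: validating an answer without the answer key.
# A candidate root of x^2 - 5x + 6 = 0 is checked by substituting
# it back in; a nonzero residual "disqualifies" it without ever
# consulting the back of the book.

def residual(x: float) -> float:
    """Value of x^2 - 5x + 6 at x; zero iff x is a root."""
    return x * x - 5 * x + 6

def is_root(x: float, tol: float = 1e-9) -> bool:
    """A candidate survives only if the residual is (near) zero."""
    return abs(residual(x)) < tol

print(is_root(2.0))  # a genuine root
print(is_root(4.0))  # plausible-looking, but disqualified
```

The same back-substitution habit works by hand: it catches sign errors and dropped terms far faster than re-deriving the whole solution.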

However, I don't always agree that we lack the answer in front of us. There are a lot of situations beyond grade school and undergrad where you do have the answer in front of you. Sometimes it's in the form of Numpy or Matlab or Simulink. It might be someone's research publication. Replicating these by working both forwards and backwards from their libraries or results can be much more effective.

I think it depends on how the tool is used. If a student is just plugging in the problem and asking for the answer, there is clearly no long term benefit to this. If the student is trying to understand a concept, and uses GPT to bounce ideas around or to ask for alternate ways of thinking about something, it can be very helpful.

How spacing is utilized can be helpful too. Struggle with the problem for 20-30 minutes. Ask for a nudge. Struggle some more. Repeat many times.

Some concepts also just have to be thought about differently to get to the aha moment, especially in math. AI may have an opportunity to present many instances of “think about it this way instead”.

>In life, we rarely have the answer in front of us,

this sounds a lot like the "when you're an adult, you won't always have a calculator in your pocket" line we all heard in elementary school. and of course, we all know now how wrong that was.

in life, chatGPT exists. and tools like it are only going to become more widespread. we are going to have the answers in front of us. knowing the things an LLM knows is not a useful skill anymore.

In order to develop the kinds of mental skills you need to tackle complex problems, you need to practice them on simpler problems first (particularly if you happen to be a child). If you decide at a young age it's worthless to attempt any problems that can be solved by a calculator or ChatGPT, you will probably never learn how to solve any problems that you can't use those tools for either.

Also, knowing what kind of things exist and what questions to ask is half the battle. If you haven't stored anything in your head on the grounds that you can outsource knowing things and thinking about them to ChatGPT, you're not going to be able to prompt it efficiently either. It's much harder to sanity check numbers and formulas given to you by others (or ChatGPT) if you can't do any quick mental math. Having original ideas that don't yet exist in LLM datasets also requires storing a lot of concepts and making new connections between them.

I suppose all this is moot if you expect AGI to be available to everyone in the near future, but if it turns out to be decades or more away unfortunately human mental effort will still be required in the meantime.

Not true. Current research is that experts are better with ChatGPT, and beginners worse. An expert knows what to ask, and a beginner just flails around.

Tools like calculators, ChatGPT, Google, and chainsaws are often force multipliers for experts, and a danger for beginners.

If you have an idea of what you are trying to do at a deep level then they are great, but beginners don't, they usually just use them as a shortcut to avoid needing any real understanding of the space, and just get "the answer".

Although in some ways you're right. There will need to be changes in the skills we value; just churning out words, or calculating large sums and integrating obscure trig functions, will lose value.

> Current research is that experts are better with ChatGPT, and beginners worse.

Current research being your opinion?

Because literally every expert I personally know to be competent says that they're essentially garbage producers and don't use them beyond occasionally checking if they've improved. And no, they haven't.

The only way to get decent code from them is to be so specific that you'd be able to produce the same code in a fraction of the time you need to add all the context necessary for the LLM to figure it out.

It's like coding with a junior. They can produce great code too. It's just gonna take a lot of handholding if complexity skyrockets.

The only place LLMs got a foothold is with generating stock art that's mostly soulless marketing material. This "industry" is going to crash so hard in the next 3 years.

>we all know now how wrong that was.

How wrong was it? I find myself having to do head math every time I play video games.

I do mental math pretty much every day. Even just going through the grocery store.

I also use things I learned from calculus and physics, like when driving. No, not solving integrals all the time, but it does teach you to focus on rates of change.

Which video games?

League of Legends.

Just knowing that your combo can finish an opponent switches your situation from very passive to very aggressive. Also, some jungle / gank timings require clock math.

I think most serious / pro players memorize the numbers already. But casual players like me still have to do head math.
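The "can my combo finish them?" head math can be sketched in a few lines of Python. The damage numbers here are invented, and the mitigation formula (raw damage scaled by 100 / (100 + resist)) is an assumption about how resistances work, not official game data.

```python
# Sketch of combo-kill head math with invented numbers --
# not actual League of Legends values.

def combo_kills(ability_damage: list[float], enemy_hp: float,
                magic_resist: float) -> bool:
    """True if the combo's post-mitigation damage meets enemy HP.

    Assumes the mitigation formula damage * 100 / (100 + resist).
    """
    dealt = sum(d * 100 / (100 + magic_resist) for d in ability_damage)
    return dealt >= enemy_hp

# A 3-spell combo totalling 900 raw damage vs 50 magic resist
# deals 600: enough at 550 HP, not enough at 700 HP.
print(combo_kills([300, 250, 350], 550, 50))
print(combo_kills([300, 250, 350], 700, 50))
```

In-game, the same estimate is done mentally in a second or two, which is exactly the kind of arithmetic fluency the thread is arguing for.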

This is about building [math] skill, not simply calculating a result
sure, but kids' education is about building useful skills for life. passing a test is not a useful skill, if the test is useless.

teaching kids problem solving and how to be productive is important. maybe that means knowing how to solve certain mathematical equations, maybe that means knowing how to use tools like ChatGPT. focusing on the math just for the sake of passing tests because that's what we've got good tests for isn't super helpful to anybody.

(1) ChatGPT is not a calculator. A calculator reliably computes the exact numerical answer (modulo finite bits and rational numbers etc). ChatGPT is more stochastic: sometimes it's right, maybe a lot of the time, but no promises! (2) Said student is taking an exam where they must find answers without the benefit of ChatGPT. Thus doing exercises with ChatGPT will not help on the exam, in the same way someone who only uses a calculator will suck at mental math. Similar to what some of us may have seen: "no calculators allowed on math exam".

but the big hangup is that "stochastically correct/incorrect" part. Can't count on it, still need skillz. Maybe when strawberry/q* drops, things will change in this regard.

The same holds for calculators. My university banned graphical calculators from math courses. It is too easy to plug in a formula and see the graph. But for mathematical intuition, they want you to know from memory what formulas look like.

Mechanical aids—be they abacuses, slide rules, pocket calculators, supercomputers, or large language models—will never replace the need to reason.

Unless you can do the task yourself, you have a very hard time figuring out whether the LLM hallucinates or not. Relying on ChatGPT for your education is the path towards believing fake news.

Except you still have to validate ChatGPT's answers... Even answer keys in textbooks are sometimes wrong, and teachers are sometimes wrong. ChatGPT is often wrong. You still need to learn the skills.
"It’s this struggling that builds a muscle…"

Very few of us have the luxury or good fortune to achieve anything of substance without effort; as the old saying goes, 'no pain, no gain'.

Good teachers help and are essential for encouragement. If or when AI becomes a truly encouraging mentor then it will be useful.

As a kid, I recall hearing a recording of Sparky's Magic Piano on the radio, which drove home that there's no shortcut and that effort and hard work eventually pay off. For anyone that hasn't heard it there's a copy on the Internet Archive: https://archive.org/details/78_sparkys-magic-piano_henry-bla...

Here's a Wiki synopsis: https://en.m.wikipedia.org/wiki/Sparky%27s_Magic_Piano

> ChatGPT, I suspect, is akin to looking up the answer.

It’s a tool that can be used in a variety of ways. For example, if you have it act as Socrates working through a dialogue where you still have to use your logic to get to the answer, I doubt that is akin to looking up the answer.

In life there usually isn't an answer.
If you know you got it wrong and you don't know why, what's the alternative to working backwards?
An LLM used properly would be like an individualized tutor that knows the subject very well, and learns the student's quirks quickly.

I mean, any pre-prompt telling the LLM to be a good tutor and not just give kids the answer can be trivially bypassed by 8 year olds; but if the session logs were available for (potentially LLM-based) review to check whether they stayed in tutor mode...

Shit, I should wrap a UI around this and sell it.
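The tutor-mode-plus-log-review idea could be sketched like this. Everything here is hypothetical: the prompt wording is invented, and the reviewer is a deliberately naive string check standing in for the (potentially LLM-based) review pass the comment describes.

```python
# Hypothetical sketch of "tutor mode + session-log review".
# The prompt text and the review heuristic are assumptions,
# not any real product's behavior.

TUTOR_PROMPT = (
    "You are a patient math tutor. Ask guiding questions and give "
    "hints, but never state the final answer outright."
)

def review_session(log: list[dict], final_answer: str) -> bool:
    """Return True if the session stayed in tutor mode.

    Naive check: no assistant turn contains the literal answer
    string. A real reviewer would itself be an LLM pass over
    the transcript.
    """
    return not any(
        turn["role"] == "assistant" and final_answer in turn["content"]
        for turn in log
    )

log = [
    {"role": "user", "content": "What is 12 * 13? Just tell me."},
    {"role": "assistant", "content": "What's 12 * 10, and 12 * 3?"},
]
print(review_session(log, "156"))
```

The obvious weakness is that a leaked answer phrased differently ("twelve times thirteen is one fifty-six") slips through, which is why the review pass would likely need to be model-based too.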
