In life, we rarely have the answer in front of us; we have to work it out from the things we know. It’s this struggle that builds a muscle you can then apply to any problem. ChatGPT, I suspect, is akin to looking up the answer: you’re failing to exercise the muscle needed to solve problems that are novel (to you).
However, I don't fully agree that we never have the answer in front of us. There are plenty of situations beyond grade school and undergrad where you do have the answer in front of you. Sometimes it's in the form of NumPy or MATLAB or Simulink; it might be someone's research publication. Replicating these by working both forwards and backwards from their libraries or results can be much more effective.
Spacing can help too: struggle with the problem for 20-30 minutes, ask for a nudge, struggle some more, and repeat many times.
Some concepts also just have to be thought about differently to get to the aha moment, especially in math. AI may have an opportunity to present many instances of “think about it this way instead”.
this sounds a lot like the "when you're an adult, you won't always have a calculator in your pocket" line we all heard in elementary school. and of course, we all know now how wrong that was.
in life, ChatGPT exists. and tools like it are only going to become more widespread. we are going to have the answers in front of us. knowing the things an LLM knows is not a useful skill anymore.
Also, knowing what kind of things exist and what questions to ask is half the battle. If you haven't stored anything in your head on the grounds that you can outsource knowing things and thinking about them to ChatGPT, you're not going to be able to prompt it efficiently either. It's much harder to sanity check numbers and formulas given to you by others (or ChatGPT) if you can't do any quick mental math. Having original ideas that don't yet exist in LLM datasets also requires storing a lot of concepts and making new connections between them.
I suppose all this is moot if you expect AGI to be available to everyone in the near future, but if it turns out to be decades or more away, human mental effort will unfortunately still be required in the meantime.
Tools like calculators, ChatGPT, Google, and chainsaws are often force magnifiers for experts, and a danger for beginners.
If you have an idea of what you are trying to do at a deep level then they are great, but beginners don't, they usually just use them as a shortcut to avoid needing any real understanding of the space, and just get "the answer".
Although in some ways you're right. There will need to be changes in the skills we value; just churning out words, calculating large sums, or integrating obscure trig functions will lose value.
Current research being your opinion?
Because literally every expert I personally know to be competent says that they're essentially garbage producers and don't use them beyond occasionally checking if they've improved. And no, they haven't.
The only way to get decent code from them is to be so specific that you'd be able to produce the same code in a fraction of the time you need to add all the context necessary for the LLM to figure it out.
It's like coding with a junior. They can produce great code too; it's just gonna take a lot of handholding if complexity skyrockets.
The only place LLMs got a foothold is generating stock art that's mostly soulless marketing material. This "industry" is going to crash so hard in the next 3 years.
How wrong was it? I find myself having to do head math every time I play video games.
I also use things I learned from calculus and physics. Like when driving. No, not solving integrals all the time but it does teach you to focus on rates of change
Just knowing that your combo can finish an opponent switches your situation from very passive to very aggressive. Also, some jungle / gank timing requires clock math.
I think most serious / pro players memorize the numbers already. But casual players like me still have to do head math.
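The mid-fight check described above amounts to a quick sum-and-compare. A toy sketch (the HP and damage numbers are made up for illustration):

```python
def combo_kills(opponent_hp: int, hits: list[int]) -> bool:
    """Return True if the queued combo's total damage meets or exceeds the opponent's HP."""
    return sum(hits) >= opponent_hp

# Opponent at 430 HP; combo deals 180 + 150 + 120 = 450 total.
print(combo_kills(430, [180, 150, 120]))  # True -> safe to go aggressive
```

In practice nobody runs code for this, of course; the point is that the player does exactly this arithmetic in their head under time pressure.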
teaching kids problem solving and how to be productive is important. maybe that means knowing how to solve certain mathematical equations, maybe that means knowing how to use tools like ChatGPT. focusing on the math just for the sake of passing tests because that's what we've got good tests for isn't super helpful to anybody.
but the big hangup is that "stochastically correct/incorrect" part. Can't count on it; still need skillz. Maybe when strawberry/q* drops things will change in this regard.
Very few of us have the luxury or good fortune to achieve anything of substance without effort, as the old saying goes 'no pain, no gain'.
Good teachers help and are essential for encouragement. If or when AI becomes a truly encouraging mentor then it will be useful.
When I was a kid I recall hearing a recording of Sparky's Magic Piano on the radio, which drove home that there's no shortcut and that effort and hard work eventually pay off. For anyone that hasn't heard it, there's a copy on the Internet Archive: https://archive.org/details/78_sparkys-magic-piano_henry-bla...
Here's a Wiki synopsis: https://en.m.wikipedia.org/wiki/Sparky%27s_Magic_Piano
It’s a tool that can be used in a variety of ways. For example, if you have it act as Socrates working through a dialogue where you still have to use your logic to get to the answer, I doubt that is akin to looking up the answer.
I mean, any pre-prompt telling the LLM to be a good tutor and not just give kids the answer can be trivially bypassed by 8 year olds; but if the session logs were available for (potentially LLM-based) review to check whether they stayed in tutor mode...
Shit, I should wrap a UI around this and sell it.