Over time I am increasingly wondering how much these shortcomings on cognitive tasks are a matter of evaluators overestimating the capabilities of humans, while failing to provide AI systems with the level of guidance, training, feedback, and tools that a human would get.
I think that’s one issue: LLMs don’t get the same types of guidance, etc., that humans get. They get a lot of training and RL feedback, but it’s structured very differently.
I think this particular article gets another major factor right, one that most analyses overlook: LLMs by default don’t do metacognitive checks on their thinking. This is a huge factor in humans appearing as smart as we do. We make a variety of mistakes in our first guesses (system 1 thinking) that can be found and corrected with sufficient reflection (system 2 thinking). Adding more of this to LLM agents is likely to be a major source of capabilities improvements. The focus on increasing “9s of reliability” is a very CS approach; humans just make tons of mistakes and then catch many of the important ones. LLMs sort of copy their cognition from humans, so they can benefit from the same approach, but they don’t do much of it by default. Scripting it into LLM agents is going to at least help, and it may help a lot.
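To make “scripting it in” concrete, here’s a rough sketch of the kind of check I have in mind. It assumes some generic `llm(prompt) -> text` function standing in for whatever completion API you’re using; the prompts, the “OK” convention, and the single revision loop are all illustrative assumptions, not anyone’s actual implementation.

```python
# Minimal sketch: a "system 2" check layered on top of a single LLM call.
# `llm` is a placeholder for any prompt -> text completion function.

def answer_with_reflection(llm, question: str, max_revisions: int = 2) -> str:
    # First pass: the fast, unreflective answer (analogous to system 1).
    answer = llm(f"Answer the following question:\n{question}")

    for _ in range(max_revisions):
        # Metacognitive check: ask the model to look for errors in its own answer.
        critique = llm(
            "Check the answer below for factual errors, unstated assumptions, "
            "and reasoning mistakes. If it looks correct, reply with only 'OK'.\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        if critique.strip().upper() == "OK":
            break
        # Revision pass: rewrite the answer in light of the critique (system 2).
        answer = llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer
```

The point isn’t this particular loop; it’s that the check-and-correct step is cheap to bolt on and doesn’t happen unless you ask for it.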
That’s a fascinating perspective.
I think it is at least somewhat in line with your post and what @Seth Herd said in reply above.
Like, we talk about LLM hallucinations, but most humans still don’t really grok how unreliable things like eyewitness testimony are. And we know how poorly calibrated humans are about our own factual beliefs, or the success probability of our plans. I’ve also had cases where coworkers complain about low-quality LLM outputs, and when I ask to review the transcripts, it turns out the LLM was right and they were overconfidently dismissing its answer as nonsensical.
Or, we used to talk about math being hard for LLMs, but that disappeared almost as soon as we gave them access to code/calculators. I think most people interested in AI are underestimating how bad most other people are at mental math.
I guess I was thinking about this in terms of getting maximal value out of wise AI advisers. The notion that comparisons might be unfair didn’t even enter my mind, even though that isn’t too many reasoning steps away from where I was.