SOTA LLMs seem to be wildly, wildly superhuman at literal next token prediction.
It’s unclear if this implies fundamental differences in how they work versus different specializations.
(It’s possible that humans could be trained to be much better at next token prediction, but there isn’t an obvious methodology that works for this, based on initial experiments.)
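For concreteness, here is a minimal sketch of what "literal next token prediction" means as a measurable task: feed the model a text, take its argmax guess at each position, and score top-1 accuracy against the actual next token. This assumes the Hugging Face `transformers` library and the small `gpt2` checkpoint purely for illustration; it is not the setup any particular human-vs-LLM comparison used.

```python
# Minimal sketch: measure a language model's top-1 next-token accuracy,
# the quantity being compared against human guessers in this discussion.
# Assumes the Hugging Face `transformers` library; `gpt2` is an
# illustrative choice of checkpoint, not the model from any cited study.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog. " * 4
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

# At each position t, the model's logits predict token t+1,
# so compare the argmax at positions 0..n-2 against tokens 1..n-1.
predictions = logits[0, :-1].argmax(dim=-1)
targets = ids[0, 1:]
accuracy = (predictions == targets).float().mean().item()
print(f"Top-1 next-token accuracy: {accuracy:.2%}")
```

A human comparison works the same way: show a person the same prefix, ask for their single best guess at the next token, and score against the same targets.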
Thank you.
> It’s unclear if this implies fundamental differences in how they work versus different specializations.
Correct. That article argues that LLMs are more powerful than humans at this skill, but not that they have different (implicit) goal functions or that their cognitive architecture is deeply different from the human one.