“What is intelligence?” is a question you can spend an entire productive academic career failing to answer. Intentionally ignoring the nerd bait, I do think this post highlights how important it is for AGI worriers to better articulate which specific qualities of “intelligent” agents are the most worrisome and why.
For example, there has been a lot of handwringing over the scaling properties of language models, especially in the GPT family. But as Gary Marcus continues to point out in his inimitable and slightly controversial way, scaling these models fails to fix some extremely simple logical mistakes, mistakes that might need to be fixed by a non-scaling innovation before an intelligent agent poses an x-risk. On forums like these it has long been popular to say something along the lines of “holy shit, look how much better these models got when you add __ amount of compute! If we extrapolate that out, we are so boned.” But this line of thinking seems to miss the “intelligence” part of AGI completely; it has no sense at all of the nature of the gap between the models that exist today and the spooky models they worry about.
It seems to me that we need a better specification for describing what exactly intelligent agents can do and how they get there.