I’m claiming that “new stuff is needed” has been the dominant view for a long time, but is gradually becoming less and less popular. Inspiration from neuroscience has always been one of the most common flavors of “new stuff is needed.” As a relatively recent example, it used to be a prominent part of DeepMind’s messaging, though it has been gradually receding (I think because it hasn’t been a part of their major results). 27 people holding the view is not a counterexample to the claim that it is becoming less popular.
See also this survey of NLP researchers, where ~17% of participants think that scaling will solve practically any problem. I think that’s an unprecedentedly large number (prior to 2018 I’d bet it was <5%), but it still means that 83% of respondents disagree. 60% of respondents think that non-trivial results in linguistics or cognitive science will inspire one of the top 5 results in 2030, which I think is again down to an unprecedented low, but still a majority.
Yann LeCun, Terry Sejnowski, Yoshua Bengio, Timothy Lillicrap, et al. seem to disagree with the perspective you’re articulating in this comment, insofar as they actually endorse the perspective of the paper they’ve coauthored.
Did the paper say that NeuroAI is looking increasingly likely? It seems like they are saying that the perspective used to be more popular and has been falling out of favor, so they are advocating for reviving it.
27 people holding the view is not a counterexample to the claim that it is becoming less popular.
Still feels worthwhile to emphasize that some of these 27 people are, eg, Chief AI Scientist at Meta, co-director of CIFAR, DeepMind staff researchers, etc.
These people are major decision-makers in some of the world’s leading and most well-resourced AI labs, so we should probably pay attention to where they think AI research should go in the short term—they are among the people who could actually take it there.
I assume this is the chart you’re referring to. I take your point that you see these numbers as increasing or decreasing (even though where they currently stand in absolute terms seems consistent with believing that brain-based AGI is entirely possible), but these increases and decreases are themselves risky statistics to extrapolate. These sorts of trends could easily asymptote or reverse given volatile field dynamics. For instance, if we linearly extrapolate from the two stats you provided (5% believed scaling could solve everything in 2018; 17% believed it in 2022), we would predict that, eg, 56% of NLP researchers will believe scaling could solve everything by 2035. Do you actually think something in this ballpark is likely?
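For concreteness, here is the arithmetic behind that 56% figure as a minimal sketch: a straight-line fit through the two survey data points quoted above, with the 2035 target year being purely illustrative.

```python
# Minimal sketch of the linear extrapolation described above.
# The only inputs are the two survey data points quoted in the parent comment
# (2018: 5%, 2022: 17%); the straight-line fit and the 2035 target year are
# illustrative assumptions, not a forecast.

def linear_extrapolate(x0, y0, x1, y1, x):
    """Fit a line through (x0, y0) and (x1, y1) and evaluate it at x."""
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

print(linear_extrapolate(2018, 5, 2022, 17, 2035))  # -> 56.0 (percent)
```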
Did the paper say that NeuroAI is looking increasingly likely?
I was considering the paper itself as evidence that NeuroAI is looking increasingly likely.
When people who run many of the world’s leading AI labs say they want to devote resources to building NeuroAI in the hopes of getting AGI, I am considering that as a pretty good reason to believe that brain-like AGI is more probable than I thought it was before reading the paper. Do you think this is a mistake?
Certainly, to your point, signaling an intention to try X is not the same as successfully doing X, especially in the world of AI research. But again, if anyone were to be able to push AI research in the direction of being brain-based, would it not be these sorts of labs?
To be clear, I do not personally think that prosaic AGI and brain-based AGI are necessarily mutually exclusive—eg, brains may be performing computations that we ultimately realize are some emergent product of prosaic AI methods that already basically exist. I do think that the publication of this paper gives us good reason to believe that brain-like AGI is more probable than we might have thought it was, eg, two weeks ago.
To be clear, I do not personally think that prosaic AGI and brain-based AGI are necessarily mutually exclusive—eg, brains may be performing computations that we ultimately realize are some emergent product of prosaic AI methods that already basically exist.
This is already the case: transformer LLMs predict neural responses in linguistic cortex remarkably well[1][2]. Perhaps that’s not entirely surprising in retrospect, given that LLMs and the brain’s language network are both trained on overlapping data with similar unsupervised prediction objectives.
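For readers unfamiliar with how such comparisons are made: results in this literature typically come from linear encoding models, ie, regressing recorded brain responses on a transformer’s internal representations and scoring held-out prediction accuracy. Below is a minimal sketch of that kind of pipeline, not the exact setup of the cited papers; the gpt2 model choice, the mean-pooling, the ridge penalty grid, and the random array standing in for real fMRI/ECoG data are all illustrative assumptions.

```python
# Minimal sketch of a linear encoding model: predict per-"voxel" brain responses
# from a transformer's sentence representations. Illustrative only; the brain
# data below is a random placeholder standing in for real fMRI/ECoG recordings.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

# Stand-in stimulus set (real studies use naturalistic sentences or stories).
sentences = [f"Sentence number {i} about brains, language, and prediction." for i in range(60)]

def embed(sentence: str) -> np.ndarray:
    """Mean-pool the last hidden layer over tokens -> one vector per sentence."""
    with torch.no_grad():
        out = model(**tokenizer(sentence, return_tensors="pt"))
    return out.hidden_states[-1].mean(dim=1).squeeze(0).numpy()

X = np.stack([embed(s) for s in sentences])   # (n_sentences, hidden_dim)
Y = np.random.randn(len(sentences), 100)      # placeholder responses for 100 "voxels"

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)

# Score as held-out correlation per voxel (near zero here, since Y is random noise).
pred = enc.predict(X_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print("mean held-out correlation:", float(np.mean(r)))
```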
[1] The neural architecture of language: Integrative modeling converges on predictive processing
[2] Brains and algorithms partially converge in natural language processing