I would say “tragically flawed”: noble in their aspirations and very well written, but overconfident in some key foundations. The sequences make some strong assumptions about how the brain works and thus the likely nature of AI, assumptions that have not aged well in the era of DL. Fortunately the sequences also instill the value of updating on new evidence.
What concretely do you have in mind here?
Back when the sequences were written in 2007/2008 you could roughly partition the field of AI based on beliefs around the efficiency and tractability of the brain. Everyone in AI looked at the brain as the obvious single example of intelligence, but in very different lights.
If brain algorithms are inefficient and intractable[1], then neuroscience has little to offer, and more formal math/CS approaches are preferred instead. One could call this the rationalist approach to AI, or perhaps the “and everything else approach”. One way to end up in that attractor is by reading a bunch of ev psych; EY in 2007 was clearly heavily into Tooby and Cosmides, even if he had some quibbles with them on the source of cognitive biases.
From Evolutionary Psychology and the Emotions:
An evolutionary perspective leads one to view the mind as a crowded zoo of evolved, domain-specific programs. Each is functionally specialized for solving a different adaptive problem that arose during hominid evolutionary history, such as face recognition, foraging, mate choice, heart rate regulation, sleep management, or predator vigilance, and each is activated by a different set of cues from the environment.
From The Psychological Foundations of Culture:
Evolution, the constructor of living organisms, has no privileged tendency to build into designs principles of operation that are simple and general. (Tooby and Cosmides 1992)
EY quotes this in LOGI (2007, p. 4), immediately following it with:
The field of Artificial Intelligence suffers from a heavy, lingering dose of genericity and black-box, blank-slate, tabula-rasa concepts seeping in from the Standard Social Sciences Model (SSSM) identified by Tooby and Cosmides (1992). The general project of liberating AI from the clutches of the SSSM is more work than I wish to undertake in this paper, but one problem that must be dealt with immediately is physics envy. The development of physics over the last few centuries has been characterized by the discovery of unifying equations which neatly underlie many complex phenomena. Most of the past fifty years in AI might be described as the search for a similar unifying principle believed to underlie the complex phenomenon of intelligence.
Physics envy in AI is the search for a single, simple underlying process, with the expectation that this one discovery will lay bare all the secrets of intelligence.
Meanwhile in the field of neuroscience there was a growing body of evidence and momentum coalescing around exactly the “physics envy” approaches EY bemoans: the universal learning hypothesis, popularized to a wider audience in On Intelligence in 2004. It is pretty much pure tabula rasa, blank-slate, genericity and black-box.
The UL hypothesis is that the brain’s vast complexity is actually emergent, best explained by simple universal learning algorithms that automatically evolve all the complex domain-specific circuits required by the learning objectives and the statistics of the training data. (Years later I presented it on LW in 2015, and I finally got around to writing up the brain efficiency issue more recently, although I literally started the earlier version of that article back in 2012.)
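To make that concrete, here is a minimal toy sketch (my own illustration, not anything from the posts under discussion; the function and data names are made up): the same generic two-layer network and the same generic prediction objective, trained on two different data streams, ends up with different internal “circuitry” in each case. On the UL view, the domain specificity lives in the data and the objective, not in the learning algorithm.

```python
# Illustrative sketch only: a generic learner with no built-in domain structure.
import numpy as np

rng = np.random.default_rng(0)

def train_generic_learner(X, Y, hidden=32, steps=3000, lr=0.2):
    """Two-layer tanh net trained by plain gradient descent on squared prediction error."""
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, Y.shape[1]))
    n = len(X)
    for _ in range(steps):
        H = np.tanh(X @ W1)                       # generic features, nothing domain-specific
        err = H @ W2 - Y                          # prediction error ("surprise")
        W2 -= lr * H.T @ err / n
        W1 -= lr * X.T @ ((err @ W2.T) * (1.0 - H**2)) / n
    return W1, W2

# Environment A: smooth temporal structure (predict a phase-shifted sine).
t = np.linspace(0, 4 * np.pi, 200)[:, None]
X_a, Y_a = np.hstack([np.sin(t), np.cos(t)]), np.sin(t + 0.3)

# Environment B: discrete logical structure (XOR of two random bits).
X_b = rng.integers(0, 2, (200, 2)).astype(float)
Y_b = (X_b[:, :1] != X_b[:, 1:]).astype(float)

W1_a, _ = train_generic_learner(X_a, Y_a)
W1_b, _ = train_generic_learner(X_b, Y_b)

# Same learning rule, same architecture; the learned first-layer "circuits"
# come out very different because the data differs.
print(np.round(W1_a[:, :4], 2))
print(np.round(W1_b[:, :4], 2))
```

Nothing about a toy numpy net settles the neuroscience, of course; it just pins down what “simple universal learning algorithm” is pointing at in the argument above.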
But then the world ran this fun experiment: the rationalist/non-connectionist AI folks got most of the attention and research money, but not all of it, and various research groups did their thing and tried to best each other on benchmarks. Eventually Nvidia released CUDA, a few connectionists ported ANN code to their gaming GPUs and started to break ImageNet, and then a little startup, founded by some folks who met in a neuroscience program with the mission of reverse engineering the brain, adapted that code to play Atari and later break Go; the rest, as you probably know, is history.
Turns out the connectionists and the UL hypothesis were pretty much completely right after all, proven not only by the success of DL in AI, but also by how DL is transforming neuroscience. We now know that the human brain learns complex tasks like vision and language not through kludgy, complex evolved mechanisms, but through the exact same simple approximate Bayesian (self-supervised) learning algorithms that drive modern DL systems.
The sequences and associated materials were designed to “raise the rationality waterline” and ultimately funnel promising new minds into AI safety. And there they succeeded, especially in those earlier years. Finding an AI safety researcher today who isn’t familiar with the sequences and LW ... well, maybe they exist? But they would be unicorns. ML-safety and even brain-safety approaches are now obviously more popular, but there is still enormous bias/inertia in AI safety stemming from the circa-2007 beliefs and knowledge crystallized and distilled into the sequences.
It’s also possible to end up in the “brains are highly efficient, but completely intractable” camp, which implies uploading as the most likely path to AI. This is where Hanson is, and it is close to my own beliefs circa 2000, before I had studied much systems neuroscience.
I think in order to be “concrete” you need to actually point to a specific portion of the sequences that rests on these foundations you speak of, because as far as I can tell none of it does.
you need to actually point to a specific portion of the sequences that rests on these foundations you speak of,
I did.
My comment has 8 links. The first is a link to “Adaptation-Executers, not Fitness-Maximizers”, which is from the sequences (“The Simple Math of Evolution”), and it opens with a quote from Tooby and Cosmides. The second is a link to the comment section from another post in that sequence (Evolutions Are Stupid (But Work Anyway)) where EY explicitly discusses T&C, saying:
They’re certainly righter than the Standard Social Sciences Model they criticize, but swung the pendulum slightly too far in the new direction.
And a bit later:
In a sense, my paper “Levels of Organization in General Intelligence” can be seen as a reply to Tooby and Cosmides on this issue; though not, in retrospect, a complete one.
My third and fourth links are then the two T&C papers discussed, and the fifth link is a key quote from EY’s paper LOGI—his reply to T&C.
On OB in 2006, in “The Martial Art of Rationality”, EY writes:
Such understanding as I have of rationality, I acquired in the course of wrestling with the challenge of artificial general intelligence (an endeavor which, to actually succeed, would require sufficient mastery of rationality to build a complete working rationalist out of toothpicks and rubber bands).
So in his own words, his understanding of rationality comes from thinking about AGI, which makes the LOGI and related quotes relevant: they reveal the true foundation of the sequences (his thoughts about AGI) around the time he wrote them.
These quotes, especially the LOGI quote, clearly establish that EY had an evolved-modularity-like view of the brain, with all that entails. He is skeptical of neural networks (even today he calls them “giant inscrutable matrices”) and especially of “physics envy” universal-learning explanations of the brain (the Bayesian brain, free energy, etc.), and this subtly influences everything the sequences say about the brain or related topics. The overall viewpoint is that the brain is a kludgy mess riddled with cognitive biases. He not-so-subtly disses systems/complexity theory and neural networks.
More key to the AI risk case are posts such as “Value is Fragile”, which clearly builds on his larger worldview:
Value isn’t just complicated, it’s fragile. There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. … And then there are the long defenses of this proposition, which relies on 75% of my Overcoming Bias posts,
And of course there is “The Design Space of Minds in General”.
These views espouse the complexity and fragility of human values and the supposed narrowness of human mind space vs AI mind space, which are core components of the AI risk arguments.
None of which is a concrete reply to anything Eliezer said inside “Adaptation-Executers, not Fitness-Maximizers”, just a reply to what you extrapolate Eliezer’s opinion to be, because he read Tooby and Cosmides and claimed they were somewhere in the ballpark in a separate comment. So I ask again: what portion of the sequences do you have an actual problem with?
And your reply isn’t a concrete reply to any of my points.
The quotes from LOGI clearly establish exactly where EY agrees with T&C, and the other quotes establish the relevance of that to the sequences. It’s not like two separate brains wrote LOGI vs the sequences, and the other quotes establish the correspondence regardless.
This is not a law case where I’m critiquing some super specific thing EY said. Instead I’m tracing memetic influences: establishing which high-level brain/AI viewpoint cluster he was roughly in when he wrote the sequences, and how that influenced them. The quotes are clear enough for that.
So it turns out I’m just too stupid for this high level critique. I’m only used to ones where you directly reference the content of the thing you’re asserting is tragically flawed. In order to get across to less sophisticated people like me in the future, my advice is to independently figure out how this “memetic influence” got into the sequences and then just directly refute whatever content it tainted. Otherwise us brainlets won’t be able to figure out which part of the sequences to label “not true” due to memetic influence, and won’t know if your disagreements are real or made up for contrarianism’s sake.
This comment contains a specific disagreement.
I think you’re reading way too much into the specific questionable wording of “tragically flawed”. By that I meant that the sequences are flawed in some of their key background assumptions, in how those assumptions influence thinking on AI risk/alignment, and in the consequent system-wide effects. I didn’t mean they are flawed at their surface-level purpose, as rationalist self-help and community foundations. They are very well written and concentrate a large amount of modern wisdom. But that of course isn’t the full reason why EY wrote them: they are part of a training funnel to produce alignment researchers.