I think in order to be “concrete” you need to actually point to a specific portion of the sequences that rests on these foundations you speak of, because as far as I can tell none of it does.
you need to actually point to a specific portion of the sequences that rests on these foundations you speak of,
I did.
My comment has 8 links. The first is a link to “Adaptation-Executers, not Fitness-Maximizers”, which is from the sequences (“The Simple Math of Evolution”) and opens with a quote from Tooby and Cosmides. The second is a link to the comment section of another post in that sequence (“Evolutions Are Stupid (But Work Anyway)”), where EY explicitly discusses T&C, saying:
They’re certainly righter than the Standard Social Sciences Model they criticize, but swung the pendulum slightly too far in the new direction.
And a bit later:
In a sense, my paper “Levels of Organization in General Intelligence” can be seen as a reply to Tooby and Cosmides on this issue; though not, in retrospect, a complete one.
My third and fourth links are then the two T&C papers discussed, and the fifth link is a key quote from EY’s paper LOGI—his reply to T&C.
And on Overcoming Bias in 2006, in “The Martial Art of Rationality”, EY writes:
Such understanding as I have of rationality, I acquired in the course of wrestling with the challenge of artificial general intelligence (an endeavor which, to actually succeed, would require sufficient mastery of rationality to build a complete working rationalist out of toothpicks and rubber bands).
So in his own words, his understanding of rationality comes from thinking about AGI, which makes the LOGI and related quotes relevant: they reveal the true foundation of the sequences (his thoughts about AGI) around the time he wrote them.
These quotes—especially the LOGI quote—clearly establish that EY has an evolved-modularity-like view of the brain, with all that entails. He is skeptical of neural networks (even today he calls them “giant inscrutable matrices”) and especially of “physics envy” universal-learning-type explanations of the brain (Bayesian brain, free energy, etc.), and this subtly influences the sequences everywhere they discuss the brain or related topics. The overall viewpoint is that the brain is a kludgy mess riddled with cognitive biases. He not-so-subtly disses systems/complexity theory and neural networks.
More key to the AI risk case are posts such as “Value is Fragile”, which clearly build on his larger worldview:
Value isn’t just complicated, it’s fragile. There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. … And then there are the long defenses of this proposition, which relies on 75% of my Overcoming Bias posts,
And of course there is “The Design Space of Minds in General”.
These posts espouse the complexity and fragility of human values and the supposed narrowness of human mind space versus AI mind space, which are core components of the AI risk arguments.
None of which is a concrete reply to anything Eliezer said inside “Adaptation-Executers, not Fitness-Maximizers”, just a reply to what you extrapolate Eliezer’s opinion to be, because he read Tooby and Cosmides and claimed they were somewhere in the ballpark in a separate comment. So I ask again: what portion of the sequences do you have an actual problem with?
And your reply isn’t a concrete reply to any of my points.
The quotes from LOGI clearly establish exactly where EY agrees with T&C, and the other quotes establish the relevance of that to the sequences. It’s not as if two separate brains wrote LOGI and the sequences, and the other quotes establish the correspondence regardless.
This is not a law case where I’m critiquing some very specific thing EY said. Instead I’m tracing memetic influences: establishing which high-level abstract brain/AI viewpoint cluster he was roughly in when he wrote the sequences, and how that influenced them. The quotes are clear enough for that.
So it turns out I’m just too stupid for this high level critique. I’m only used to ones where you directly reference the content of the thing you’re asserting is tragically flawed. In order to get across to less sophisticated people like me in the future, my advice is to independently figure out how this “memetic influence” got into the sequences and then just directly refute whatever content it tainted. Otherwise us brainlets won’t be able to figure out which part of the sequences to label “not true” due to memetic influence, and won’t know if your disagreements are real or made up for contrarianism’s sake.
This comment contains a specific disagreement.
I think you’re reading way too much into the specific, questionable wording of “tragically flawed”. By that I meant that the sequences are flawed in some of their key background assumptions, in how those assumptions influence thinking on AI risk/alignment, and in the consequent system-wide effects. I didn’t mean they are flawed at their surface-level purpose as rationalist self-help and community foundations. They are very well written and concentrate a large amount of modern wisdom. But that of course isn’t the full reason EY wrote them: they are part of a training funnel to produce alignment researchers.