I suggest maybe re-titling this post to: “I strongly disagree with Eliezer Yudkowsky about the philosophy of consciousness and decision theory, and so do lots of other academic philosophers”
or maybe: “Eliezer Yudkowsky is Frequently, Confidently, Egregiously Wrong, About Metaphysics”
or consider: “Eliezer’s ideas about Zombies, Decision Theory, and Animal Consciousness, seem crazy”
Otherwise it seems pretty misleading / clickbaity (and indeed overconfident) to extrapolate from these beliefs to other notable beliefs of Eliezer’s—such as cryonics, quantum mechanics, macroeconomics, various political issues, various beliefs about AI of course, etc. Personally, I clicked on this post really expecting to see a bunch of stuff like “in March 2022 Eliezer confidently claimed that the government of Russia would collapse within 90 days, and it did not”, or “Eliezer said for years that X approach to AI couldn’t possibly scale, but then it did”.
Personally, I feel that beliefs within this narrow slice of philosophy topics are unlikely to correlate with being “egregiously wrong” in other fields. (Philosophy is famously hard!! So even though I agree with you that his stance on animal consciousness seems pretty crazy, I don’t really hold this kind of philosophical disagreement against people when they make predictions about, eg, current events.)
Philosophy is pretty much the only subject that I’m very informed about. As a consequence, I can confidently say Eliezer is egregiously wrong about most of the controversial views I can fact-check him on. That’s . . . worrying.
Some other potentially controversial views that a philosopher might be able to fact-check Eliezer on, based on skimming through an index of the Sequences:
Assorted confident statements about the obvious supremacy of Bayesian probability theory and how Frequentists are obviously wrong/crazy/confused/etc. (IMO he’s right about this stuff. But idk if this counts as controversial enough within academia?)
Probably a lot of assorted philosophy-of-science stuff about the nature of evidence, the idea that high-caliber rationality ought to operate “faster than science”, etc. (IMO he’s right about the big picture here, although this topic covers a lot of ground so if you looked closely you could probably find some quibbles.)
The claim / implication that talk of “emergence” or the study of “complexity science” is basically bunk. (Not sure but seems like he’s probably right? Good chance the ultimate resolution would be “emergence/complexity is a much less helpful concept than its fans think, but more helpful than zero”.)
A lot of assorted references to cognitive and evolutionary psychology, including probably a number of studies that haven’t replicated—I think Eliezer has expressed regret at some of this and said he would write the Sequences differently today. But there are probably a bunch of somewhat-controversial psychology factoids that Eliezer would still confidently stand by. (IMO you could probably nail him on some stuff here.)
Maybe some assorted claims about the nature of evolution? What it’s optimizing for, what it produces (“adaptation-executors, not fitness-maximizers”), where the logic can & can’t be extended (can corporations be said to evolve? EY says no), whether group selection happens in real life (EY says basically never). Not sure if any of these claims are controversial though.
Lots of confident claims about the idea of “intelligence”—that it is a coherent concept, an important trait, etc. (Vs some philosophers who might say there’s no one thing that can be called intelligence, or that the word intelligence has no meaning, or generally make the kinds of arguments parodied in “On the Impossibility of Supersized Machines”. Surely there are still plenty of these philosophers going around today, even though I think they’re very wrong?)
Some pretty pure philosophy about the nature of words/concepts, and “the relationship between cognition and concept formation”. I feel like philosophers have a lot of hot takes about linguistics, and the way we structure concepts inside our minds, and so forth? (IMO you could at least definitely find some quibbles, even if the big picture looks right.)
Eliezer confidently dismissing what he calls a key tenet of “postmodernism” in several places—the idea that different “truths” can be true for different cultures. (IMO he’s right to dismiss this.)
Some pretty confident (all things considered!) claims about moral anti-realism and the proper ethical attitude to take towards life? (I found his writing helpful and interesting but idk if it’s the last word, personally I feel very uncertain about this stuff.)
Eliezer’s confident rejection of religion at many points. (Is it too obvious, in academic circles, that all major religions are false? Or is this still controversial enough, with however many billions of self-identified believers worldwide, that you can get credit for calling it?)
It also feels like some of the more abstract AI alignment stuff (about the fundamental nature of “agents”, what it means to have a “goal” or “values”, etc) might be amenable to philosophical critique.
Maybe you toss out half of those because they aren’t seriously disputed by any legit academics. But, I am pretty sure that at least postmodern philosophers, “complexity scientists”, people with bad takes on philosophy-of-science / philosophy-of-probability, and people who make “On the Impossibility of Supersized Machines”-style arguments about intelligence, are really out there! They at least consider themselves to be legit, even if you and I are skeptical! So I think EY would come across with a pretty good track record of correct philosophy at the end of the day, if you truly took the entire reference class of “controversial philosophical claims” and somehow graded how correct EY was (in practice, since we haven’t yet solved philosophy—how close he is to your own views?), and compared this to how correct the average philosopher is.
Assorted confident statements about the obvious supremacy of Bayesian probability theory and how Frequentists are obviously wrong/crazy/confused/etc. (IMO he’s right about this stuff. But idk if this counts as controversial enough within academia?)
His claims about Bayes go far beyond “better than frequentism”. He also claims that it can be used as the sole basis of epistemology, and that it is better than “science”. Bayes, of course, is not a one-stop shop for epistemology, because it can’t generate hypotheses or handle paradigm shifts. It’s also far too complex to use in practice for informal decision-making. Most “Bayesians” are deceiving themselves about how much they are using it.
Almost his only argument for “science wrong, Bayes right” is the supposedly “slam dunk” nature of MWI—which, oddly, you don’t mention directly.
Talk of emergence without any mechanism of emergence is bunk, but so is talk of reductionism without specific reductive explanations. Which is a live issue, because many rationalists do regard reductionism as necessary and a priori. Since it isn’t, other models and explanations are possible—reduction isn’t necessary, so emergence is possible.
I feel like philosophers have a lot of hot takes about linguistics, and the way we structure concepts inside our minds, and so forth?
Is that good or bad?
the idea that different “truths” can be true for different cultures
That’s obviously true of a subset of claims, eg what counts as money, or how fast you are allowed to drive. It would be false if applied to everything, but it is very difficult to find a postmodernist who says so in so many words.
Some pretty confident (all things considered!) claims about moral anti-realism and the proper ethical attitude to take towards life?
I have never discerned a single clear theory of ethics or metaethics in Yudkowsky’s writing. The linked article does not make a clear commitment to either realism or anti-realism AFAICS. IMO he has as many as four theories:
0. The Argument Against Realism, maybe.
1. The Three Word Theory (Morality is values).
2. Coherent Extrapolated Volition.
3. Utilitarianism of Some Variety.
Eliezer’s confident rejection of religion at many points.
The argument for atheism from Solomonoff induction is bizarre.
SI can only work in an algorithmic universe. Inasmuch as it is considering hypotheses, it is considering which algorithm is actually generating observed phenomena. It can’t consider and reject any non-algorithmic hypothesis, including non-algorithmic (non-Turing-computable) physics. Yet rationalists believe that SI can resolve theology in the direction of atheism.
Most theology regards God as supernatural or non-physical... but it is very doubtful that SI can even consider a supernatural deity. If SI cannot consider the hypothesis of supernaturalism, it cannot reject it. At best, if you allow that it can consider physical hypotheses, it can only consider a preternatural deity, a Ray Harryhausen god that’s big and impressive, but still material and finite.
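To make the “algorithmic universe” point concrete, here’s a toy sketch (Python; everything in it is hypothetical and purely illustrative) of a Solomonoff-style prior: hypotheses are programs, weighted by simplicity, and anything that isn’t a program never enters the hypothesis space at all, so it can be neither confirmed nor rejected.

```python
# Toy sketch of a Solomonoff-style prior (purely illustrative; real
# Solomonoff induction is uncomputable). Hypotheses are programs; each
# gets prior weight 2^(-complexity), with string length standing in
# for complexity. A "supernatural" hypothesis is not a program, so it
# never appears in this space at all.

candidates = {
    # hypothetical mini-programs mapped to the data they would generate
    "print 0101": "0101",
    "repeat 01": "0101",
    "print 0000": "0000",
}

observed = "0101"

prior = {prog: 2 ** -len(prog) for prog in candidates}

# Condition on the data: keep only programs whose output matches.
posterior = {p: w for p, w in prior.items() if candidates[p] == observed}
total = sum(posterior.values())
for prog, w in sorted(posterior.items()):
    print(f"{prog!r}: {w / total:.2f}")
```

Note that the shorter consistent program ends up with the larger posterior share; the “supernatural deity” hypothesis never even appears as a candidate, which is exactly the worry above.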
Most “Bayesians” are deceiving themselves about how much they are using it.
This is a frequently-made accusation which has very little basis in reality. The world is a big place, so you will be able to find some examples of such people, but central examples of LessWrong readers, rationalists, etc, are not going around claiming that they run their entire lives on explicit Bayes.
Nonetheless, the founder claims they should be.
Pretty sure it’s just false. First found example: the last post by EY
And then I thought to myself, “This LK99 issue seems complicated enough that it’d be worth doing an actual Bayesian calculation on it”—a rare thought; I don’t think I’ve done an actual explicit numerical Bayesian update in at least a year.
In the process of trying to set up an explicit calculation, I realized I felt very unsure about some critically important quantities, to the point where it no longer seemed worth trying to do the calculation with numbers. This is the System Working As Intended.
That’s a story where he thinks he should do a Bayesian analysis, then doesn’t. It’s not a story where no one should do one.
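For concreteness, the kind of “explicit numerical Bayesian update” EY describes can be as small as a few lines. Here is a minimal sketch (Python; every number is hypothetical), which also illustrates the earlier point that the hypothesis and the likelihoods have to be supplied from outside; the update rule only turns the crank.

```python
# Minimal sketch of an explicit numerical Bayesian update.
# All numbers are hypothetical, chosen only for illustration.

prior_odds = 1 / 99       # prior P(H) = 1%, written as odds of 1:99
likelihood_ratio = 20     # evidence judged 20x likelier if H is true

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior probability: {posterior_prob:.1%}")  # ~16.8%

# Note that the hypothesis H, the prior, and the likelihood ratio all
# come from outside the calculation; Bayes' rule cannot generate them.
```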
Good point that rationalism over-emphasizes the importance of Bayes’ theorem in a pretty ridiculous way, even if most of the individual statements about Bayes’ theorem are perfectly correct. I feel like if one were trying to evaluate Eliezer or the rationalist community on some kind of overall philosophy scorecard, there would be a lot of situations like this—both “the salience is totally out of whack here even though it’s not technically /wrong/...”, and “this seems like a really important and true sentiment, but it’s not really the kind of thing that’s considered within the purview of academic philosophy...” (Such as the discussion about ethics / morality / value, and many other parts of the Sequences… I think there is basically a lot of helpful stuff in those posts, some of which might be controversial, but it isn’t really an Official Philosophical Debate over stuff like whether anti-realism is true. It’s more like “here’s how I think you should live your life, IF anti-realism is true”.)
Didn’t mention many-worlds because it doesn’t feel like the kind of thing that a philosopher would be fully equipped to adjudicate? I personally don’t feel like I know enough to have opinions on different quantum mechanics interpretations or other issues concerning the overall nature / reality of the universe—I still feel very uncertain and confused about that stuff, even though long ago I was a physics major and hoped to some day learn all about it. Although I guess I am sorta more sympathetic to Many Worlds than some of the alternatives?? Hard to think about, somehow...
Philosophers having hot takes on linguistics and the relationship between words and concepts—not good or bad that they have so many takes, and I’m also not sure if the takes themselves are good or bad. It is just my impression that, unlike some of the stuff above, philosophy seems to have really spent a lot of time debating these issues, and thus it would be ripe for finding well-formed disagreements between EY and various mainstream schools of thought. I do think that maybe philosophers over-index a little on thinking about the nature of words and language (ie that they have “too many takes”), but that doesn’t seem like such a bad thing—I’m glad somebody’s thinking about it, even if it doesn’t strike me as the most important area of inquiry!
Yeah, agreed that that Solomonoff induction argument feels very bizarre! I had never encountered that before. I meant to refer to the many different arguments for atheism sprinkled throughout the Sequences, including many references to the all-time classic idea that our discovery of the principles of evolution and the mechanics of the brain is sufficient to “explain away” the biggest mysteries about the origin of humanity, and should thus sideline the previously-viable hypothesis of religious claims being true. (See here and here.) EY seems to (rightly IMO) consider the falseness of major religious claims to be a “slam dunk”, ie, totally overdetermined—the Sequences are full of funny asides and stories where various religious people are shown to be making very obvious reasoning errors, etc.