Assorted confident statements about the obvious supremacy of Bayesian probability theory and how Frequentists are obviously wrong/crazy/confused/etc. (IMO he’s right about this stuff. But idk if this counts as controversial enough within academia?)
His claims about Bayes go far beyond “better than frequentism”.
He also claims it can be used as the sole basis of epistemology, and that it is better than “science”. Bayes, of course, is not a one-stop shop for epistemology, because it can’t generate hypotheses or handle paradigm shifts. It’s also far too complex to use in practice for informal decision-making. Most “Bayesians” are deceiving themselves about how much they are using it.
Talk of emergence without any mechanism of emergence is bunk, but so is talk of reductionism without specific reductive explanations. Which is a live issue, because many rationalists do regard reductionism as necessary and a priori. Since it isn’t, other models and explanations are possible: reduction isn’t necessary, so emergence is possible.
I feel like philosophers have a lot of hot takes about linguistics, and the way we structure concepts inside our minds, and so forth?
Is that good or bad?
the idea that different “truths” can be true for different cultures
That’s obviously true of a subset of claims, eg what counts as money, or how fast you are allowed to drive. It would be false if applied to everything, but it is very difficult to find a postmodernist who says so in so many words.
Some pretty confident (all things considered) claims about moral anti-realism and the proper ethical attitude to take towards life?
I have never discerned a single clear theory of ethics or metaethics in Yudkowsky’s writing. The linked article does not make a clear commitment to either realism or anti-realism AFAICS. IMO he has as many as four theories:
0. The Argument Against Realism, maybe.
1. The Three Word Theory (Morality is values).
2. Coherent Extrapolated Volition.
3. Utilitarianism of Some Variety.
Eliezer’s confident rejection of religion at many points.
The argument for atheism from Solomonoff induction is bizarre.
SI can only work in an algorithmic universe. Inasmuch as it is considering hypotheses, it is considering which algorithm is actually generating observed phenomena. It can’t consider and reject any non-algorithmic hypothesis, including non-algorithmic (non-Turing-computable) physics. Rationalists believe that SI can resolve theology in the direction of atheism.
Most theology regards God as supernatural or non-physical... but it is very doubtful that SI can even consider a supernatural deity. If SI cannot consider the hypothesis of supernaturalism, it cannot reject it. At best, if you allow that it can consider physical hypotheses, it can only consider a preternatural deity, a Ray Harryhausen god: something big and impressive, but still material and finite.
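The worry can be stated precisely. Solomonoff’s prior distributes probability only over computable generators: for a universal prefix machine $U$, the prior mass assigned to an observation string $x$ is

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

where the sum ranges over programs $p$ that cause $U$ to output something beginning with $x$. Any hypothesis that is not a program for $U$ contributes no terms to the sum, so it receives prior probability zero: it is never entertained, and therefore can never be tested, confirmed, or rejected.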
Most “Bayesians” are deceiving themselves about how much they are using it.
This is a frequently-made accusation which has very little basis in reality. The world is a big place, so you will be able to find some examples of such people, but central examples of LessWrong readers, rationalists, etc, are not going around claiming that they run their entire lives on explicit Bayes.
And then I thought to myself, “This LK99 issue seems complicated enough that it’d be worth doing an actual Bayesian calculation on it”—a rare thought; I don’t think I’ve done an actual explicit numerical Bayesian update in at least a year.
In the process of trying to set up an explicit calculation, I realized I felt very unsure about some critically important quantities, to the point where it no longer seemed worth trying to do the calculation with numbers. This is the System Working As Intended.
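For concreteness, here is what such an explicit numerical update looks like; a minimal sketch, with every number invented purely for illustration (these are not anyone’s actual estimates about LK99 or anything else):

```python
# Toy explicit Bayesian update. All numbers are illustrative placeholders.
# H: "the claimed effect is real"; E: "an independent replication succeeds".
prior = 0.02            # P(H): prior credence that the claim is real
p_e_given_h = 0.50      # P(E | H): replication succeeds if the effect is real
p_e_given_not_h = 0.01  # P(E | not H): chance of a spurious success otherwise

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E),
# with P(E) expanded via the law of total probability.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"P(H | E) = {posterior:.3f}")  # ~0.505: one success moves 2% to ~50%
```

The hard part, as the quoted passage illustrates, is not the arithmetic but having any principled basis for quantities like `p_e_given_not_h`; when those are sufficiently uncertain, running the calculation stops being worthwhile.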
Good point that rationalism is over-emphasizing the importance of Bayes theorem in a pretty ridiculous way, even if most of the individual statements about Bayes theorem are perfectly correct. I feel like if one was trying to evaluate Eliezer or the rationalist community on some kind of overall philosophy scorecard, there would be a lot of situations like this—both “the salience is totally out of whack here even though it’s not technically /wrong/...”, and “this seems like a really important and true sentiment, but it’s not really the kind of thing that’s considered within the purview of academic philosophy...” (Such as the discussion about ethics / morality / value, and many other parts of the Sequences… I think there is basically a lot of helpful stuff in those posts, some of which might be controversial, but it isn’t really an Official Philosophical Debate over stuff like whether anti-realism is true. It’s more like “here’s how I think you should live your life, IF anti-realism is true”.)
Didn’t mention many-worlds because it doesn’t feel like the kind of thing that a philosopher would be fully equipped to adjudicate? I personally don’t feel like I know enough to have opinions on different quantum mechanics interpretations or other issues concerning the overall nature / reality of the universe—I still feel very uncertain and confused about that stuff, even though long ago I was a physics major and hoped to some day learn all about it. Although I guess I am sorta more sympathetic to Many Worlds than some of the alternatives?? Hard to think about, somehow...
Philosophers having hot takes on linguistics and the relationship between words and concepts—not good or bad that they have so many takes, and I’m also not sure if the takes themselves are good or bad. It is just my impression that, unlike some of the stuff above, philosophy seems to have really spent a lot of time debating these issues, and thus it would be ripe for finding well-formed disagreements between EY and various mainstream schools of thought. I do think that maybe philosophers over-index a little on thinking about the nature of words and language (ie that they have “too many takes”), but that doesn’t seem like such a bad thing—I’m glad somebody’s thinking about it, even if it doesn’t strike me as the most important area of inquiry!
Yeah, agreed that that Solomonoff induction argument feels very bizarre! I had never encountered that before. I meant to refer to the many different arguments for atheism sprinkled throughout the Sequences, including many references to the all-time classic idea that our discovery of the principles of evolution and the mechanics of the brain is sufficient to “explain away” the biggest mysteries about the origin of humanity, and should thus sideline the previously-viable hypothesis of religious claims being true. (See here and here.) EY seems to (rightly IMO) consider the falseness of major religious claims to be a “slam dunk”, ie, totally overdetermined to be false—the Sequences are full of funny asides and stories where various religious people are shown to be making very obvious reasoning errors, etc.
His claims about Bayes go far beyond “better than frequentism”. He also claims it can be used as the sole basis of epistemology, and that it is better than “science”. Bayes, of course, is not a one-stop shop for epistemology, because it can’t generate hypotheses or handle paradigm shifts. It’s also far too complex to use in practice for informal decision-making. Most “Bayesians” are deceiving themselves about how much they are using it.
Almost his only argument for “science wrong, Bayes right” is the supposedly “slam dunk” nature of MWI, which, oddly, you don’t mention directly.
This is a frequently-made accusation which has very little basis in reality. The world is a big place, so you will be able to find some examples of such people, but central examples of LessWrong readers, rationalists, etc, are not going around claiming that they run their entire lives on explicit Bayes.
Nonetheless, the founder claims they should be.
Pretty sure it’s just false.
First found example: the last post by EY
That’s a story where he thinks he should do a Bayesian analysis, then doesn’t. It’s not a story where no one should do one.