So, ummm … these beliefs are not controversial, but they are low-status?
Feynman and Hawking and many other well-known theoretical physicists have supported it.
And that is evidence that MW (many-worlds) is not a low-status, or crackpot, belief. Certainly not among physicists.
Just like “you can run people on game of life” is not a low-status belief, certainly not among computer scientists.
Sure, these beliefs are low-status in communities that are low-status by Less Wrong standards (e.g. various kinds of non-reductionists). And this seems quite unavoidable given some of LW’s goals.
Right, so whether a belief is low status is (among other things) a property of the audience.
But even if the audience consists of people “who like philosophy and [are] familiar with the different streams and philosophical dilemmas, who know computation theory and classical physics, who [have] a good understanding of probability and math, and who [are] naturally curious reductionists”, which is a very educated audience, the cognitive gap is still so large that it cannot be bridged in casual conversation.
I think it’s fair to say a highly educated reductionist audience is considered high status by almost any standard[1]. And my claim is, and my experience is, that if you casually slip in an LW-style argument then, because of the cognitive gap, you won’t be able to explain exactly what you mean, because it’s extraordinarily difficult to fall back on arguments that don’t depend on the sequences or any other prerequisites.
If you have a belief that you can’t explain coherently, then I think people will assume that’s because your understanding of the subject matter is poor, even though that isn’t the problem at all. So if you try to explain your beliefs but fail to do so in a manner that makes sense (to the audience), then you face a social penalty.
[1] We can’t get away with defining every group that doesn’t reason the way we do as low-status.
I think it’s fair to say a highly educated reductionist audience is considered high status by almost any standard[1].
Extreme non-reductionists tend to form communities with inverted status-ladders (relative to ours) where the high-status members constantly signal adherence to certain baseless assertions.
But even if the audience consists of (LW target audience) … then the cognitive gap is still so large that it cannot be bridged in casual conversation.
A: Hi! Have you ever heard of cellular automata?
B: No. What is it?
A: Well, basically you take a large Cartesian grid and every cell can have two values: “alive” or “dead”. And you modify it using these simple rules … and you can get all kinds of neat patterns.
B: Ah, I might have read something like that somewhere.
A: Did you know it’s Turing-complete?
B: What?
A: Yes, you can run any computer on such a grid! Neat, huh.
B: One learns a new thing every day… (Note: I have gotten this exact response when I told a friend, a mathematician, about the Turing-completeness of the game of life.)
A: So, you’re a reductionist, right? No magical stuff inside the brain?
B: Yes, of course.
A: So in principle, we could simulate a human on a computer, right?
B: For sufficiently large values of “in principle”, yes.
A: So we can run a human on game of life!
B: Oh right. “In principle”. Why should I care, again?
OK, fictional evidence; I have only tried the first half of this conversation in reality. (A sketch of the “simple rules” A gestures at follows below.)
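For concreteness, here is a minimal Python sketch of the “simple rules” A alludes to, assuming Conway’s standard birth/survival rules (the dialogue itself never spells them out). The glider at the end is one of the “neat patterns”, and patterns like it are the building blocks behind the Turing-completeness claim.

```python
from collections import Counter

def step(alive):
    """Advance one generation; `alive` is a set of (x, y) coordinates of live cells."""
    # Count the live neighbours of every cell adjacent to at least one live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Conway's rules: a dead cell with exactly 3 live neighbours is born;
    # a live cell with 2 or 3 live neighbours survives; every other cell dies.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in alive)
    }

# A "glider": after every 4 generations the same shape reappears, shifted one cell diagonally.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
print(sorted(state))  # the original glider translated by (+1, +1)
```

Representing the board as a sparse set of live cells keeps the sketch short; a bounded 2-D array with the same update rule works just as well.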
This conversation starts from the non-controversial side, slowly building the infrastructure for the final declaration. If you have friends tolerant enough for you to introduce the LW sequences conversation by conversation in a “have you ever heard” type of way, and you have a lot of time, this will work fine.
However, the OP seems to be about the situation where you start by underestimating the inferential gap and saying something as if it should be obvious, while it still sounds crazy to your audience. How do you rescue yourself from that without a status hit, and without being dishonest?