I think it’s fair to say a highly educated reductionist audience is considered high status by almost any standard[1].
Extreme non-reductionists tend to form communities with inverted status-ladders (relative to ours) where the high-status members constantly signal adherence to certain baseless assertions.
But even if the audience consists of (LW target audience) … the cognitive gap is still so large that it cannot be bridged in casual conversation.
A: Hi! Have you ever heard of cellular automata?
B: No. What is it?
A: Well, basically you take a large Cartesian grid where every cell can have one of two values: “alive” or “dead”. You modify it using these simple rules … and you can get all kinds of neat patterns.
B: Ah, I might have read something like that somewhere.
A: Did you know it’s Turing-complete?
B: What?
A: Yes, you can run any computer on such a grid! Neat, huh.
B: One learns a new thing every day… (Note: I have gotten this exact response when I told a friend, a mathematician, about the Turing-completeness of the Game of Life.)
A: So, you’re a reductionist, right? No magical stuff inside the brain?
B: Yes, of course.
A: So in principle, we could simulate a human on a computer, right?
B: For sufficiently large values of “in principle”, yes.
A: So we can run a human on the Game of Life!
B: Oh right. “In principle”. Why should I care, again?
OK, this is fictional evidence; I have only tried the first half of this conversation in reality.
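(In case the cellular-automata part is unfamiliar: the rules A hand-waves at above really are that simple. Here is a minimal sketch of one Game of Life step in Python, written as an illustration rather than taken from anywhere in particular; the set-of-live-cells representation is just one convenient choice.)

```python
# A minimal sketch of one Game of Life step -- an illustration, not part of
# the dialogue above. The board is an (unbounded) set of live (x, y) cells.
from collections import Counter

def step(live_cells):
    """Apply Conway's rules once: a live cell survives with 2 or 3 live
    neighbours; a dead cell comes alive with exactly 3 live neighbours."""
    # For every coordinate, count how many of its eight neighbours are alive.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker": three cells in a row flip between horizontal and vertical forever.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```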
This conversation starts from the non-controversial side, slowly building the infrastructure for the final declaration. If you have friends tolerant enough to let you introduce the LW sequences conversation by conversation in a “have you ever heard” sort of way, and you have a lot of time, this will work fine.
However, the OP seems to be about the situation where you start by underestimating the inferential gap and saying something as if it should be obvious, while it still sounds crazy to your audience. How do you rescue yourself from that without a status hit, and without being dishonest?