The explanation has to actually be made to said outsiders in an examinable, step-by-step fashion.
By the time a person can grasp the chain of inference, and by the time they are consequentialist and Aumann-agreement-savvy enough for it to work on them, they probably wouldn’t be considered outsiders. I don’t know if there’s a way around that. It is unfortunate.
To generalise your answer: “the inferential distance is too great to show people why we’re actually right.” This does indeed suck, but it is not reasonably avoidable.
The approach I would personally try is furiously seeding memes that make the ideas that will help close the inferential distance more plausible. See selling ideas in this excellent post.
For what it’s worth, I gather from various comments he’s made in earlier posts that EY sees the whole enterprise of LW as precisely this “furiously seeding memes” strategy.
Or at least that this is how he saw it when he started; I realize that time has passed and people change their minds.
That is, I think he believes (or believed) that understanding this particular issue depends on understanding FAI theory, which in turn depends on understanding cognition (or at least on dissolving common misunderstandings about cognition) and rationality, and that this site (and the book he’s working on) are the best way he knows of to spread the memes that lead to the first step in that chain.
I don’t claim here that he’s right to see it that way, merely that I think he does. That is, I think he’s trying to implement the approach you’re suggesting, given his understanding of the problem.
Well, yes. (I noted it as my approach, but I can’t see another way to approach it.) Which is why throwing LW’s intellectual integrity under the trolley like this is itself remarkable.
Well, there’s integrity, and then there’s reputation, and they’re different.
For example, the approach I proposed after three minutes’ thought is similar to Kaminsky’s, though less urgent. (As is, I think, appropriate… more people are working on hacking internet security than on, um, whatever endeavor it is that would lead one to independently discover dangerous ideas about AI. To put it mildly.)
I think that approach has integrity, but it won’t address the issues of reputation: adopting that approach for a threat that most people consider absurd won’t make me seem any less absurd to those people.
However, discussion of the chain of reasoning is on-topic for LessWrong (discussing a spectacularly failed local chain of reasoning and how and why it failed), and continued removal of bits of the discussion does constitute throwing LessWrong’s integrity in front of the trolley.