I was crying the other night because our light cone is about to get ripped to shreds. I’m gonna do everything I can to do battle against the forces that threaten to destroy us.
If you find yourself in a stereotypically cultish situation like that, consider that you might have been brainwashed, like countless others before you, and countless others after you. “This time is different” is the worst argument in the world, because it is always wrong. “But really, this time the AI/Comet/Nibiru/… will kill us all!” Yeah. Sure. Every time.
Consider that I have carefully thought about this for a long time, and that I’m not going to completely override my reasoning and ape the heuristic “if internet stranger thinks I’m in a cult then I’m in a cult.”
That was not the heuristic I referred to. More like: “what is a reference class of cults, and does this particular movement pattern-match it? What meta-level (not object-level) considerations distinguish it from the rest of the reference class?” I assume that you “have carefully thought about this for a long time” and have reasonably good answers, whatever they are.
Humans continuously pick their own training data and are generally not very aware of the implicit bias this causes, or of the consequent attractor dynamics. This could be the one bias that really matters, and ironically it goes largely unrecognized in the one community supposedly most concerned with cognitive biases.
Debates over “who’s in what reference class” tend to waste arbitrary amounts of time while going nowhere. A more helpful framing of your question might be: “given that you’re participating in a community that culturally reinforces this idea, are you sure you’ve fully accounted for confirmation bias and groupthink in your views on AI risk?” To me, LessWrong does not look like a cult, but that does not imply it is immune to epistemological problems like groupthink.