I would love to be better at contrarianism, but I don’t know where to begin.
I got where I am today mostly through trial and error.
The General Contrarian Heuristic:
Assume these and such people who claim to be right actually are at-least-somewhat-straightforwardly right, and they have good evidence or arguments that you’re just not aware of. (There are many plausible reasons for your ignorance; e.g. for the longest time I thought Christianity and ufology were just obviously stupid, because I’d only read atheist/skeptic/scientismist diatribes. What evidence filtered evidence?) What is the most plausible evidence or argument that can be found while searching in good faith? This often splits in two directions:
The Vassarian steel method: E.g., you hear lots of stuff about fairies, so you go digging around and find Charles Bonnet syndrome. This might be akin to constructing steel men, but beware!, for it is often a path to sophistry & syncretism. You know how in Dan Brown novels he keeps constructing these shallow connections between spirituality and science in order to show that they’re not actually at odds? Don’t be Dan Brown.
The Newsomelike schizophrenic method: You find Charles Bonnet syndrome but decide that even that isn’t enough—you postulate that daimons are taking advantage of any plausible excuse (e.g. stroke, optical damage, sleep paralysis) to manipulate people into delusion. (You then independently re-derive justifications for burning witches or whatever, ’cuz why not?) This might be akin to paranoid schizophrenia, but beware!, for it is often the path to, um, paranoid schizophrenia.
Some contrarian topics I’ve had fun exploring:
Assume UFO phenomena and Marian apparitions are legit, i.e. caused by some transhumanly powerful process. E.g., the Miracle at Fatima. What would be the mechanism? More pertinently, what would be the motivations?
Assume legit retrocausal psi effects in parapsychology: What would be the mechanism?
Assuming psi is legit, i.e. the retrocausal results are real: why is psi capricious?
Assume intelligent life isn’t fantastically unlikely. Why no signs of intelligent life? (Related to the “why is psi capricious” question.)
Remember, skepticism is easy; it’s the default position: if the phenomenon you’re modeling is actually complex, your explanation will have to be subtle. It’s always too easy to shout “confirmation bias”, “mass hallucination”, “memetic selection pressures”, and what have you. Don’t fall for that trap; it’s just as much of an error as the Dan Brown trap—maybe more so, because at least the Dan Brown trap doesn’t tell you to ignore important evidence.
If you make an argument along the lines of “the prior probability of that hypothesis is low”, deduct 10 of your contrarian points. If you make a reference to the universal prior, deduct 20 points and feel guilty for the next few weeks.
Note that I think I’m a decent contrarian but I’m bad at communicating contrarian ideas; I’m not sure whether this is a personal quirk or a general problem when talking to people who start out assuming that you’re crazy/deluded/trolling/whatever. If there is a General Contrarian Heuristic that’s more amenable to communicating the resultant insights, then maybe that heuristic is better.
“May we not forget interpretations consistent with the evidence, even at the cost of overweighting them.”
Upvoted. The easiest way to get the wrong answer is to never have considered the right answer.
I’ve always thought that imagination belonged on the list of rationalist virtues.
I like that a lot.
“What do you think are the rationalist virtues?” might be an interesting discussion post.
For comparison, the General Chess Heuristic: Think about a move you could make, think about the moves your opponent could make in reply, think about what moves you could make if they replied with any of those candidate moves, &c.; evaluate all possible resultant positions, subject to search heuristics and time constraints.
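The search procedure described above is, in essence, depth-limited minimax. A minimal sketch in Python, using a toy take-the-stones game as a stand-in for chess (every name and the game itself are illustrative assumptions, not anything from the thread):

```python
def moves(pile):
    """Legal moves in the toy game: take 1-3 stones from a pile;
    whoever takes the last stone wins."""
    return [take for take in (1, 2, 3) if take <= pile]

def apply_move(pile, take):
    """Successor position after taking `take` stones."""
    return pile - take

def evaluate(pile):
    """Crude heuristic value at the search horizon; 0 means 'unclear'."""
    return 0

def minimax(pile, depth, maximizing):
    """Think about your moves, the opponent's replies, your replies to
    those, &c., down to `depth` plies, then evaluate the positions."""
    legal = moves(pile)
    if not legal:
        # The previous player took the last stone, so whoever is
        # to move now has already lost.
        return -1 if maximizing else +1
    if depth == 0:
        return evaluate(pile)  # out of time: fall back on the heuristic
    values = (minimax(apply_move(pile, m), depth - 1, not maximizing)
              for m in legal)
    return max(values) if maximizing else min(values)

def best_move(pile, depth=6):
    """Pick the move whose resulting position is best for us, assuming
    the opponent then plays their best reply, and so on."""
    return max(moves(pile),
               key=lambda m: minimax(apply_move(pile, m), depth - 1, False))
```

In this game, multiples of 4 are losing positions for the player to move, so e.g. `best_move(5)` returns 1, leaving the opponent a pile of 4. The novice error described below corresponds to calling `evaluate` immediately after your own `apply_move` instead of recursing through the opponent’s replies.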
What’s interesting is that novice chess players reliably forget to even consider what moves their opponent could make; their thought process barely includes the opponent’s possible thought process as a fundamental subroutine. I think novice rationalists make the same error (where “opponent” is “person or group of people who disagree with me”), and unfortunately, unlike in chess, they don’t often get any feedback alerting them to their mistake.
(Interestingly, Roko once almost defeated me in chess despite having significantly less experience than me, because he just thought really hard and reliably calculated a ton of lines. I’d never seen anyone do that successfully, and was very impressed. I would’ve lost except he made a silly blunder in the endgame. He who has ears to hear, let him hear.)