Reading the comments, I saw it somehow turned into a discussion about whether or not Eliezer Yudkowsky is biased in favor of cryonics, transhumanism, etc. I didn’t read far enough to see anyone hurl accusations of Nazism or Hitler-likeness, but I’ll weigh in and say that I’m new to LessWrong, enjoy a good number of Eliezer’s articles and find them good tools for learning clear thought, and have almost no familiarity with any of his theories (or opinions, as it may be) that fall outside the scope of heuristics, fallacies, statistics, or decision theory. So far I’ve only managed to read a smattering of Bayesian statistics and Feynman (still struggling with both), but I would consider the whole thing a wasted effort if I elevated any human to a level beyond question. If I read Eliezer’s articles on the Affect Heuristic and think “I’ll just accept this as true because Mr. Yudkowsky says it’s true. Phew! Thank goodness someone smarter than me did all that heavy thinking for me,” then CLEARLY I need to reread it.
Odinn
It may not seem fair to respond to something that was meant to be a ‘closing’, but that also shouldn’t be an excuse for making your argument… well, a separate magisterium. If you had taken the time to read the basics (assuming you ever read this, fully five years after claiming to leave; still, others may benefit), you would know that Eliezer isn’t claiming that all religious people are characteristically insane. That hypothesis would be easily falsified by presenting any responsible, educated person who espouses a religious belief (and there are plenty). The actual point, right in the article’s title, is that those beliefs, even if they’re shared by really nifty, otherwise good people, are factually falsifiable.
I think the intended message is that we should get nervous about applying an Absolute, Literal lens to any literature, especially if we get this Wonderful, Amazing, Good feeling from doing so.
I often wonder how people come around to disbelieving things they’ve seen with their own eyes. In your hypothetical, you have seen the flight for yourself, but the Brothers are prestidigitators. I can see the validity in thinking “I’ve just seen something hitherto extraordinary, so let me make sure that any other explanation for what I’ve seen (like wire tricks) is less likely than the postulate that the plane can really fly.” But I don’t think it’s constructive to just pattern-match: “These guys make a living tricking people with unbelievable bologna, so believing even something I SAW, if it was perpetrated by these hoaxers, would make me look stupid. Therefore, I didn’t see that plane fly.”
This reminds me of a thought experiment in which an otherwise accurate average skews toward one extreme when you eliminate the radicals on only one side. It makes mathematical sense. Apparently, a village can come extremely close to guessing the weight of an ox by taking all of their guesses and averaging them, even if some individuals are radically under or over. But change the scope and you may change the average’s accuracy (or its sanity, to use the article’s metaphor). Lock the village in a room with no clocks or windows, wait until 6 am, just before any hint of sunlight, then show them the sky and take their guesses at the time. The radicals who would guess ‘midnight’ won’t change, but the ones who would have said ‘noon’ will, so the average slides ever more inaccurately early. Just a thought model, though; I’ve never read of this precise test being done.
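If it helps to see the arithmetic, here’s a minimal sketch in Python. The numbers and the rule for how the ‘noon’ guessers revise downward are my own made-up assumptions, not anything from the article; the only point is that a one-sided revision drags the mean away from the truth even though the original guesses averaged out.

```python
import random

random.seed(0)

# Hypothetical numbers: guesses are hours on a 24-hour clock, and the true
# time is 6:00, just before sunrise.
TRUE_TIME = 6.0
N = 1000

# Naive guesses scatter symmetrically around the truth, so their mean lands close to it.
naive = [random.gauss(TRUE_TIME, 3.0) for _ in range(N)]
print(sum(naive) / N)    # roughly 6.0

# Now show everyone the dark sky: assume anyone who would have guessed a
# daylight hour (later than 7:00) revises down to 7:00, while the early-side
# "midnight" radicals have no reason to change.
revised = [min(g, 7.0) for g in naive]
print(sum(revised) / N)  # noticeably earlier than 6.0 -- the mean slides one way
```

The min() is only a stand-in for “one side updates, the other doesn’t”; any rule that moves only the ‘noon’ end of the guesses would show the same drift.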