Crisis of Faith
It ain’t a true crisis of faith unless things could just as easily go either way.
—Thor Shenkel
Many in this world retain beliefs whose flaws a ten-year-old could point out, if that ten-year-old were hearing the beliefs for the first time. These are not subtle errors we’re talking about. They would be child’s play for an unattached mind to relinquish, if the skepticism of a ten-year-old were applied without evasion. As Premise Checker put it, “Had the idea of god not come along until the scientific age, only an exceptionally weird person would invent such an idea and pretend that it explained anything.”
And yet skillful scientific specialists, even the major innovators of a field, even in this very day and age, do not apply that skepticism successfully. Nobel laureate Robert Aumann, of Aumann’s Agreement Theorem, is an Orthodox Jew: I feel reasonably confident in venturing that Aumann must, at one point or another, have questioned his faith. And yet he did not doubt successfully. We change our minds less often than we think.
This should scare you down to the marrow of your bones. It means you can be a world-class scientist and conversant with Bayesian mathematics and still fail to reject a belief whose absurdity a fresh-eyed ten-year-old could see. It shows the invincible defensive position which a belief can create for itself, if it has long festered in your mind.
What does it take to defeat an error that has built itself a fortress?
But by the time you know it is an error, it is already defeated. The dilemma is not “How can I reject long-held false belief X?” but “How do I know if long-held belief X is false?” Self-honesty is at its most fragile when we’re not sure which path is the righteous one. And so the question becomes:
How can we create in ourselves a true crisis of faith, that could just as easily go either way?
Religion is the trial case we can all imagine.2 But if you have cut off all sympathy and now think of theists as evil mutants, then you won’t be able to imagine the real internal trials they face. You won’t be able to ask the question:
What general strategy would a religious person have to follow in order to escape their religion?
I’m sure that some, looking at this challenge, are already rattling off a list of standard atheist talking points—“They would have to admit that there wasn’t any Bayesian evidence for God’s existence,” “They would have to see the moral evasions they were carrying out to excuse God’s behavior in the Bible,” “They need to learn how to use Occam’s Razor1—”
Wrong! Wrong wrong wrong! This kind of rehearsal, where you just cough up points you already thought of long before, is exactly the style of thinking that keeps people within their current religions. If you stay with your cached thoughts, if your brain fills in the obvious answer so fast that you can’t see originally, you surely will not be able to conduct a crisis of faith.
Maybe it’s just a question of not enough people reading Gödel, Escher, Bach at a sufficiently young age, but I’ve noticed that a large fraction of the population—even technical folk—have trouble following arguments that go this meta.3 On my more pessimistic days I wonder if the camel has two humps.
Even when it’s explicitly pointed out, some people seemingly cannot follow the leap from the object-level “Use Occam’s Razor! You have to see that your God is an unnecessary belief!” to the meta-level “Try to stop your mind from completing the pattern the usual way!” Because in the same way that all your rationalist friends talk about Occam’s Razor like it’s a good thing, and in the same way that Occam’s Razor leaps right up into your mind, so too, the obvious friend-approved religious response is “God’s ways are mysterious and it is presumptuous to suppose that we can understand them.” So for you to think that the general strategy to follow is “Use Occam’s Razor,” would be like a theist saying that the general strategy is to have faith.
“But—but Occam’s Razor really is better than faith! That’s not like preferring a different flavor of ice cream! Anyone can see, looking at history, that Occamian reasoning has been far more productive than faith—”
Which is all true. But beside the point. The point is that you, saying this, are rattling off a standard justification that’s already in your mind. The challenge of a crisis of faith is to handle the case where, possibly, our standard conclusions are wrong and our standard justifications are wrong. So if the standard justification for X is “Occam’s Razor!” and you want to hold a crisis of faith around X, you should be questioning if Occam’s Razor really endorses X, if your understanding of Occam’s Razor is correct, and—if you want to have sufficiently deep doubts—whether simplicity is the sort of criterion that has worked well historically in this case, or could reasonably be expected to work, et cetera. If you would advise a religionist to question their belief that “faith” is a good justification for X, then you should advise yourself to put forth an equally strong effort to question your belief that “Occam’s Razor” is a good justification for X.4
If “Occam’s Razor!” is your usual reply, your standard reply, the reply that all your friends give—then you’d better block your brain from instantly completing that pattern, if you’re trying to instigate a true crisis of faith.
Better to think of such rules as, “Imagine what a skeptic would say—and then imagine what they would say to your response—and then imagine what else they might say, that would be harder to answer.”
Or, “Try to think the thought that hurts the most.”
And above all, the rule:
Put forth the same level of desperate effort that it would take for a theist to reject their religion.
Because if you aren’t trying that hard, then—for all you know—your head could be stuffed full of nonsense as bad as religion.
Without a convulsive, wrenching effort to be rational, the kind of effort it would take to throw off a religion—then how dare you believe anything, when Robert Aumann believes in God?
Someone (I forget who) once observed that people had only until a certain age to reject their religious faith. Afterward they would have answers to all the objections, and it would be too late. That is the kind of existence you must surpass. This is a test of your strength as a rationalist, and it is very severe; but if you cannot pass it, you will be weaker than a ten-year-old.
But again, by the time you know a belief is an error, it is already defeated. So we’re not talking about a desperate, convulsive effort to undo the effects of a religious upbringing, after you’ve come to the conclusion that your religion is wrong. We’re talking about a desperate effort to figure out if you should be throwing off the chains, or keeping them. Self-honesty is at its most fragile when we don’t know which path we’re supposed to take—that’s when rationalizations are not obviously sins.
Not every doubt calls for staging an all-out Crisis of Faith. But you should consider it when:
A belief has long remained in your mind;
It is surrounded by a cloud of known arguments and refutations;
You have sunk costs in it (time, money, public declarations);
The belief has emotional consequences (note this does not make it wrong);
It has gotten mixed up in your personality generally.
None of these warning signs are immediate disproofs. These attributes place a belief at risk for all sorts of dangers, and make it very hard to reject when it is wrong. And they hold for Richard Dawkins’s belief in evolutionary biology, not just the Pope’s Catholicism.
Nor does this mean that we’re only talking about different flavors of ice cream. Two beliefs can inspire equally deep emotional attachments without having equal evidential support. The point is not to have shallow beliefs, but to have a map that reflects the territory.
I emphasize this, of course, so that you can admit to yourself, “My belief has these warning signs,” without having to say to yourself, “My belief is false.”
But what these warning signs do mark is a belief that will take more than an ordinary effort to doubt effectively. It will take more than an ordinary effort to doubt in such a way that if the belief is in fact false, you will in fact reject it. And where you cannot doubt in this way, you are blind, because your brain will hold the belief unconditionally. When a retina sends the same signal regardless of the photons entering it, we call that eye blind.
When should you stage a Crisis of Faith?
Again, think of the advice you would give to a theist: If you find yourself feeling a little unstable inwardly, but trying to rationalize reasons the belief is still solid, then you should probably stage a Crisis of Faith. If the belief is as solidly supported as gravity, you needn’t bother—but think of all the theists who would desperately want to conclude that God is as solid as gravity. So try to imagine what the skeptics out there would say to your “solid as gravity” argument. Certainly, one reason you might fail at a crisis of faith is that you never really sit down and question in the first place—that you never say, “Here is something I need to put effort into doubting properly.”
If your thoughts get that complicated, you should go ahead and stage a Crisis of Faith. Don’t try to do it haphazardly; don’t try it in an ad-hoc spare moment. Don’t rush to get it done with quickly, so that you can say, “I have doubted, as I was obliged to do.” That wouldn’t work for a theist, and it won’t work for you either. Rest up the previous day, so you’re in good mental condition. Allocate some uninterrupted hours. Find somewhere quiet to sit down. Clear your mind of all standard arguments; try to see from scratch. And make a desperate effort to put forth a true doubt that would destroy a false—and only a false—deeply held belief.
Elements of the Crisis of Faith technique have been scattered over many essays:
Avoiding Your Belief’s Real Weak Points—One of the first temptations in a crisis of faith is to doubt the strongest points of your belief, so that you can rehearse your good answers. You need to seek out the most painful spots, not the arguments that are most reassuring to consider.
The Meditation on Curiosity—Roger Zelazny once distinguished between “wanting to be an author” versus “wanting to write,” and there is likewise a distinction between wanting to have investigated and wanting to investigate. It is not enough to say, “It is my duty to criticize my own beliefs”; you must be curious, and only uncertainty can create curiosity. Keeping in mind conservation of expected evidence may help you update yourself incrementally: for every single point that you consider, and each element of new argument and new evidence, you should not expect your beliefs to shift more (on average) in one direction than another. Thus you can be truly curious each time about how it will go. (A numeric sketch of this conservation property follows this list.)
Original Seeing—To prevent standard cached thoughts from rushing in and completing the pattern.
The Litany of Gendlin and the Litany of Tarski—People can stand what is true, for they are already enduring it. If a belief is true, you will be better off believing it, and if it is false, you will be better off rejecting it. You would advise a religious person to try to visualize fully and deeply the world in which there is no God, and to, without excuses, come to the full understanding that if there is no God then they will be better off believing there is no God. If one cannot come to accept this on a deep emotional level, one will not be able to have a crisis of faith. So you should put in a sincere effort to visualize the alternative to your belief, the way that the best and highest skeptic would want you to visualize it. Think of the effort a religionist would have to put forth to imagine, without corrupting it for their own comfort, an atheist’s view of the universe.
Tsuyoku Naritai!—The drive to become stronger.
The Genetic Heuristic—You should be extremely suspicious if you have many ideas suggested by a source that you now know to be untrustworthy, but by golly, it seems that all the ideas still ended up being right.
The Importance of Saying “Oops”—It really is less painful to swallow the entire bitter pill in one terrible gulp.
Singlethink—The opposite of doublethink. See the thoughts you flinch away from, that appear in the corner of your mind for just a moment before you refuse to think them. If you become aware of what you are not thinking, you can think it.
Affective Death Spirals and Resist the Happy Death Spiral—Affective death spirals are prime generators of false beliefs that it will take a Crisis of Faith to shake loose. But since affective death spirals can also get started around real things that are genuinely nice, you don’t have to admit that your belief is a lie, to try and resist the halo effect at every point—refuse false praise even of genuinely nice things. Policy debates should not appear one-sided.
Hold Off On Proposing Solutions—Don’t propose any solutions until the problem has been discussed as thoroughly as possible. Make your mind hold off on knowing what its answer will be; and try for five minutes before giving up—both generally, and especially when pursuing the devil’s point of view.
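The conservation-of-expected-evidence point under “The Meditation on Curiosity” can be made concrete. What follows is a minimal numeric sketch with invented probabilities, not anything from the essays themselves:

```python
# Conservation of expected evidence, with invented numbers.
# Hypothesis H starts at P(H) = 0.3; a test E has P(E|H) = 0.8
# and P(E|not-H) = 0.2.
p_h = 0.3
p_e_given_h = 0.8
p_e_given_not_h = 0.2

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h  # P(E) = 0.38

post_if_e = p_h * p_e_given_h / p_e                  # ~0.63 if E is observed
post_if_not_e = p_h * (1 - p_e_given_h) / (1 - p_e)  # ~0.10 if E is absent

# The probability-weighted average of the possible posteriors equals
# the prior exactly: before looking, you cannot expect the evidence
# to shift your belief more in one direction than the other.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - p_h) < 1e-12
```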
And these standard techniques, discussed in How to Actually Change Your Mind and Map and Territory, are particularly relevant:
The sequence on the bottom line and rationalization, which explains why it is always wrong to selectively argue one side of a debate.
Positive bias, motivated skepticism, and motivated stopping, lest you selectively look for support, selectively look for counter-counterarguments, and selectively stop the argument before it gets dangerous. Missing alternatives are a special case of stopping. A special case of motivated skepticism is fake humility, where you bashfully confess that no one can know something you would rather not know. Don’t selectively demand too much authority of counterarguments.
Beware of semantic stopsigns, applause lights, and the choice between explaining, worshiping, and ignoring something.
Feel the weight of burdensome details—each detail a separate burden, a point of crisis.
But really, there’s rather a lot of relevant material, here and on Overcoming Bias. There are ideas I have yet to properly introduce. There is the concept of isshokenmei—the desperate, extraordinary, convulsive effort to be rational. The effort that it would take to surpass the level of Robert Aumann and all the great scientists throughout history who never broke free of their faiths.
The Crisis of Faith is only the critical point and sudden clash of the longer isshokenmei—the lifelong uncompromising effort to be so incredibly rational that you rise above the level of stupid damn mistakes. It’s when you get a chance to use the skills that you’ve been practicing for so long, all-out against yourself.
I wish you the best of luck against your opponent. Have a wonderful crisis!
1See “Occam’s Razor” (in Map and Territory).
2Readers born to atheist parents have missed out on a fundamental life trial, and must make do with the poor substitute of thinking of their religious friends.
3See “Archimedes’s Chronophone” (http://lesswrong.com/lw/h5/archimedess_chronophone) and “Chronophone Motivations” (http://lesswrong.com/lw/h6/chronophone_motivations).
4Think of all the people out there who don’t understand the Minimum Description Length or Solomonoff induction formulations of Occam’s Razor, who think that Occam’s Razor outlaws many-worlds or the simulation hypothesis. They would need to question their formulations of Occam’s Razor and their notions of why simplicity is a good thing. Whatever X in contention you just justified by saying “Occam’s Razor!” is, I bet, not the same level of Occamian slam dunk as gravity.
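For reference, the two formulations of Occam’s Razor that footnote 4 alludes to, in their standard textbook forms (nothing here is specific to this essay):

```latex
% Solomonoff induction: hypotheses are programs p for a universal
% machine U; a hypothesis's prior weight falls off exponentially in
% the length of the shortest program that generates it.
P(h) \propto 2^{-K(h)}, \qquad K(h) = \min\{\, |p| : U(p) = h \,\}

% Minimum Description Length: prefer the hypothesis that minimizes
% the total code length of the hypothesis plus the data given it.
h^{*} = \arg\min_{h} \left[ L(h) + L(D \mid h) \right]
```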
This is an unusually high-quality post, even for you, Eliezer; congrats!
It seems that it takes an Eliezer-level rationalist to make an explicit account of what any ten-year-old can do intuitively. For those not quite Eliezer-level or not willing to put in the effort, this is really frustrating in the context of an argument or debate.
I suspect that there are many people in this world who are, by their own standards, better off remaining deluded. I am not one of them; but I think you should qualify statements like “if a belief is false, you are better off knowing that it is false”.
It is even possible that some overoptimistic transhumanists/singularitarians are better off, by their own standards, remaining deluded about the potential dangers of technology. You have the luxury of being intelligent enough to be able to utilize your correct belief about how precarious our continued existence is becoming. For many people, such a belief is of no practical benefit yet is psychologically detrimental.
This creates a “tragedy of the commons” type problem in global catastrophic risks: each individual is better off living in a fool’s paradise, but we’d all be much better off if everyone faced up to the dangers of future technology.
Many in this world retain beliefs whose flaws a ten-year-old could point out
Very true. Case in point: the belief that “minimum description length” or “Solomonoff induction” can actually predict anything. Choose a language that can describe MWI more easily than Copenhagen, and they say you should believe MWI; choose a language that can describe Copenhagen more easily than MWI, and they say you should believe Copenhagen. I certainly could have told you that when I was ten...
The argument in this post is precisely analogous to the following:
Bayesian reasoning cannot actually predict anything. Choose priors that result in the posterior for MWI being greater than that for Copenhagen, and it says you should believe MWI; choose priors that result in the posterior for Copenhagen being greater than that for MWI, and it says you should believe Copenhagen.
The thing is, though, choosing one’s own priors is kind of silly, and choosing one’s own priors with the purpose of making the posteriors be a certain thing is definitely silly. Priors should be chosen to be simple but flexible. Likewise, choosing a language with the express purpose of being able to express a certain concept simply is silly; languages should be designed to be simple but flexible.
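The dispute in this exchange can be seen in miniature with a two-hypothesis Bayes update; the numbers below are invented purely for illustration:

```python
# Two hypotheses that assign identical likelihoods to every observation
# we can currently make (as MWI and Copenhagen arguably do).

def posterior_a(prior_a, lik_a, prior_b, lik_b):
    """Posterior probability of hypothesis A after one observation."""
    joint_a = prior_a * lik_a
    joint_b = prior_b * lik_b
    return joint_a / (joint_a + joint_b)

# Same evidence, two different prior choices:
print(posterior_a(0.9, 0.5, 0.1, 0.5))  # prior favors A -> posterior 0.9
print(posterior_a(0.1, 0.5, 0.9, 0.5))  # prior favors B -> posterior 0.1

# With equal likelihoods the posterior simply echoes the prior, which is
# why the choice of prior (or of description language) carries all the
# weight in the MWI-vs-Copenhagen dispute.
```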
It seems to me that you’re waving the problem away instead of solving it. For example, I don’t know of any general method for devising a “non-silly” prior for any given parametric inference problem. Analogously, what if your starting language accidentally contains a shorter description of Copenhagen than MWI?
If you’re just doing narrow AI, then look at your hypothesis that describes the world (e.g. “For any two people, they have some probability X of having a relationship we’ll call P. For any two people with relationship P, every day, they have a probability Y of causing perception A.”), then fill in every parameter (in this case, we have X and Y) with reasonable distributions (e.g. X and Y independent, each with a 1⁄3 chance of being 0, a 1⁄3 chance of being 1, and a 1⁄3 chance of being the uniform distribution).
Yes, I said “reasonable”. Subjectivity is necessary; otherwise, everyone would have the same priors. Just don’t give any statement an unusually low probability (e.g. a probability practically equal to zero that a certain physical constant is greater than Graham’s number), nor any statement an unusually high probability (e.g. a 50% probability that Christianity is true). I think good rules are that the language your prior corresponds to should not have any atoms that can be described reasonably easily (perhaps 10 atoms or less) using only other atoms, and that every atom should be mathematically useful. (A sketch of sampling from the mixture prior described above appears after this exchange.)
If the starting language accidentally contains a shorter description of Copenhagen than MWI? Spiffy! Assuming there is no evidence either way, Copenhagen will be more likely than MWI. Now, correct me if I’m wrong, but MWI is essentially the idea that the set of things causing wavefunction collapse is empty, while Copenhagen states that it is not empty. Supposing we end up with a 1⁄3 chance of MWI being true and a 2⁄3 chance that it’s some other simple thing, is that really a bad thing? Your agent will end up designing devices that will work only if a certain subinterpretation of the Copenhagen interpretation is true and try them out. Eventually, most of the simple, easily-testable versions of the Copenhagen interpretation will be ruled out—if they are, in fact, false—and we’ll be left with two things: unlikely versions of the Copenhagen interpretation, and versions of the Copenhagen interpretation that are practically identical to MWI.
(Do I get a prize for saying “e.g.” so much?)
Yes. Here is an egg and an EEG.
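Read literally, the prior proposed for X and Y two comments up is a three-part mixture; here is a minimal sketch of drawing from it (the function name is illustrative, not the commenter’s):

```python
import random

def sample_parameter():
    """One draw from the mixture prior described above: 1/3 of the mass
    at 0, 1/3 at 1, and 1/3 spread uniformly over (0, 1)."""
    r = random.random()
    if r < 1 / 3:
        return 0.0
    elif r < 2 / 3:
        return 1.0
    else:
        return random.random()  # the uniform component

# X and Y are drawn independently, as the comment stipulates.
x, y = sample_parameter(), sample_parameter()
```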
The minimum description length formulation doesn’t allow for that at all. You are not allowed to pick whatever language you want, you have to pick the optimal code. If in the most concise code possible, state ‘a’ has a smaller code than state ‘b’, then ‘a’ must be more probable than ‘b’, since the most concise codes possible assign the smallest codes to the most probable states.
So if you wanna know what state a system is in, and you have the ideal (or close to ideal) code for the states in that system, the probability of that state will be strongly inversely correlated with the length of the code for that state.
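The relationship being invoked here is standard Shannon source coding: an optimal prefix code gives a state of probability p a codeword of about −log₂ p bits. A small sketch with made-up probabilities:

```python
import math

# An optimal prefix code assigns a state of probability p a codeword
# of roughly -log2(p) bits, so code length and probability are locked
# together. The distribution below is invented for illustration.
states = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

for state, p in states.items():
    print(state, f"{-math.log2(p):.0f} bits")
# a: 1 bit, b: 2 bits, c and d: 3 bits each. This is why, given a
# near-ideal code, you can read probabilities back off code lengths
# rather than choosing a code to suit your preferred conclusion.
```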
I haven’t read anything like this in my admittedly limited readings on Solomonoff induction. Disclaimer: I am only a mere mathematician in a different field, and have only read a few papers surrounding Solomonoff.
The claims I’ve seen revolve around “assembly language” (for some value of assembly language) being sufficiently simple that any biases inherent in the language are small (some people claim constant multiple on the basis that this is what happens when you introduce a symbol ‘short-circuiting’ a computation). I think a more correct version of Anti-reductionist’s argument should run, “we currently do not know how the choice of language affects SI; it is conceivable that small changes in the base language imply fantastically different priors.”
I don’t know the answer to that, and I’d be very glad to know if someone has proved it. However, I think it’s rather unlikely that someone has proved it, because 1) I expect it will be disproven (on the basis that model-theoretic properties tend to be fragile), and 2) given the current difficulties in explicitly calculating SI, finding an explicit, non-trivial counter-example would probably be difficult.
Note that the earlier thought-experiment (choosing a language in which Copenhagen has a shorter description than MWI) is not such a counter-example, because we do not know if “sufficiently assembly-like” languages can be chosen which exhibit such a bias. I don’t think the above thought-experiment is worth pursuing, because I don’t think we even know a formal (on the level of assembly-like languages) description of either CI or MWI.
Not Solomonoff, minimum description length, I’m coming from an information theory background, I don’t know very much about Solomonoff induction.
OP is talking about Solomonoff priors, no? Is there a way to do inference with minimum description length?
What is OP?
EY
I meant Anti-reductionist, the person potato originally replied to… I suppose grandparent would have been more accurate.
He was talking about both.
So how do you predict with minimum description length?
With respect to the validity of reductionism, out of MML and SI, one theoretically predicts and the other does not. Obviously.
Aren’t you circularly basing your code on your probabilities but then taking your priors from the code?
Yep, but that’s all the proof shows: the more concise your code, the stronger the inverse correlation between the probability of a state and the code length of that state.
Bo, the point is that what’s most difficult in these cases isn’t the thing that the 10-year-old can do intuitively (namely, evaluating whether a belief is credible, in the absence of strong prejudices about it) but something quite different: noticing the warning signs of those strong prejudices and then getting rid of them or getting past them. 10-year-olds aren’t specially good at that. Most 10-year-olds who believe silly things turn into 11-year-olds who believe the same silly things.
Eliezer talks about allocating “some uninterrupted hours”, but for me a proper Crisis of Faith takes longer than that, by orders of magnitude. If I’ve got some idea deeply embedded in my psyche but am now seriously doubting it (or at least considering the possibility of seriously doubting it), then either it’s right after all (in which case I shouldn’t change my mind in a hurry) or I’ve demonstrated my ability to be very badly wrong about it despite thinking about it a lot. In either case, I need to be very thorough about rethinking it, both because that way I may be less likely to get it wrong and because that way I’m less likely to spend the rest of my life worrying that I missed something important.
Yes, of course, a perfect reasoner would be able to sit down and go through all the key points quickly and methodically, and wouldn’t take months to do it. (Unless there were a big pile of empirical evidence that needed gathering.) But if you find yourself needing a Crisis of Faith, then ipso facto you aren’t a perfect reasoner on the topic in question.
Wherefore, I at least don’t have the time to stage a Crisis of Faith about every deeply held belief that shows signs of meriting one.
I think there would be value in some OB posts about resource allocation: deciding which biases to attack first, how much effort to put into updating which beliefs, how to prioritize evidence-gathering versus theorizing, and so on and so forth. (We can’t Make An Extraordinary Effort every single time.) It’s a very important aspect of practical rationality.
If I believe something that’s wrong, it’s probably because I haven’t thought about it, merely how nice it is that it’s true, or how I should believe it… or I’ve just been rehearsing what I’ve read in books about how you should think about it. A few uninterrupted hours is probably enough to get the process of actually thinking about it started.
Some interesting, useful stuff in this post. Minus the status-cocaine of declaring that you’re smarter than Robert Aumann about his performed religious beliefs and the mechanics of his internal mental state. In that area, I think Michael Vassar’s model for how nerds interpret the behavior of others is your God. There’s probably some 10 year olds that can see through it (look everybody, the emperor has no conception that people can believe one thing and perform another). Unless this is a performance on your part too, and there’s shimshammery all the way down!
“How do I know if long-held belief X is false?”
Eliezer, I guess if you are already asking this question you are well on your way. The real problem arises when you didn’t even manage to pinpoint the possibly false belief. And yes, I was a religious person for many years before realizing that I was on the wrong path.
Why didn’t I question my faith? Well, it was so obviously true to me. The thing is: did you ever question heliocentrism? No? Why not? When you ask the question “How do I know if heliocentrism is false?” you are already on your way. The thing is, your brain needs a certain amount of evidence to pinpoint the question.
How did I overcome my religion? I noticed that something was wrong with my worldview, like seeing a déjà vu in the Matrix every now and then. This was on an intellectual level, not as a visible thing, but much more subtle and less obvious, so you really have to be attentive to notice it, to notice that there is a problem in the pattern. Things aren’t the way they should be.
But over time I became more and more aware that the pieces weren’t fitting together. Yet from there to arrive at the conclusion that my basic assumptions were wrong was really not easy. If you live in the Matrix and see strange things happening, how will you arrive at the conclusion that this is because you are in a simulation?
Your posts on rationality were a big help, though. They always say: “Jesus will make you free.” Unfortunately that didn’t work out for me. Well, I finally am free after a decade of false believing, and in all the time I was a believer I never was as happy as I am now.
Good post but this whole crisis of faith business sounds unpleasant. One would need Something to Protect to be motivated to deliberately venture into this masochistic experience.
All these posts present techniques for applying a simple principle: check every step on the way to your belief. They adapt this principle to be more practically useful, allowing a person to start on the way lacking necessary technical knowledge, to know which errors to avoid, which errors come with being human, where not to be blind, which steps to double-check, what constitutes a step and what a map of a step, and so on. All the techniques should work in background mode, gradually improving the foundations, propagating the consequences of the changes to more and more dearly held beliefs, shifting the focus of inquiry.
Crisis of faith finds a target to attack, and boosts the priority of checking the foundations of a specific belief. I’m not sure how useful forcing this process could be; major shifts in defining beliefs take time, and probably deservedly so. Effects of a wrong belief should be undone by the holes in the network supporting these beliefs, not by an executive decision declaring the belief wrong. Even though the executive decision is based on the same grounds, it’s hard to move more than one step of inferential distance without shooting yourself in the foot, before you train yourself to intuitively perceive the holes, or rather the repaired fabric. So I guess that the point of the exercise is in making the later gradual review more likely to seriously consider the evidence, to break the rust, not in changing the outlook overnight. Changing the outlook is the natural conclusion of a long road; it doesn’t take you by surprise. One day you just notice the old outlook to be dead, and so leave it in the past.
Fact check: MDL is not Bayesian. Done properly, it doesn’t even necessarily obey the likelihood principle. Key term: normalized maximum likelihood distribution.
My father is an atheist with Jewish parents, and my mother is a (non-practicing) Catholic. I was basically raised “rationalist”, having grown up reading my father’s issues of Skeptical Inquirer magazine. I find myself in the somewhat uncomfortable position of admitting that I acquired my belief in “Science and Reason” in pretty much the same way that most other people acquire their religious beliefs.
I’m pretty sure that, like everyone else, I’ve got some really stupid beliefs that I hold too strongly. I just don’t know which ones they are!
Great post. I think that this sort of post on rationality is extremely valuable. While one can improve everyday judgment and decision making by learning about rationality from philosophy, econ and statistics, I think that these informal posts can also make a significant difference to people.
The recent posts on AI theorists and EY’s biography were among my least favorite on OB. If you have a choice, please spend more time on either technical sequences (e.g. stuff on concepts/concept space, evolutionary bio, notion of bias in statistics) or stuff on rationality like this.
A good reminder. I’ve recently been studying anarcho-capitalism. It’s easy to get excited about a new, different perspective that has some internal consistency and offers alternatives to obvious existing problems. Best to keep these warnings in mind when evaluating new systems, particularly when they have an ideological origin.
EDIT: This comment is redacted.
Replace “anarcho-capitalism” with “singularitarianism” and that’s the experience I’m having. It’s not so much wondering if a long-held belief is false as wondering if the new belief I’m picking up is false.
“Try to think the thought that hurts the most.”
This is exactly why I like to entertain religious thoughts. My background, training, and inclination are to be a thoroughgoing atheist materialist, so I find that trying to make sense of religious ideas is good mental exercise. Feel the burn!
In that vein, here is an audio recording of Robert Aumann on speaking on “The Personality of God”.
Also, the more seriously religious had roughly the same idea, or maybe it’s the opposite idea. The counterfactuality of religious ideas is part of their strength, apparently.
Here’s a doubt for you: I’m a nerd, I like nerds, I’ve worked on technology, and I’ve loved techie projects since I was a kid. Grew up on SF, all of that.
My problem lately is that I can’t take Friendly AI arguments seriously. I do think AI is possible, that we will invent it. I do think that at some point in the next hundreds of years, it will be game over for the human race. We will be replaced and/or transformed.
I kind of like the human race! And I’m forced to conclude that a human race without that tiny fraction of nerds could last a good long time yet (tens of thousands of years) and would change only slowly, through biological evolution. They would not do much technology, since it takes nerds (in the broadest sense) to do this. But, they would still have fulfilling, human, lives.
On the other hand, I don’t think a human race with nerds can forever avoid inventing a self-destructive technology like AI. So much as I have been brought up to think of politicians and generals as destroyers, and scientists and other nerds as creators, I have to admit that it’s the other way around, ultimately.
The non-nerds can’t destroy the human race. Only we nerds can do that.
That’s my particular crisis of faith. Care to take a side?
I would say that the non-nerds can’t save the human race either though. Without nerds our population never exceeds what can be supported by hunting, gathering, and maybe some primitive agriculture.
Which isn’t much. We’d be constantly hovering just short of being wiped out by some global cataclysm. And there’s some evidence that we’ve narrowly missed just that at least once in our history. If we want to survive long-term we need to get off this rock, and then we need to find at least one other solar system. After that we can take a breather while we think about finding another galaxy to colonize.
Yes, we might destroy ourselves with new technology. But we’re definitely dead without it. And if you look at how many new technologies have been denounced as being harbingers for the end of the world vs how many times the world has actually ended, I’d have to think that gut feelings about what technologies are the most dangerous and how badly we’ll handle them are probably wrong more often than they’re right.
Have you ever heard of the term hubris?
If you can’t imagine ways in which the human race can be destroyed by non-nerds, that shows a lack of imagination, not that it cannot be done. Also, it isn’t as if nerds and non-nerds are actually different species; people who do not have a natural aptitude for a subject are still capable of learning it. If nerds all moved to nerdtopia, other people would study what material there was on the subject and attempt to continue on. If this is not possible, then you have applied the term “nerd” so broadly that it contains the majority of people, and all that would be left are people incapable of fully taking care of themselves without some form of outside assistance, who would thus destroy the human race by sheer ineptitude at basic survival skills.
The vast majority of people are both incapable of and uninterested in creating new technology OR doing science (and their incapability supports their lack of interest). So, if nerds move to nerdtopia, taking some already-deadly technologies with them, the remaining world will never create something AI-like… well, provided that newborns with nerds’ skills are taken away early. People are generally stupid—not only in the sense of exhibiting the specific biases discussed by Eliezer, but also in the sense of lacking both curiosity and larger-than-three working memory (or larger-than-120 IQ, whichever you prefer) in the majority (and larger-than-two/larger-than-100 in a big group). Having intelligence—IQ above roughly 120, or any isomorphic measure—is so rare that from the standard p<0.05 view it’s nonexistent (bell curve, 100 as mean, 10 as sigma).
I’d be interested in a list of questions you had decided to have a crisis of faith over. If I get round to it I might try and have one over whether a system can recursively self-improve in a powerful way or not.
A lot of truths in EY’s post. Though I also agree with Hopefully Anon’s observations—as is so often the case, Eliezer reminds me of Descartes—brilliant, mathematical, uncowed by dogma, has his finger on the most important problems, is aware of how terrifyingly daunting those problems are, thinks he has a universal method to solve those problems.
Trying to set up an artificial crisis in which one outcome is as likely as another is a very bad idea.
If your belief is rationally unjustifiable, a ‘crisis’ in which one has only a fifty-fifty chance of rejecting the belief is not an improvement in rationality. Such a crisis is nothing more than picking a multiple-choice answer at random—and with enough arbitrarily-chosen options, the chance of getting the right one becomes arbitrarily small.
A strategy that actually works is setting your specific beliefs aside and returning to a state of uncertainty, then testing one possibility against the other on down to first principles. Uncertainty != each possibility equally likely.
I think he meant that each possibility appears equally likely before you look at the evidence. Basically reset your prior, if that were possible.
Thank you for this post, Eliezer. I must painfully question my belief that a positive Singularity is likely to occur in the foreseeable future.
Before I post my little comment, being a believer in faith: do you not think your identity stands because there are theists, and as such you are an atheist? If this is your permanent identity, then is it not necessary that theists must exist? Thanks
That doesn’t make sense. It is possible to be a theist without the existence of atheists, just as it is possible to be an earthling without the existence of non-earthlings, and the opposite must also be true.
Nazir Ahmad Bhat, you are missing the point. It’s not a question of identity, like which ice cream flavor you prefer. It’s about truth. I do not believe there is a teapot orbiting around Jupiter, for the various reasons explained on this site (see Absence of evidence is evidence of absence and the posts on Occam’s Razor). You may call this a part of my identity. But I don’t need people to believe in a teapot. Actually, I want everyone to know as much as possible. Promoting false beliefs is harming people, like slashing their tires. You don’t believe in a flying teapot: do you need other people to?
Nazir, must there be atheists in order for you to believe in a god? The “identity” of those who believe that the world is round does not depend on others believing that the world is flat, or vice versa. Truth does not require disagreement.
Excellent post, Eliezer. Along with your comments on MR about the financial crisis, definitely good stuff worth reading.
I would submit that, for you, the belief you are unable to question is materialistic reductionism. I would suggest reading Irreducible Mind which will acquaint you with a great deal of evidence that reality is different from the current model of it you hold in your mind. I would suggest that you begin with chapter 3 which presents a vast body of observational and research evidence from medicine that simply doesn’t fit into your current belief system. Start with the introduction, read the entire introduction (which is very good and fits with many of the more conceptual posts you have made here about avoiding pitfalls along the path of rationality), and then read chapter 3 about empirical findings of the relationship between mind and body.
And this is why the mainstream believes in black holes, dark matter, dark energy and invisible unicorns.
Matthew C.,
You’ve been suggesting that for a while:
http://www.overcomingbias.com/2007/01/godless_profess.html#comment-27993437 http://www.overcomingbias.com/2008/09/psychic-powers.html#comment-130445874
Those who have read it (or the hundreds of pages available on Google Books, which I have examined) don’t seem to be impressed.
Why do you think it’s better than Broderick’s book? If you want to promote it more effectively in the face of silence (http://www.overcomingbias.com/2007/02/what_evidence_i.html), why not pay for a respected reviewer’s time and a written review (in advance, so that you’re not accused of bribing to ensure a favorable view)? Perhaps from a statistician?
Do these methods actually work? There were a few posts here on how more evidence and bias awareness don’t actually change minds or reduce bias, at least not without further effort. Can a practical “Deduce the Truth in 30 Days” guide be derived from these methods, and change the world?
A fifty-fifty chance of choosing your previous belief does not constitute a reasonable test. If your belief is unreasonable, why would treating it as equally plausible as the alternative be valid?
The trick is to suspend belief and negate the biasing tendencies of belief when you re-evaluate, not to treat all potentials as equal.
Eliezer:
I think you should try applying your own advice to this belief of yours. It is usually true, but it is certainly not always true, and reeks of irrational bias. My experience with my crisis of faith seems quite opposite to your conceptions. I was raised in a fundamentalist family, and I had to “make an extraordinary effort” to keep believing in Christianity from the time I was 4 and started reading through the Bible, and finding things that were wrong, to the time I finally “came out” as a non-Christian around the age of 20. I finally gave up being Christian only when I was worn out and tired of putting forth such an extraordinary effort.
So in some cases your advice might do more harm than good. A person who is committed to making “extraordinary efforts” concerning their beliefs is more likely to find justifications to continue to hold onto their belief, than is someone who is lazier, and just accepts overwhelming evidence instead of letting it kick them into an “extraordinary effort.” In other words, you are advocating a combative, Western approach; I am bringing up a more Eastern approach, which is not to be so attached to anything in the first place, but to bend if the wind blows hard enough.
From “Twelve Virtues of Rationality” by Eliezer: “The third virtue is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own.”
Eliezer uses almost the same words as you do. (Oh, and this document is from 2006, so he has not copied your lines.) Some posts earlier, Eliezer accused you of not reading his writings and just making stuff up regarding his viewpoints…
Agreed.
Every time I changed my mind about something, it felt like “quitting,” like ceasing the struggle to come up with evidence for something I wanted to be true but wasn’t. Realizing “It’s so much easier to give up and follow the preponderance of the evidence.”
Examples: taking an economics class made it hard to believe that government interventions are mostly harmless. Learning about archaeology and textual analysis made it hard to believe in the infallibility of the Bible. Hearing cognitive science/philosophy arguments made it hard to believe in Cartesian dualism. Reading more papers made it hard to believe that looking at the spectrum of the Laplacian is a magic bullet for image processing. Extensive conversations with a friend made it hard to believe that I was helping him by advising him against pursuing his risky dreams.
When something’s getting hard to believe, consider giving up the belief. Just let the weight fall. Be lazy. If you’re working hard to justify an idea, you’re probably working too hard.
One of the problems with your examples in both economics and archaeology is that less is known on the subject than what you think is known, especially if you have just taken introductory courses on the subject.
The posts on making an extraordinary effort didn’t explicitly exclude preserving the contents of one’s beliefs as an effort worth being made extraordinarily, so you’ve definitely identified a seeming loophole, and yet you’ve simultaneously seemed to ignore all of the other posts about epistemic rationality.
MichaelG:
The idea is that if we invent Friendly AI first, it will become powerful enough to keep later, Unfriendly ones in check (either alone, or with several other FAIs working together with humanity). You don’t need to avoid inventing one forever: it’s enough to avoid inventing one as the first thing that comes up.
You have a set of beliefs optimized for co-occurence, and you are replacing one of these beliefs with a more-true belief. In other words, the new true belief will cause you harm because of other untrue (or less true) beliefs that you still hold.
If an entire community can be persuaded to adopt a false belief, it may enable them to overcome a tragedy-of-the-commons or prisoners’-dilemma situation.
If you still aren’t convinced whether you are always better-off with a true belief, ask yourself whether you have ever told someone else something that was not quite true, or withheld a truth from them, because you thought the full truth would be harmful.
I was raised in a christian family, fairly liberal Church of England, and my slide into agnosticism started when I about 5-7 when I asked if Santa Claus and God were real. I refused to get confirmed and stopped going to church when I was 13ish I think.
The trouble is that you cannot break new ground this way. You can’t do Einstein-like feats. You should follow the direction of the wind, but engage the nitrous to follow that direction, occasionally stopping and sticking a finger out the window to make sure you are going in the right direction.
If an entire community can be persuaded to adopt a false belief, it may enable them to overcome a tragedy-of-the-commons or prisoners’-dilemma situation.
In a PD, agents hurt each other, not themselves. Obviously false beliefs in my enemy can help me.
Study this deranged rant. Its ardent theism is expressed by its praise of the miracles God can do, if he chooses.
And yet… there is something not quite right here. Isn’t it theistic only as a cloak? Isn’t the ringing denunciation of “Crimes against silence” militant atheism at its most strident?
So here is my idea: Don’t try to doubt a whole core belief. That is too hard. Probe instead for the boundary. Write a little fiction, perhaps a science fiction of first contact, in which you encounter a curious character from a different culture. Write him a borderline belief, troublingly odd to both sides in a dispute about which your own mind is made up. He sits on one of our culture’s fences. What is his view like from up there?
Is he “really” on your side, or “really” on the other side. Now there is doubt you can actually be curious about. You have a thread to pull on; what unravels if you tug?
If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it. I think evolution facilitated self-delusion precisely because that is not the case.
I was a Fred Phelps style ultra-Calvinist and my transition involved scarcely any effort.
Also, anti-reductionist, that’s the first comment you’ve made I felt was worth reading. You may take it as an insult but I felt compelled to give you kudos.
Of course I deliberately did not qualify it. Frankly, if you’re still qualifying the statement, you’re not the intended audience for a post about how to make a convulsive effort to be rational using two dozen different principles.
And you seriously believe that, in all circumstances and for all people with any false belief, those people are better off believing the truth concerning that belief? The obvious counterexample is the placebo effect, where a false belief is scientifically proven to have a benefit. The beneficial effects of false beliefs are so powerful, that you can’t conduct a pharmaceutical study without accounting for them. And you are no doubt familiar with that effect. Another example would be believing that you’re never better off believing a false belief, because then you have more incentive to investigate suspicious beliefs.
The difficult epistemic state to get into is justifiably believing that you’re better off believing falsely about something without already, in some sense, knowing the truth about it.
It’s actually very easy and common to believe that you’re better off believing X, whether or not X is true, without knowing the truth about it. This is also well-justified in decision theory, and by your definition of rationality, if believing X will help you win. A common example is choosing to believe that your date has an “average romantic history” and choosing not to investigate.
If you think you can’t do this, I propose this math problem. Using a random number generator over all American citizens, I have selected Bob (but not identified him to you). If you can guess Bob’s IQ (with margin of error +/- 5 points), you get a prize. Do you think it is possible for Bob’s IQ to be higher or lower than you expected, and if so, do you believe you’re better off not having any expectation at all rather than a potentially false expectation? See, as soon as a question is asked, you fill in the answer with [expected answer range of probabilities] rather than [no data]. (A numeric version of this game appears after this comment.)
It’s much easier to believe something and not investigate, than to investigate and try to deceive yourself. And unless you add as an axiom “unlike all other humans for me false beliefs are never beneficial” (which sounds like a severe case of irony), then a rationalist on occasion must be in favor of said false beliefs. Just out of curiosity, why the switch from “rational” to “epistemic”?
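The Bob’s-IQ game above can be made numeric under the conventional norm that IQ scores are distributed N(100, 15); that distributional assumption is ours, not the comment’s:

```python
from statistics import NormalDist

# Assumed model: IQ normed to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Guess "100" and win if Bob's IQ falls within +/- 5 points of it:
p_win = iq.cdf(105) - iq.cdf(95)
print(f"{p_win:.2f}")  # ~0.26

# The point: a definite expectation held with calibrated uncertainty
# (here, a ~26% chance of winning) beats refusing to expect anything.
```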
Eliezer, what do you mean here? Do you mean:
(A1) Individuals in the reference class really are always better off with the truth, with sufficient probability that the alternative does not bear investigating;
(A2) Humans are so unreliable as judges of what we would and would not benefit from being deceived about that the heuristic “we’re always better off with the truth” is more accurate than the available alternatives;
(B) Individuals must adopt the Noble Might-be-truth “I’m always better off with the truth” to have a chance at the Crisis of Faith technique?
Eliezer: The position that people may be better off deluded in some situations is VERY compelling. If your audience is people who are literally NEVER better off deluded then I sincerely doubt that it includes you or anyone else. Obviously not every belief need receive all appropriate qualifications every single time, but when someone else points out a plausible qualification you should, as a rationalist, acknowledge it.
I’m very open to Anna’s (A1), especially given the special difficulties of this sort of investigation, but only with respect to themselves. I would expect someone as smart as me who knew me well enough to some day come upon a situation where I should, by my values, be deceived, at least for some period.
Mr. Paul Rohdes: thanks; this may include the response to the other friend. The atheist believes in rationality much on scientific terms, and desires to see GOD as easily as some physical thing in hand. It is appreciated that atheists are not common believers and do claim to have the force of critical thinking behind them in testing things. But while assuming in their favour as crude a scientific argument as will disprove the existence of GOD, the atheist fails to apply a similar kind of test to prove the existence of God. For instance, on the discovery of “gravitation,” Newton just assumed on the facts that gravitation could be there, though it was neither seen nor touched. He said, “it is incomprehensible that inanimate and insensitive matter can exert a force of attraction on another without any {visible} contact, without any medium between them” {refer to Works of Bentley, vol. 3, p. 221}. In the sphere of critical thinking one can say that a universe so designed, without error, must have its designer; this theory of the theists depends much on the same lines of scientific observation as are relied on by atheists in other matters. However, in the post “Crisis of Faith” the learned author has already criticised those scientists who, in spite of being scientists, do believe in God. And finally: for atheists, if science is the measuring rod, which is constantly undergoing change, the atheists never hold that their scientific belief in disbelieving in GOD is ad hoc as of today; maybe the same science and critical thinking will tomorrow hold that GOD exists, and they shall have to believe in it. Then why the propaganda on finality of argument? Thanks
@Anna:
I mean that you’ve given up trying to be clever.
@Vassar:
The position that people may be optimally deluded, without a third alternative, is much less compelling.
The position that realistic human students of rationality can be trying to do their best (let alone do the impossible), while trying to deliberately self-delude, strikes me as outright false. It would be like trying to win a hot-dog eating contest while keeping a golf ball in your mouth.
It is this outright falsity that I refer to when I say that by the time you attempt to employ techniques at this level, you should already have given up on trying to be clever.
It’s easy to visualize Jeffreyssai deciding to not say something—in fact, he does that every time he poses a homework problem without telling the students the answer immediately. Can you visualize him lying to his students? (There are all sorts of clever-sounding reasons why you might gain a short-term benefit from it. Don’t stop thinking when you come to the first benefit.) Can you imagine Jeffreyssai deliberately deciding that he himself is better off not realizing that X is true, therefore he is not going to investigate the matter further?
Clearly, if everyone was always better off being in immediate possession of every truth, there would be no such thing as homework. But the distinction between remaining silent, and lying, and not wanting to know the truth even for yourself, suggests that there is more at work here than “People are always better off being in immediate possession of every truth.”
The problem with the idea that sometimes people are better off not knowing is that it has no practical impact on how an ideal rationalist should behave, even assuming it’s true. By the time you’ve learned something you’d be better off not knowing, it’s too late to unlearn it. Humans can’t really do doublethink, and especially not at the precision that would be required to be extremely rational while using it.
Well, I’ve just sat down and done one of those, and it was really difficult. Not so much because I was pushing against established beliefs (I had strong beliefs both ways, so it was more that any movement pushed somewhere) but because the largest worry I had, “Is this a fad?”, is hard to answer specifically because I’ve recently changed to become so much more Bayesian. I used to do daft things like “giving ideas a chance”. Consequently, I can’t look to my long and undistinguished history in order to glean hints. I already don’t do the obvious wrong stuff.
So the problem has to be phrased as “what sorts of irrationality would not be obvious to a beginner Bayesian?”
That’s a real poser. Just by being one, I’m in the worst possible place to guess.
(FWIW, the outcome was “continue, for now”.)
1) Do you believe this is true for you, or only other people?
2) If you know that someone’s cherished late spouse cheated on them, are you justified in keeping silent about the fact?
3) Are you justified in lying to prevent the other person from realizing?
4) If you suspect for yourself (but are not sure) that the cherished late spouse might have been unfaithful, do you think that you will be better off, both for the single deed, and as a matter of your whole life, if you refuse to engage in any investigation that might resolve your doubts one way or the other? If there is no resolving investigation, do you think that exerting some kind of effort to “persuade yourself” will leave you better off?
5) Would you rather associate with friends who would (a) tell you if they discovered previously unsuspected evidence that your cherished late spouse had been unfaithful, or who would (b) remain silent about it? Which would be a better human being in your eyes, and which would be a better friend to you?
Thanks for being selective in taste, and for desiring readers and reviewers of your personal choice. I am surprised that ordinary persons like me could disturb you, much less override your privilege of being a critical thinker within limits. Since you have opted not to publish my full post, you were under an obligation not to publish half of it either. The sooner you delete it, the better. Thanks for your rational patience.
Einstein had evidence; it just wasn’t experimental evidence. The discovery that your beliefs contain a logical inconsistency is a type of evidence.
I am better off (in most circumstances) deluding myself into believing that the weather in Maine on the 23rd of June 1865 was near what I think the seasonal average might have been for that decade, rather than memorising the exact temperature and rainfall if it were presented to me.
I believe this is true for most people, apart from climatologists.
I would rather not be around people who kept telling me true minutiae about the world and the cosmos, if they have no bearing on the problems I am trying to solve.
Am I justified in giving people a guess of the average temperature, if someone had told me earlier what the exact temperature was? Yes. If I never discarded data, then even assuming I had a 100% truth detector, people could quite easily DOS me by truth-flooding me, overrunning my memory buffers and preventing me from doing useful things.
There are extremely many truths, some more valuable than others.
There is no way to differentiate externally between someone telling a lie and someone forgetting. It has exactly the same consequence: people will give less accurate information than they could have done.
Nazir, a secret hack to prevent Eliezer from deleting your posts is here. #11.6 is particularly effective.
Religion is the classic example of a delusion that might be good for you. There is some evidence that being religious increases human happiness, or social cohesion. Its universality in human culture suggests that it has adaptive value.
Nope. There is some evidence that Christians in the USA are happier than atheists in the USA. But since that correlation doesn’t hold up in Europe, I prefer to interpret it as: America is bad for atheists.
Carl Schuman,
I keep posting the link, for a very simple reason.
Eliezer continues to post about the certainty of reductionism, while he has completely failed to investigate the evidence that reductionism cannot account for all of the observations.
He also continues to post snide remarks about the reality of psi phenomena. Again, he has completely failed to investigate the best evidence that he is wrong about this.
The post he wrote here shows a great commitment to intellectual integrity. And I honestly believe he means what he wrote here.
I suspect at some point Eli’s desire for the truth will overcome his ego identification with his current beliefs as well as his financial interest in preserving them.
I happen to have come across a PDF of Irreducible Mind which is temporarily available here.
Start with the introduction (the best part of the intro begins on page 23 (xxiii) ), then read chapter 3 which covers in detail a vast panoply of medical phenomena seen in clinical practice and in research which simply does not fit into the reductionistic framework.
Of course there are lots of other good books and thousands of important research papers, many of which are cited in the appendices of Irreducible Mind. But the advantage of this book, and especially chapter 3, is that the inability of the standard reductionistic dogmas to account for the evidence simply becomes crushingly obvious.
His point is necessarily correct, as well as empirically so.
No, he hasn’t. The best evidence strongly indicates that there are no ‘unusual’ phenomena that require explanation, and that psi does not exist. The fact that you have deluded yourself into believing otherwise does not constitute a failure on Eliezer’s part.
I guess I am questioning whether making a great effort to shake yourself free of a bias is a good or a bad thing, on average. Making a great effort doesn’t necessarily get you out of biased thinking. It may just be like speeding up when you suspect you’re going in the wrong direction.
If someone else chose a belief of yours for you to investigate, or if it were chosen for you at random, then this effort might be a good thing. However, I have observed many cases where someone chose a belief of theirs to investigate thoroughly, precisely because it was an untenable belief that they had a strong emotional attachment to, or a strong inclination toward, and wished to justify. If you read a lot of religious conversion stories, as I have, you see this pattern frequently. A non-religious person has some emotional discontent, and so spends years studying religions until they are finally able to overcome their cognitive dissonance and make themselves believe in one of them.
After enough time, the very fact that you have spent time investigating a premise without rejecting it becomes, for most people, their main evidence for it.
I don’t think that, from the inside, you can know for certain whether you are trying to test, or trying to justify, a premise.
See last week’s Science, Oct. 3 2008, p. 58-62: “The origin and evolution of religious prosociality”. One chart shows that, in any particular year, secular communes are four times as likely to dissolve as religious communes.
Caledonian,
Read chapter 3, then come back and explain why a reductionistic explanation best accounts for the phenomena described there. Because if you are not conversant with the evidence, you simply have no rational basis to make any comment whatsoever.
You also seem to be playing some kind of semantic games with the word “reductionism” which I’ll just note and ignore.
It’s important in these crisis things to remind yourself that 1) P does not imply “there are no important generally unappreciated arguments for not-P”, and 2) P does not imply “the proponents of P are not all idiots, dishonest, and/or users of bad arguments”. You can switch sides without deserting your favorite soldiers. IMO.
Matthew C -
You are advocating nonreductionism and psi at the same time.
Supposing that you are right requires us to suppose that there is both a powerful argument against reductionism, and a powerful argument in favor of psi.
Supposing that you are a crank requires only one argument, and one with a much higher prior.
In other words, if you were advocating one outrageous theory, someone might listen. The fact that you are advocating two simultaneously makes dismissing all of your claims, without reading the book you recommend, the logical response. We thus don’t have to read it to have a rational basis to dismiss it.
Phil: One of psi or non-reductionism being true would be a powerful argument in favor of the other.
Ben: great example.
First of all, great post Eliezer. If somebody holding that kind of standard thinks that cryonics is a good investment, I should someday take the time to investigate the question more deeply than I have.
Now, without lessening the previous praise, I would like to make the following remarks about friendly AI:
- The belief has long remained in your mind;
- It is surrounded by a cloud of known arguments and refutations;
- You have sunk costs in it (time, money, public declarations).
I do not know if it has emotional consequences for you or if it has gotten mixed up in your personality generally.
I think the following questions better translate my line of thought than any explanation I could formulate. Given a limited amount of EY’s resources:
- is friendly AI the best bet for “saving mankind”?
- if this “crisis of faith technique” (or similar rational approaches) were more popular, could alternatives other than FAI be envisioned?
- if FAI is required (the sole viable alternative), would it be worth the cost to invest time in “educating people” in such rational approaches (writing books, publicising, etc.) in order to gather resources/manpower to achieve FAI?
Maybe you have already passed through such reasoning and come to the answer that the time you currently invest in OB is the optimal amount of publicity...
In other words, if you were advocating one outrageous theory, someone might listen.
There is nothing outrageous about either psi or non-reductionism, except that both contradict your dogmatic belief structures of what the world must be like. In fact most human beings at most times (including today) have accepted both propositions as real. The word “crank” is used in your belief structure exactly the way “heretic” or “heathen” would be used in some previously dominant belief structures.
All I can say is that science is a method of inquiry into what is true, not a set of doctrines to be believed in without question. You are welcome to wave the flag of “I am rational” all you like, but it’s still blind flag-waving, cached thinking, and tribalism, no different from any other expression of this universal human tendency. And pontificating on psi and reductionism, while refusing to actually read evidence that contradicts your priors, is much more like religion than science. Eli damn well knows this, and I strongly suspect he will read the material I provided before pontificating again, even if many of his followers would rather beat the tribal drums. . .
As for your text—no one who seriously suggests that mental states affecting health and shamanistic death spells are evidence for either ‘non-reductionism’ or psi is worth taking the time to refute in detail.
(Hint: there are perfectly suitable existing explanations for both of those things. Immune system cells are highly responsive to neurotransmitters, and are thought to be either the evolutionary progenitors or descendants of neurons. There are obvious benefits to distributing resources differently when organisms are under stress, and the immune system is an obvious resource drain. As for the voodoo explanation, it’s called “the vagus nerve.”)
The nifty thing about science is that, when it possesses dogmas, it does a pretty good job of overturning them. Psi advocates have had plenty of opportunities to demonstrate real phenomena. They have failed. They have repeatedly, demonstrably, empirically failed. If you do not believe their failure constitutes a valid reason to reject their premises, what premises DO you believe science has had reason to reject?
I was raised a theist and came to no longer believe as an adult. One of the turning points was reading the Anglican confession of faith, and supposing what my own beliefs might look like to an Anglican, who was also a Christian, saved by Jesus just like me—just of a different variety.
Eventually I began to wonder what my life experiences might look like to an atheist—religion is above all an interpretive filter that we use to make sense of our lives. Although I knew that my beliefs in God were right, what would my life look like to me if I did not believe it?
Eventually, I could not help noticing that the nonbeliever point of view made better sense of the world.
If I had to attach labels to the personal qualities that changed my mind (excuse me if I sound vain), I’d say: curiosity—a drive to know the truth, whatever it may be; humility—from the first, I was prepared to accept that the Anglicans might be right and I wrong (and then the Catholics, the Muslims, the Hindus, and finally the atheists); refraining from judgment—being prepared to tolerate an open question; perhaps even courage. And … a decision to trust myself to come to a right conclusion—something that religions actively discourage. Perhaps we might call it “integrity”.
But deliberately setting out to have a crisis of faith? I can’t imagine doin … actually, yes I can. I did it every time I asked myself “what would an atheist think of this miracle, this prophecy, this teaching, this world event?”
No: that’s not the key. The key is not “what would an atheist think …”, but “what would I think, if I were an atheist?”. Admitting the possibility of change. Fully owning, if only for a moment, another point of view. Seeing the world with your own eyes from someone else’s point of view. Or at least, making an honest effort to.
Matthew C, I read the introduction and chapters 1 and 3. Are you sure you meant chapter 3? It does not seem to say what you think it says. Most of it is a description of placebos and other psychosomatic effects. It also discusses some events that are unlikely in isolation but seem trivially within the realm of chance given 100 years and approaching 7 billion people. There is also a paragraph with no numbers saying it can’t just be chance.
It feels kind of like asking everyone in the country to flip a coin 25 times, then calling the 18 or so people who have continuous streaks psychics. And ignoring that all-heads and all-tails both count. And maybe also counting the people who got HHHHHTTTTTHHHHHTTTTTHHHHH or HTHTHTHTHTHTHTHTHTHTHTHTH or such. Survivorship and publication bias and all that.
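(The arithmetic behind the “18 or so”: a quick sketch, with the roughly-300-million flipper count being my own round number for the US population.)

```python
# Roughly the US population, each flipping a fair coin 25 times.
n_people = 300_000_000
p_one_streak = 0.5 ** 25                   # chance of 25 heads in a row
expected_all_heads = n_people * p_one_streak
expected_either = 2 * expected_all_heads   # count all-tails streaks too
print(round(expected_all_heads, 1))        # ~8.9 people
print(round(expected_either, 1))           # ~17.9: the "18 or so" above
```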
There were a few things that might have fallen outside those obvious mistakes, but given the quality of analysis, I did not feel a pressing need to check that they reported their sources properly, that their sources reported theirs properly, and that there was no deception etc. involved. This Stevenson fellow might be worth pursuing, but it seems likely that he is just the archivist of the one-in-a-million events that continuously happen with billions of people. I feel compelled to read on, however, by the promise that not only identity but also skills can survive bodily death. I am picturing a free-floating capacity for freecell or ping-pong, just looking for somewhere to reincarnate. Sadly, I do not expect the text to be that fun.
If I could give you extra points I would. Many thanks for actually having read this stuff, then given us a clear explanation of what it entails… so we don’t have to bother :)
Odd, I’m a Christian daughter of two atheists. I guess I didn’t miss out after all.
I agree. I was raised atheist… went through a “religious phase” then figured myself out again. Being raised atheist doesn’t mean you haven’t been through all those crises too. :)
Atheism is believing that the state of evidence on the God question is similar to the state of evidence on the werewolf question.
s/werewolf/Easter bunny/ IMHO.
Would that apply to someone with a particularly high prior on the werewolf question? So, you would agree that anyone who believes that the state of evidence on “the God question” is more positive than the state of evidence on the “werewolf question” should consider labeling themselves an agnostic, or a theist?
And, I presume that you believe that one’s current belief in the state of evidence would be controlled by 1) verifiable general evidence, 2) experience, and 3) priors on both questions?
Then we’re in agreement: you should (apparently) call yourself an atheist, and I should call myself a Christian, as we differ on #2 and #3. (not that theism = Christian, but that goes back to #2).
“I became a Christian because I was a Bayesian first. I know there are others like me. I saw and experienced evidence that caused me to positively update my belief.
Now if you don’t like that argument, then please tell me how can anyone become an atheist via Bayesian updating? Can your posterior really go to a point mass at zero (belief in God)? If so, please tell me what prior you were using. If not, please tell me how you define atheism.”
And how can your probability go to one? You erect a straw man, sir. My probability that there is a god is not exactly zero, any more than yours is exactly one. If God were to send an actual angel down, right now, and make my dinner vanish with his magic stick (it’s in Genesis, somewhere), then that would shift my probability.
But as things stand, I am confident that there are no gods.
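(A minimal sketch of that point in odds form; the 10:1 likelihood ratio is invented purely for illustration.)

```python
# Bayesian updating in odds form: posterior odds = prior odds * likelihood ratio.
# No finite run of evidence drives a probability to exactly 0 or 1.
odds = 1.0                # start at 50/50 on the hypothesis "God exists"
for _ in range(20):
    odds *= 0.1           # each observation taken as 10x likelier given no God
p = odds / (1.0 + odds)
print(p)                  # ~1e-20: confident atheism, but never a point mass at 0
```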
Psi advocates have had plenty of opportunities to demonstrate real phenomena. They have failed. They have repeatedly, demonstrably, empirically failed.
That is just talking point bull**. Go read the literature cited in Irreducible Mind, Entangled Minds or many other books.
Your problem is you won’t believe anything unless some “magic man” shows up on a stage in front of James Randi and telepathically bends spoons in front of all the world. I agree with you that people who can do that do not appear to exist, and those who make those claims universally appear to be fraudulent.
You’re not interested in evaluating the evidence that actually exists, because it isn’t dramatic and theatrical enough for you. There is a huge body of scientific evidence that psi effects occur—the fact that you are utterly unfamiliar with it is irrelevant to its merits.
no one who seriously suggests that mental states affecting health and shamanistic death spells are evidence for either ‘non-reductionism’ or psi is worth taking the time to refute in detail.
You obviously missed the salience of much of the chapter, which ranged from effects like voodoo death, which could perhaps be explained reductionistically through nocebo effects; to effects like tumors disappearing because of placebo influence, which are much more difficult to account for; to effects like hypnotic induction of burns and blisters in particular locations, which are exceedingly difficult to explain reductionistically; to effects like skin writing and remote staring experiments, which cannot be explained reductionistically in any plausible manner.
In any event, most reductionists refuse to accept any role for placebo effect beyond subjective comfort, so certainly their views will be washed away by the torrent of evidence in Chapter 3.
I don’t need to read the book. I believe that psi effects are not real, because if they were, they would already be part of accepted science.
It’s not a matter of being closed-minded or open-minded. I’m just not accepting your book author as a legitimate authority. Most things I believe, I believe because they are asserted by authorities I accept. For example, I have never personally seen an experiment performed that establishes that the Sun is made primarily of hydrogen and helium, that an atom of gold contains seventy-nine protons, that George Washington was the first President of the United States, that light is quantized, or many other things I learned in school.
My criterion is simple: on matters in which I have no special expertise or direct knowledge, I simply accept the view of the majority of those I consider legitimate experts. If you want to persuade me that psi is real, go persuade the Nobel Prize committee; anyone who could establish it through controlled, repeatable experiments would certainly be deserving of the Nobel Prize in Physics.
In other words, I rely on people like James Randi and Joe Nickell to do the investigating for me. Convince them, and I’ll believe in psi. Until then, don’t go shoving your data in my face, because I’ll just conclude that your data, or your interpretation of it, is wrong.
@Matthew C.
Do you mean by “remote staring experiments” those of Wiseman/Schlitz?
It seems that when properly controlled, they produced no statistically significant effect: http://forums.randi.org/archive/index.php/t-43727.html
So here I am having been raised in the Christian faith and trying not to freak out over the past few weeks because I’ve finally begun to wonder whether I believe things just because I was raised with them. Our family is surrounded by genuinely wonderful people who have poured their talents into us since we were teenagers, and our social structure and business rests on the tenets of what we believe. I’ve been trying to work out how I can ‘clear the decks’ and then rebuild with whatever is worth keeping, yet it’s so foundational that it will affect my marriage (to a pretty special man) and my daughters who, of course, have also been raised to walk the Christian path.
Is there anyone who’s been in this position—really, really invested in a faith and then walked away?
Quite a few. Dan Barker was a Christian minister before he walked away. But the truth is, it is much harder to walk away from a religion when one is married and has a family. And sometimes, it can destroy families to even voice doubts.
Christianity isn’t the only religion that has this aspect. Among Orthodox Jews there’s a common refrain that too many are leaving the faith and a standard suggested solution for this is to make kids marry earlier because once they are married they are much more likely to stay in the faith.
But whenever this sort of thing comes up it is important to ask how much the social structures really depend on the religion. Will your husband love you less if you tell him you don’t believe? Will your friends no longer be friendly? Will they stop providing social support? And if they will stop being friendly on such a basis, what makes you see them as genuine friends in the first place?
There’s no question that these issues are deep and difficult and should probably be handled slowly. I’d recommend maybe sending a version of your question to The Friendly Atheist; one of the writers there has a column (Ask Richard) where he regularly answers questions much like yours, and if your question gets posted then it is likely to get a large amount of input in the comment threads from people who went through similar circumstances. (It might be worth looking in the archives also to see if they’ve had similar letters in the past. I think they have, but I don’t have a link to one off the top of my head.)
Daniel Everett was a missionary to the Piraha of Brazil and a husband and father.
This is exactly the situation where the Litany of Gendlin seems most questionable to me. I haven’t been in your situation. One option for dealing with the situation might be to learn to lie really well. It might be the compassionate thing to do, if you believe that the people you interact with would not benefit from hearing that you no longer believe.
(Yes, I’m aware that I’m responding to a stale post.)
I don’t believe I should lie to you (or anyone) because there might be one way you might not benefit from my honest and forthright communication. So, unfortunately, I’ve decided to reply to you and tell you that your advice is terrible, however well-intentioned. You seem to think that if you can imagine even one possible short-term benefit from lying or not-disclosing something, then that’s sufficient justification to do so. But where exactly is the boundary dividing those things that, however uncomfortable or even devastating, must be said or written, and those things about which one can deceive or dupe those one loves and respects?
‘Radical honesty’ isn’t obviously required, but I would think that honesty about fundamental beliefs would be more important than what is normally considered acceptable dishonesty or non-disclosure for social purposes.
That’s not what I said. I said several things, and it’s not clear which one you’re responding to; you should use quote-rebuttal format so people know what you’re talking about. Best guess is that you’re responding to this:
You sharpened my “might be” to “is” just so you could disagree.
This is a rhetorical question, and it only makes sense in context if your point is that in the absence of such a boundary with an exact location that makes it clear when to lie, we should be honest. But if you can clearly identify which side of the boundary the alternative you’re considering is on because it is nowhere close to the boundary, then the fact that you don’t know exactly where the boundary is doesn’t affect what you should do with that alternative.
You’re committing the slippery slope fallacy.
Heretics have been burned at the stake before, so compassion isn’t the only consideration when you’re deciding whether to lie to your peers about your religious beliefs. My main point is that the Litany of Gendlin is sometimes a bad idea. We should be clear that you haven’t cast any doubt on that, even though you’re debating whether lying to one’s peers is compassionate.
Given that religious relatives tend to fubar cryonics arrangements, the analogy with being burned at the stake is apt. Religious books tend to say nothing about cryonics, but the actual social process of religious groups tends to be strongly against it in practice.
(Edit: This all assumes that the Litany of Gendlin is about how to interact with others. If it’s about internal dialogue, then of course it’s not saying that one should or should not lie to others. IMO it is too ambiguous.)
The Litany of Gendlin is specifically about what you should or should not believe, and your feelings about reality. It says nothing about telling people what you think is true — although “owning up to it” is confusingly an idiom that normally means admitting the truth to some authority figure, whereas in this case it is meant to indicate admitting the truth to yourself.
And that’s why I wrote “You seem to think that …”; I was describing why I thought you would privilege the hypothesis that lying would be better.
You’re absolutely right that learning to lie really well and actually lying to one’s family, the “genuinely wonderful people” they know, everyone in one’s “social structure” and business, as well as one’s husband and daughter MIGHT be the “compassionate thing to do”. But why would you pick out exactly that option among all the possibilities?
Actually it wasn’t a rhetorical question. I was genuinely curious how you’d describe the boundary.
The reason why I think it’s a justified presumption to be honest to others is in fact because of a slippery slope argument. Human beings’ minds run on corrupted hardware, and deception is dangerous (for one reason) because it’s not always easy to cleanly separate one’s lies from one’s true beliefs. But your implication (that lying is sometimes right) is correct; there are some obvious or well-known Schelling fences on that slippery slope, such as lying to the Nazis when they come to your house while you’re hiding Jews.
Your initial statement seemed rather cavalier and didn’t seem to be the product of sympathetic consideration of the original commenter’s situation.
Have you considered Crocker’s rules? If you care about the truth or you have something to protect then the Litany of Gendlin is a reminder of why you might adopt Crocker’s rules, despite the truth possibly not being the “compassionate thing to do”.
Rationality can’t be wrong, but it can be misused.
“People can stand what is true, for they are already enduring it.” is technically correct, but omits factors relevant to the situations when most people consider lying to be necessary. The fact that you know something is true is itself a truth.
So if you reason “they have to endure the truth whether I tell them it or not”, you also have to acknowledge that by telling them you’ve added a second-order truth, and they now have to endure that second-order truth that they didn’t before. The implication that telling someone the truth doesn’t change anything because it didn’t change the original truth… isn’t true.
Of course most people don’t think in terms of “telling someone a truth adds another truth”, but if you try to analyze it, it turns out that it does.
Virtually nobody “cares about the truth” in the absolute sense needed to make that statement logically correct. Most people care about the truth as one of several things that they care about, which need to be balanced against each other.
As a matter of logic, nobody caring about the truth (in whatever sense is meant by the claim) is sufficient to ensure that the statement is always correct, since the conditional is then vacuously true (the part replaced by the ellipsis need not even be resolved). (The problem is that it is then probably useless.)
Because it’s a possibility that the post we’re talking about apparently did not consider. The Litany of Gendlin was mentioned in the original post, and I think that when interpreted as a way to interact with others, the Litany of Gendlin is obviously the wrong thing to do in some circumstances.
Perhaps having these beautifully phrased things with a person’s name attached is a liability. If I add a caveat that it’s only about one’s internal process, or it’s only about communication with people that either aspire to be rational or that you have no meaningful relationship with, then it’s not beautifully phrased anymore, and it’s not the Litany of Gendlin anymore, and it seems hopeless for the resulting Litany of Tim to get enough mindshare to matter.
I’m not curious about that, and in the absence of financial incentives I’m not willing to try to answer that question. There is no simple description of how to deal with the world that’s something a reasonable person will actually want to do.
May I offer a better solution than lying? Perhaps helping to uncover the lies being told to Jo by family and friends, which they knowingly or unknowingly communicate, could benefit all involved?
Oh, Jo. It is so painful to be where you are. I did that growth myself. Sue Monk Kidd, who wrote The Secret Life of Bees, also wrote several books about her expansion of faith beyond Christian beliefs. The Dance of the Dissident Daughter was a very meaningful book for me when I was early in the path. I have, I think, stepped out of the circle within which I was raised, Catholicism, and now believe that all faiths are related. I’ve raised my kids to believe that you don’t judge the face of god that presents itself to other people; you simply try to find the one that you’re most comfortable with. My son gave me a bumpersticker he found: ‘God is too big for any one religion’. I will pray for you. You are doing the right thing by searching.
“I do not feel obliged to believe that that same God who has endowed us with senses, reason, and intellect has intended us to forgo their use.” God’s “thoughts are higher than our thoughts,” but “this is life eternal, that they might know thee the only true God, and Jesus Christ, whom thou hast sent,” and we are approaching “a time to come in which nothing shall be withheld.”
The easiest way to have a crisis of faith is to go out and commit sins, but then the question is: are you actually questioning the faith, or justifying your sins?
As for everyone who says that delusion is better than truth, perhaps not for you but for others: what makes you think you are different from anyone else? Why do you want truth for yourself but think others shouldn’t have it?
“Self-honesty is at its most fragile when we’re not sure which path is the righteous one.”
Are we ever “sure” of anything (especially ethics)?
For the past three days I have been repeatedly performing the following mental operation:
“Imagine that you never read any documents claimed to be produced by telepathy with extraterrestrials. Now gauge your emotional reaction to this situation. Once calm, ask yourself what you would believe about the world in this situation. Would you accept materialism? Or would you still be seeking mystical answers to the nature of reality?”
I am still asking myself this question. Why? I am struggling to figure out whether or not I am wrong.
I believe things that raise a lot of red flags for “crazy delusion.” Things like:
“I came from another planet, vastly advanced in spiritual evolution relative to Earth, in order to help Earth transition from the third dimension to the fourth dimension. My primary mission is to generate as much light and love as possible, because this light and love will diffuse throughout Earth’s magnetic fields and reduce the global amount of strife and suffering while helping others to achieve enlightenment. I am being aided in this mission by extraterrestrials from the fourth dimension who are telepathically beaming me aid packages of light and love.”
These beliefs, and many others like them, are important to my worldview and I use them to decide my actions. Because I like to think of myself as a rational person, it is a matter of great concern to me to determine whether or not they are true.
I have come across nobody who can put forth an argument that makes me question these beliefs. Nobody except for one person: Eliezer Yudkowsky. This man did what no other could: he made me doubt my basic beliefs. I am still struggling with the gift he gave me.
This gift is that he made me realize, on a gut level, that I might be wrong, and gave me motivation to really figure out the truth of the matter.
So many intelligent people believe patently absurd things. It is so difficult to escape from such a trap once you have fallen into it. If I am deluded, I want to be one of the fortunate ones who escaped from his insanity.
The thing is, I really don’t know whether or not I am deluded. I have never before been so divided on any issue. Does anybody have anything they’d like to add, which might stimulate my thinking towards resolving this confusion?
There are several things to ask about beliefs like this:
Do they make internal sense? (e.g. “What is the fourth dimension?”)
Do they match the sort of evidence that you would expect to have in the case of non-delusion? (e.g. “Do you have any observable physical traits indicating your extraterrestrial origin? Would someone looking into records of your birth find discrepancies in your records indicating forgery?”)
Do they try to defend themselves against testing? (e.g. “Do you expect to illuminate a completely dark room at night by generating light? Would you expect to exist happily in psychological conditions that would harm normal humans by subsisting on aid packages full of love?”)
Do they have explanatory power? (e.g. “Has there, as a matter of historical fact, been a sudden and dramatic reduction in global strife and suffering since the date of your supposed arrival?”)
Do they have a causal history that can be reasonably expected to track with truth across the entire reference class from an outside view? (e.g. “Did you receive your information via private mental revelation or a belief from as long ago as you can remember, similar to the beliefs of people you do consider crazy?”)
Hi, Alicorn!
Yes. They are drawn from the material at http://lawofone.info/ . The philosophy presented there is internally consistent, to the best of my understanding.
There is no physical evidence. All of the “evidence” is in my head. This is a significant point.
There are a variety of points in the source document which could be interpreted as designed to defend its claims against testing. This is a significant point.
I am not aware of any physically testable predictions that these beliefs make. This is a significant point.
The causal history of these beliefs is that I read the aforementioned document, and eventually decided that it was true, mainly on the basis of the fact that it made sense to my intuition and resonated personally with me. This is a significant point.
Thanks for asking!
Currently reading Law of One. I’m not sure what the mechanism is, but it seems to involve people receiving telepathic messages (from an entity named Ra) and speaking them aloud. I would like to note that I have experienced messages coming into my head, seemingly from outside (either as voices or as an impulse to write), and can even occasionally cause it voluntarily. Their content can be partially unexpected, but it never contains information I could test independently. I consider this an entertaining misbug in my brain, not evidence of an external telepathic entity.
I understand your point, but it also reminded me of this :-)
Good luck! It may help to remember that this sort of thing seems to be a failure mode of the human mind. I know someone who had a manic episode during which he believed he was destined to bring enlightenment to the world. (He also believed he could control the weather.)
In case you haven’t come across this already, go here and read the paragraph that starts “But it is possible to do better, even if your brain malfunctions on you.”
It’s interesting realising how many of these generally apply to the ideas “I don’t want a sex change” and “I’m happy with my sexual orientation / current relationship / current job / current social circle”; but specifically, I’ve noticed that transitioning from one sex to another seems to require that sort of heroic rational effort.
My belief in the tenets of the Church of Jesus Christ of Latter-Day Saints has these warning signs.
Years later, I’m reading this.
I’ve been reading through the Major Sequences, and I’m getting towards the end of “How to Actually Change Your Mind” with this growing sense of unease that there are all these rational principles that I could use but know not to what end, and I think I might finally have found somewhere to apply them: my political convictions.
I’m not going to say which direction I lean, but I lean that direction very strongly. My friends lean that way. My parents do not lean that way as much as I do but they are in that general direction.
And I realize that I may not lean that way because it is the rational way to approach a well-run country, but because I am merely used to it.
So perhaps, one of these weeks, I will sit down and have a long hard think on my politics. I think I would prefer to stay leaned the way I currently am—but I would, wouldn’t I.
You’ll be much more vulnerable to biases if you do this alone.
It would be ideal if you could find someone who’s more moderate or opposite your view who is also wanting to consider politics objectively, and you talked about things with them. This could backfire if they’re not very patient, and you shouldn’t let it become an excuse for delaying, but if you can find someone suitable who is like this then I think that would be better.
I started letting go of my faith when I realized that there really isn’t much Bayesian evidence for it. Realizing that the majority of the evidence needed to believe something is used just to isolate that something out of all the other possible beliefs finished it off. But I do have one question: If Jesus wasn’t magic, where did the Bible even come from? Lee Strobel “proves” that Jesus died and came back from the dead, but his proofs are based on the Bible. Why was the Bible so widely accepted if there wasn’t anything extra-special about Jesus after all?
Some people wrote it down. That’s also the Christian story of where the Bible came from.
There probably was something extra-special about Jesus, in the sense that he was highly charismatic, or persuasive, and so on. And his followers probably really did think that he’d come back from the dead, or at least that his body had mysteriously vanished. But none of that adds up to magic or divinity. Look at people in the current day—convinced (rightly or wrongly) in the existence of aliens, or homeopathy, or whatever else. “If L. Ron Hubbard wasn’t magic, where did Dianetics come from?”
Alternatively, consider Joseph Smith. He’s far more recent and far better attested than Jesus, and he also had a loyal group of followers who swore blind that they’d seen miracles—even the ones who later broke with him—and who, after his death, carried on his teachings and founded a religion with the utmost seriousness and in the face of extreme hardship and sacrifice. Yet chances are you’re not a Mormon (or, if you are a Mormon, consider Mohammed ibn Abdullah). Apply the same thinking to Jesus’s life as you do to that of Joseph Smith, and see where it takes you.
https://en.wikipedia.org/wiki/Christ_myth_theory
Neither sufficient nor necessary:
The origins of Christianity become more mysterious, not less, if there never was a Jesus.
We don’t need to tie ourselves to a fringe hypothesis to posit non-supernatural origins for the Gospels.
Would you say the origins of other religions become more mysterious if there never were whatever magical beings those religions posit? Would you think it likely that Guanyin was a real human of unknown gender? Do the origins of fictional stories become more mysterious if the fictitious characters never existed in the flesh? Did Paul Bunyan exist, given that there were similar lumberjacks?
You’re not supposed to tie yourself to any hypothesis, even if mainstream, but rather update your probability distributions. Bits of the NT weren’t written until long enough after the supposed death of Jesus that people wouldn’t have been like, ‘Who you talkin’ about?′ And I doubt they would’ve cared whether the character existed, like no one cares whether Harry Potter existed, because it’s the stories that matter.
Yes, of course.
The least mysterious explanation of Paul Bunyan stories is that there really was a Paul Bunyan. And the closer the real Paul Bunyan hews to the Bunyan of the stories, the smaller the mystery. P(stories about Bunyan | Bunyan) > P(stories about Bunyan | !Bunyan).
But just because a story is simple, doesn’t necessarily make it likely. We can’t conclude from the above that P(Bunyan | stories about Bunyan) > P(!Bunyan | stories about Bunyan).
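(A toy calculation of that distinction, with purely illustrative numbers: even with the stories ten times likelier given a real Bunyan, a small enough prior keeps the posterior well below 1/2.)

```python
# Illustrative numbers only.
p_bunyan = 0.01                   # prior P(Bunyan)
p_stories_if_bunyan = 0.50        # P(stories | Bunyan)
p_stories_if_not = 0.05           # P(stories | !Bunyan), 10x smaller

p_stories = (p_stories_if_bunyan * p_bunyan
             + p_stories_if_not * (1 - p_bunyan))
p_bunyan_given_stories = p_stories_if_bunyan * p_bunyan / p_stories
print(round(p_bunyan_given_stories, 3))   # ~0.092: up from 0.01, still << 0.5
```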
You left out the ‘magical’ part of my question. If magical beings exist(ed), then everything becomes more mysterious. That’s partly why we don’t pester JK Rowling about what extra-special boy Harry Potter was based on. We don’t even suspect comic superheroes like Batman, who has no magic, of having been based on a real-life billionaire. We certainly don’t have scholars wasting time looking for evidence of ‘the real Batman.’ Modern stories of unlikely events are easily taken as imaginings, yet when people bucket a story as ‘old/traditional’, for some people that bucket includes ‘characters must’ve been real persons’, as if humans must’ve been too stupid to have imagination. https://en.wikipedia.org/wiki/Fakelore
No, I didn’t leave that part out.
Of course magic makes everything else more mysterious, i.e. P(magical Jesus) is infinitesimal. But P(non-magical Jesus) is not low. We do ask JK Rowling what non-magical boy inspired Harry Potter.
I guess you mean that we could and it wouldn’t be obviously silly, with which I agree. But, for what it’s worth, it never crossed my mind to assume that Harry Potter was based on any specific non-magical boy. The characteristics he has that aren’t essentially dependent on story-specific things (magic, being the prime target of a supervillain, etc.) seem pretty ordinary and not in any particular need of explanation.
I wouldn’t be astonished if it turned out that there was some kid Rowling knew once whom she used as a sort of basis for the character of Harry Potter, but I’d be a bit surprised. And if it did, I wouldn’t expect particular incidents in the books to be derived from particular things that happened to that child.
In particular, I wouldn’t say that the simplest (still less the most likely) explanation for the Harry Potter stories involves there being some non-magical child on whom they are based.
I don’t think any of this has much bearing on whether the simplest explanation for stories about Jesus, Muhammad, the Buddha, Zeus, etc., involves actual historical characters on which they’re based. The answer to that surely varies a lot from case to case. (FWIW I’d say: historical Jesus of some sort likely but not certain; historical Muhammad almost certain; historical Buddha likely but not certain; historical Zeus-predecessor very unlikely. But I am not expert enough for my guesses to be worth anything.)
Historical Muhammad not certain: http://www.wsj.com/articles/SB122669909279629451 . Of course, people have set about trying to protect minds from a ‘fringe’ Bayesian view: “Prof. Kalisch was told he could keep his professorship but must stop teaching Islam to future school teachers.” In case anyone missed it, Richard Carrier explicitly used Bayes on the question of the historical Jesus. I don’t know if Kalisch used Bayes, but his language conveys an intuitive Bayesian update.
The bearing of fictional stories is simple: calculate the probability of a historical X based on the practically 100% probability that human imagination was a factor (given that the stories contain highly unlikely magic, like known-to-be-fiction stories do, and were written long after X supposedly lived). Note that that still leaves out the probabilities of motivations for passing fiction off as nonfiction, as Joseph Smith or L. Ron Hubbard did. Once you figure in probabilities including motivations and iterations of previous religious memes, it becomes increasingly unlikely that X existed. Paul Bunyan, AFAIK, wasn’t based on previous memes for controlling people, nor were the stories used to control people, so I wouldn’t be suspicious if someone believed the stories started based on someone real. When people insist religious characters were real, OTOH, I become suspicious of their motivations, given the unlikelihood that they examined evidence and updated Bayesian-like.
@Salemicus: Citation for “We do ask JK Rowling what non magical boy inspired Harry Potter”?
What’s your comparison baseline? Compared to the screen in front of your face, he’s not certain. Compared to pretty much anyone born in the VI century, he is quite certain.
Then why don’t you just point to evidence of his existence being more likely than others’? We have bodily remains, intact own writings, or historical records made during the lives of many born in the 6th century (e.g. Columbanus, Pope Gregory I, the founding emperor of the Tang Dynasty, Radegund, Venantius Fortunatus, Theodora). So why don’t we have any one of those types of evidence about Muhammad?
You don’t count the Koran as “intact own writings”? :-) Yes, I am well aware that it was compiled quite some time after his death from a collection of records and that, by tradition, Muhammad was illiterate.
The Arab society around the VII century wasn’t big on writing—the cultural transmission was mostly oral. However, external sources mention Muhammad already in 636 AD.
You’re referring to the phrase “many villages were ravaged by the killing of the Arabs of Muhammad”, written after Muhammad’s supposed death, “Arabs of Muhammad” meaning ‘Muslims’ the way “people of Christ” means ‘Christians’. That Muslims and Christians existed doesn’t mean the characters they invoked to justify violence, supremacism, etc. existed as actual humans.
Criteria for considering Muhammad and Jesus near certain are so lax, we’d have to consider some Greek/Roman gods near certain.
So you’re arguing that by 632 the violent and supremacist Arab hordes were justifying their violence and supremacism by inventing an imaginary prophet who lived merely a few decades before (so some of “his” contemporaries were still alive). Because they were so tricksy they made him not a terribly appealing character—an illiterate merchant’s apprentice who married a cougar and then went a bit crazy—and attributed to him a whole book of poetry clearly written while on acid. And hey—it worked! Their creation (I guess it was a joint effort—takes a village and all that..?) was so successful that it caused the fastest massive conquest in human history.
An interesting theory.
The criteria for the historicity of Greek/Roman Gods and Muhammad/Jesus are not the same.
The Roman Gods are for the most part just Romanized versions of Greek Gods. If you examine the different characteristics closely, then the Greek Gods have much in common with Gods in the pantheons of other Indo-European peoples. For example Zeus is the God of Thunder, Thor is the God of Thunder in Germanic mythologies, and Perun serves the same purpose in Slavic mythologies.
Based on these similarities you can trace these stories to the stories of some common ancestral Gods of the old Indo-European nomads on the steppes of Russia and the Ukraine… So these stories are so ancient that any link to anyone living whether man or whatever is highly unlikely.
However, stories of Jesus and Muhammad are much more likely, considering they arose at times when writing had already been invented, and shortly after their deaths we can see stirrings of historical events linked to them. With Jesus, we have historical writing about him maybe 50 years after his death, including by his enemies. So a historical figure of Jesus is highly likely, although the miracles and stuff attributed to him are made up.
With Muhammad the probabilities are even higher. Shortly after his death, there were conquests of neighboring lands carried out by people who said they were his friends (meaning they had seen him alive). While most of the stories about him are probably highly exaggerated, there most likely was a historical Muhammad.
I did say almost certain. My impression—which, as I said above, is no more than that and could easily be very wrong—is that the Jesus-myth theories require less “conspiracy” than the Muhammad-myth ones.
Interestingly, after looking over Wikipedia a bit, apparently there may have been a Paul Bon Jean on whom the earliest Paul Bunyan tales could have been based… a big lumberjack, but with “big” being more like six to seven foot and less like sixty to seventy foot.
Hmmm. To mess around with equations a bit… what can we say about P(Bunyan | stories about Bunyan) and P(!Bunyan | stories about Bunyan), given P(stories about Bunyan | Bunyan) > P(stories about Bunyan | !Bunyan)?
Let’s generalise it a bit (and reduce typing). What can we say about P(A|B) and P(!A|B) when P(B|A) > P(B|!A)?
Consider Bayes’ Theorem: P(A|B) = [P(B|A)*P(A)]/P(B). Thus, P(B) = [P(B|A)*P(A)]/P(A|B).
Likewise, P(!A|B) = [P(B|!A)*P(!A)]/P(B).
Now, P(!A) = 1-P(A). So:
P(!A|B) = [P(B|!A)*{1-P(A)}]/P(B)
Solve for P(B):
P(B) = [P(B|!A)*{1-P(A)}]/P(!A|B)
Since P(B) = [P(B|A)*P(A)]/P(A|B):
[P(B|A)*P(A)]/P(A|B) = [P(B|!A)*{1-P(A)}]/P(!A|B)
Since P(B|A) > P(B|!A):
[P(B|A)*P(A)]/P(A|B) > [P(B|!A)*P(A)]/P(A|B)
Therefore:
[P(B|!A)*{1-P(A)}]/P(!A|B) > [P(B|!A)*P(A)]/P(A|B)
Dividing both sides by P(B|!A) (which must be positive, or B could never be observed given !A):
{1-P(A)}/P(!A|B) > P(A)/P(A|B)
Cross-multiplying: {1-P(A)}*P(A|B) > P(A)*P(!A|B)
…which means that either 1-P(A) > P(A) or P(A|B) > P(!A|B), and quite possibly both; and whichever of these two inequalities is false (if either), the ratio between its two sides is closer to 1 than in the one that is true.
To return to the original example: either P(Bunyan | stories about Bunyan) > P(!Bunyan | stories about Bunyan), OR P(!Bunyan) > P(Bunyan).
Also, if P(Bunyan | stories about Bunyan) > P(!Bunyan | stories about Bunyan) is false, then it must be true that P(Bunyan|stories about Bunyan) > P(Bunyan).
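(For anyone who wants to check that algebra numerically, here is a small brute-force sketch; the random sampling scheme is mine, added for illustration.)

```python
import random

# Check: if P(B|A) > P(B|!A), then P(A|B) > P(A), and moreover either
# P(A|B) > P(!A|B) or P(!A) > P(A) (possibly both).
random.seed(0)
for _ in range(100_000):
    p_a = random.uniform(0.01, 0.99)               # prior P(A)
    p_b_a = random.uniform(0.01, 1.0)              # likelihood P(B|A)
    p_b_not_a = p_b_a * random.uniform(0.0, 0.99)  # force P(B|A) > P(B|!A)
    p_b = p_b_a * p_a + p_b_not_a * (1 - p_a)      # total probability of B
    p_a_b = p_b_a * p_a / p_b                      # Bayes' Theorem
    assert p_a_b > p_a                             # the evidence raised P(A)
    assert (p_a_b > 1 - p_a_b) or (1 - p_a > p_a)  # the derived disjunction
print("all checks passed")
```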
Your second point is clearly true. The first seems false; Christianity makes much more sense from a Greco-Roman perspective if Jesus was supposed to be a celestial being, not an eternal unchanging principle that was executed for treason. And the sibling comment leaves out the part about first-century Israelites wanting a way to replace the ‘corrupt,’ Roman-controlled, Temple cult of sacrifice with something like a sacrifice that Rome could never control.
Josephus saw the destruction of that Temple coming. For others to believe it would happen if they ‘restored the purity of the religion’ only requires the existence of some sensible zealots.
Broadly speaking, I agree, and Jesus mythicist Richard Carrier would also agree.
But reading some of his stuff made me upgrade the idea that there was no historical Jesus from “almost certainly false” to “plausible”. (Carrier has written a couple of books on this—Proving History: Bayes’s Theorem and the Quest for the Historical Jesus and On the Historicity of Jesus: Why We Might Have Reason for Doubt—but I haven’t read those, only some stuff available on the web.)
(To make the following paragraph more concise I’ll omit hedge phrases like “according to Carrier”. And even Carrier doesn’t regard this as certain, only more likely than not.)
The writings about Jesus that come the closest to being contemporary with his putative lifetime are Paul’s seven or so authentic letters. Paul, who converted to Christianity after Jesus came to him in a vision sometime around 33 CE, never claims to have met the historical Jesus, and never unambiguously talks about Jesus as a human who lived on Earth. (E.g.: Paul talks about Jesus being crucified, but this crucifixion took place in some celestial realm, not on Earth. Paul mentions “James the Lord’s brother”, but this means not that James was a literal brother of Jesus of Nazareth but that James was a fellow Christian, the way a modern Christian might refer to their “brothers and sisters in Christ”.)
I think this fails in the case where the experts are infected by a meme plague.
Isn’t this a Fully General Counterargument, though? Climate change deniers can claim that climate experts are ‘infected by a meme plague’. Creationists can claim anyone who accepts evolution is ‘infected by a meme plague’. So on and so forth.
It’s not a counterargument, it’s an observation about the limits of the maxim quoted. And while it can certainly be misapplied, are you going to argue that a memetic plague never happens?
Then I may have misunderstood the intention of the phrase.
As an observation about the limits of the maxim, I agree with it. And no, I’m not going to argue that a memetic plague never happens.
I am, however, going to argue that a memetic plague is hard to identify, making this observation very difficult to actually apply with any reliability. It’s just too easy—if I see a bunch of experts in the subject all saying something that I disagree with—for me to think “they’re infected by a memetic plague”. It’s so much more comforting to think that than to think “maybe I’m wrong”—especially when I already have some evidence that seems to say that I am right. So, while this observation can be applied correctly, it would be far, far too easy to misapply. And if I were to misapply it—I would have no idea that I am, in fact, misapplying it.
As a general observation, then, I cautiously agree. As a specific argument in virtually any debate, I deeply mistrust it.
I hope that makes my position clearer.
We know how religion spreads. We know it well enough that when it is obvious enough that the “experts” are basing their “expertise” on religion, we can ignore it without worrying that we are just dismissing the experts because doing so is comforting.
It’s not as if the way religion spreads is seriously in question.
I’m not sure that you do.
From your previous post:
If this were true—and if it were an exhaustive list of the predominant ways—then I would expect to see the following:
Parent-to-child transmission only works if the parents are Christian. Social ostracisation only works if a majority of a given person’s possible social acquaintances are Christian.
Thus, the only means on the list of introducing it into a new area is by the sword.
Thus, I would expect missionaries either to have been abandoned, or to be given a sword as standard equipment on setting out. I do not see this.
Furthermore, I would expect that, in countries where it is not the majority religion, it would slowly fade and die (as social ostracism is used against it by the majority).
Now, I am not saying that it is never spread by such means. (Fortunately, ‘by the sword’ appears to have been largely abandoned in recent history). But assuming it to be an exhaustive list does not appear to match reality—there seems to be a rather large gap where a single missionary, armed with nothing more than information and presumably a fairly persuasive tongue, can go into a large enough group of humans who have little or no previous knowledge of religion and end up persuading a number of them to join.
You would expect (peaceful) missionaries to be abandoned (at least as a tool for spreading Christianity to places where there is no Christianity) if there were a careful effort to track their effectiveness. I do not believe there usually is. Is your impression different?
If you look at the places where there are a lot of Christians, they do seem to match up pretty well with (1) where the Roman Empire was plus (2) places colonized by countries that used to be part of the Roman Empire.
One obvious counterexample is Korea, which (I think) is evidence that missionaries can sometimes introduce Christianity to a new place with long-term success. But what others are there?
(Incidentally, I think your analysis is incomplete. Another way to introduce Christianity to a new area would be immigration. I don’t know to what extent this has actually happened.)
I don’t think you need a careful effort to track their exact effectiveness. It would be fairly obvious within a couple of generations that peaceful missionaries fall into one of two categories—either they have some success (as evidenced by some number of converts that they win over) or they have no success (as evidenced by every missionary outreach pretty much collapsing as soon as the missionary either leaves or dies).
A careful effort to track effectiveness could tell the difference between slight success and strong success, but I think that even with a merely cursory checkup people could tell the difference between some success and no success at all.
I’m not surprised. There are many possible explanations for this; a sufficient explanation might be that these are places that early (Latin-speaking) missionaries could be reasonably sure of finding Latin-speaking people, and thus were not required to face the additional hurdle of learning a new language first.
Hmmm… would Japan count?
That is true. I don’t know to what extent that has happened either, but I imagine it would be accompanied (if successful) by a very strong spread of the immigrant’s culture in other ways, as well. (Such as language).
I think missionaries are usually sent to particular places by organizations, and when one leaves another goes. So there isn’t opportunity to identify where they aren’t making progress. And the actual question isn’t really “no success” versus “any success”; no one claimed or implied that converting people is literally impossible, only that generally when Christianity spreads successfully it does so along with military conquest.
You’re welcome to be (having had the facts pointed out to you) as surprised or unsurprised as you please; I remark that much the simplest explanation would seem to be that Christianity mostly spreads by military conquest.
It’s hard to tell how big a Christian community the missionaries there were able to produce. (Right now, as I understand it, Japan is one of the world’s least religious countries, so I guess you are thinking of the 17th century.) So, I dunno: maybe?
… Oh, I thought of another way for Christianity to get into a new area that’s consistent with the “converting people is really ineffective” narrative. Again, no one claims that converting people is 100% ineffective. So, what you do is to find a place whose rulers are very much in control of the population, and send your missionaries to the royal court or whatever. They probably won’t convince the ruler, but if they do then bingo, you’ve got thousands or millions of new converts fairly immediately. I think this has happened once or twice. I bet it’s been attempted a lot more.
It’s not going to be perfect. Sometimes there will be more missionaries than established places to send them, and new missions can be opened—but sometimes a missionary will, through mischance or malice, die before he’s expected to do so and there will be no replacement ready to send.
I don’t actually know about specific instances, but there should be enough data on what happens when a mission is abandoned to be able to tell how successful it can be.
That is a simple explanation, yes. Another simple explanation is that Christianity mostly spreads where language barriers don’t get in the way.
I don’t see either of these two explanations as being significantly simpler than the other.
Hmmmm. That would be a sensible scenario. There have also been cases where non-Christian rulers, perhaps fearing the political power of the church, made practice of the religion illegal, with severe punishments for doing so. Taking the two together, it seems fairly clear that converting the ruler would be a very important step for many successful missionaries.
I remain doubtful, but perhaps you’re right.
Also a reasonable hypothesis. Hmm, do we have cases where the boundaries of the Roman Empire don’t match up well with linguistic boundaries? Probably not, simply because anywhere conquered by the Romans would probably have tended to learn to speak Latin, producing an artificial lowering of language barriers within the empire.
Yes. Though in the most famous recent case I can think of—the Soviet Union—it seems that they weren’t very effective in suppressing Christianity; it came back pretty strongly once the communists lost power. Still, paying a lot of attention to the ruler(s) does seem like an effective strategy for those wanting to spread a religion to a new place.
Going back to the higher-level question of how necessary conquest is to the spread of Christianity: there are apparently something like 100M Christians in China, and not because China was ever conquered by Christians. On the other hand, in the past there seem to have been multiple instances where Christian missions produced a fair number of converts but then the religion largely died out until the next wave of missionaries came in.
My impression after all this is as follows. (1) It is certainly not impossible for Christianity to spread without conquest, and there are a few major instances where it has done so. (2) Most of the world’s Christians, however, are part of Christian communities that got that way by conquest. (3) Attempts to spread Christianity by mere persuasion are sometimes very effective but often very ineffective.
I would expect that all these things apply equally to any other major religion. #2 will of course be untrue for religions that have never gained official approval by any political power, but we should expect all such religions to be pretty small in numbers for that exact reason. Maybe Hinduism is a sort of exception, being found almost exclusively in India, but I am shockingly ignorant of Indian history and don’t know whether e.g. there’s a history of conquest within what is now a single country.
Hmmm. I don’t know enough history to be able to name specific situations, but what about the other way round—countries that learned Latin without being conquered? (Perhaps for ease of trading?)
I believe the Roman Empire once tried to suppress it as well. It doesn’t appear to have worked then, either.
Yes; there seem to have been specific instances where missionary conversion worked, and specific instances where it did not.
Those conclusions do not seem unreasonable to me.
I think it also depends somewhat on the structure of the religion in question. Judaism doesn’t have missionaries, for example, and I don’t think there’s any way for a non-Jew to become a Jew (I may be wrong on that point, but if there is, the Jews certainly don’t advertise it).
You can convert to Judaism. However you are right that they are not actively interested in converting someone.
There seems to be a certain historical arc here. The earliest religions did not try to convert anyone because they were simply part of the culture of an individual nation, and you don’t convert people to a nationality. Judaism is part of this tradition but at the border of the next, namely the point where people realize that insofar as religions make claims about the world, it does not make sense for some people to accept them and some people to reject them. If a claim about the world is true, everyone should accept it. This leads religions to try to convert people. Now we are reaching a third stage: as even religious people come closer to realizing that those claims were not actually true in the first place, even the religious people are backing away again from converting people. An example would be Pope Francis condemning proselytism and saying that he is not interested in converting Evangelicals etc.
Consider also that religions that convert more people tend to spread faster and farther than religions that don’t. So over time religions should become more virulent.
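A toy sketch of that selection effect, with invented growth rates (the exact numbers don’t matter, only the gap between them):

```python
# Two belief systems identical except for how aggressively they recruit.
# Growth factors are illustrative assumptions, not historical estimates.
growth = {"proselytizing": 1.04, "non-proselytizing": 1.01}
pop = {name: 1000.0 for name in growth}

for year in range(300):
    for name in pop:
        pop[name] *= growth[name]

total = sum(pop.values())
for name in pop:
    print(f"{name}: {pop[name] / total:.1%} of all believers after 300 years")
# The more virulent system ends up with essentially all the believers,
# regardless of which one is true.
```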
The book “The Rise of Christianity” is an excellent analysis, using the tools of modern sociology, of the rise of Christianity in the Roman Empire. Key insights:
(1) It grew exponentially, mostly via transmission from people you knew. As your social world became more than 50% Christian, you were more likely to convert. In recent times Mormonism has grown in a similar fashion. (A quick arithmetic sketch follows this list.)
(2) It had many rules that encouraged having large families (no birth control, no abortion, no infanticide; no sex outside marriage, which encouraged young marriage; bans on many sources of fun other than having sex with your spouse; a ban on divorce, which made marriage more secure in a sense).
(3) The higher status of women in Christianity than in the Roman world encouraged women to convert. An example of this higher status was that a pagan man could order his wife to have an abortion. Many of the patriarchal statements in the New Testament were later additions from when the church, which was originally very egalitarian, did become very patriarchal.
(4) Christians were only allowed to marry pagans if the pagan converted or, at a minimum, agreed for the children to be brought up as Christians.
(5) (3) and (4), combined with the shortage of women due to infanticide of female children, meant that men who wanted a wife often had little choice but to marry a Christian. The children would then be Christians.
(6) Once they achieved critical mass they seized control of the state and enacted coercive measures which ruthlessly crushed the other religions. As an example, even visiting pagan temples was banned, books were destroyed, priests killed, temples burned or converted to churches.
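A quick arithmetic sketch of point (1), using the growth figure I recall Stark estimating (roughly 40% per decade, starting from about a thousand Christians around 40 CE); treat the numbers as his estimates, not as data from this thread:

```python
# Compound growth at ~40% per decade from ~1,000 Christians in 40 CE.
christians = 1_000
for decade_start in range(40, 300, 10):  # each decade from 40 CE up to 300 CE
    christians *= 1.4
print(f"roughly {christians:,.0f} Christians by 300 CE")
# On the order of six million, consistent with historians' estimates;
# no mass conversions required, steady person-to-person transmission
# as in point (1) is enough.
```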
Another factor is that Christianity is exclusive—one could not adhere to Christianity and, say, Mithraism at the same time, since Christianity claimed a monopoly on religious truth. Other saviour cults which did not function in the same way would not have been able to work up the same amount of religious fervour, since a man’s trust in his religion is limited by that religion’s trust in itself.
When do you believe this happened, aside from cases where “Jesus” was translated as “Buddha”? Missionaries today typically harass other Christians.
A brief Google points me at this fellow. He was a medieval Franciscan missionary to China, and established what appears to have been a reasonably successful church there that stayed around for about forty years after his death (until the Ming Dynasty arose in 1368 and expelled them from the country).
No, it just has to get big enough that Christians have enough other Christians around that the social structure becomes self-sustaining. Social ostracism is used to get rid of spontaneously appearing non-Christian individuals, not large groups.
So don’t assume it’s an exhaustive list.
It really doesn’t matter for the purposes of my point that it also spreads through peaceful missionaries. You seem to think that I’m complaining that Christianity spreads violently, so you’re bringing up non-violent missionaries. But that isn’t my point.
My point is that Christianity spreads as a meme system. Belief systems have traits which lead them to spread regardless of their truth. Some of those traits I listed above. Other traits include, of course, the belief system telling its members to send out missionaries to spread the belief system. Having missionaries is an adaptation which helps the belief system to spread, in the same way that coconuts being able to float so they can travel to distant islands helps coconuts to spread. Belief systems which spread efficiently will do better than belief systems that don’t, and will soon cover as much area as they can right until they run into other well-adapted belief systems.
Fair enough. A neighbourhood or other small community can be self-sustaining, then.
But it still needs to be started.
As soon as I don’t assume it’s an exhaustive list, your point collapses. Yes, it does spread as a meme system. This is because it is a meme system.
Newtonian physics is also a meme system. And Newtonian physics can also spread as a meme system, in all three of the ways you describe. (I don’t think anyone has ever tried to spread Newtonian physics by the sword, though it could be done in theory; but Newtonian physics has most certainly been spread by parent-to-child transmission and by social ostracisation.)
Similarly for relativistic physics. Or, for that matter, any other descriptive model of the universe, including ones that are perfectly accurate and 100% true. Because any descriptive model of the universe is a meme system, and can therefore be spread as a meme system.
Your conclusion, in short, relies on the idea that Christianity is only spread by means that are not dependent on the truth of its ideas, and never spread by means that are dependent on the truth of those ideas. This you have not shown.
It isn’t all or nothing. These methods of transmission exist for Newtonian physics, but they are much less fundamental to how Newtonian physics spreads.
If it’s medieval times and I announce to the members of my village that I’m not a Christian and act accordingly, I may end up dead, lynched, expelled, tortured by the Inquisition, or sent to a ghetto. If I get up now and announce that I don’t believe in Newtonian physics, not much is going to happen to me unless I have a job that depends on Newtonian physics. The social ostracization may not be completely missing (people can still laugh at me), but it’s far weaker than for Christianity.
And parents teach Christianity to their children because Christianity directly asserts that it is good to teach itself to your children, and implies that their children will be in terrible supernatural peril if they don’t. There really isn’t anything comparable for Newtonian physics that isn’t related to the fact that Newtonian physics works—if parents don’t teach their children not to walk off cliffs, the children won’t grow up to refuse to teach Newtonian physics to their own children.
I’m pretty sure that the main modern transmission vector for Newtonian physics is schoolteacher-to-child (which is very similar to parent-to-child, except that the parent hires an intermediary). Mind you, I don’t have any stats or data handy to back that up, it’s just a general impression.
But again, that happens because it’s piggybacking on the fact that people teach things that work. Since science works, it gets taught. If science didn’t make factual claims with real-world implications, nobody would teach it. Religion is not bound by this; it gets taught even in the absence of such factual claims, because it has a bunch of commands that amount to “spread this religion regardless of the facts”.
Do schools also teach Shakespeare because “that’s what works”?
I think there’s some equivocation here between different meanings of “expert”. Experts in Shakespeare are experts in what Shakespeare said and what things mean within the context of Shakespeare’s plays and Shakespeare’s life. A comparable “expert in Christianity” would be able to tell me what Christianity claims and put it into context as a whole.
But “amateurs should defer to experts”, in reference to Christianity, doesn’t mean “amateurs should accept the experts’ word about Christianity,” it means “amateurs should accept the claims presented by Christianity”. There’s nothing comparable for Shakespeare. In this sense, neither experts nor schools teach Shakespeare at all.
Um.
Going back to the comment that started this all—over here—shows that the quote originally comes from this page, which is an essay written from the atheist perspective on how to go about arguing the historicity of Jesus. The ‘experts’ in question appear (to me) to be not theologians but historians, seeking whether a given person, referenced in certain historical documents, actually lived at one point or not; and the author bluntly states that he expects the odds of said existence, using his best estimate of the requisite probabilities, to be about one in twelve thousand. (He then goes on to say that this is far from the least likely claim in the Christian faith; supernatural miracles are far more unlikely, and thus far better things to call into question.)
So, no, the original context does not say that amateurs should accept the claims made by Christianity (and it does not define professionals by their religious leanings). It says that amateurs should not take a firm position on a question where the experts do not take that firm position. (It does not say that the amateurs have to agree with the experts when those experts do take a firm position; amateurs are allowed to remain uncertain.)
You made a claim that schools teach their curriculum because the curriculum is useful. (e.g. If science didn’t make factual claims with real-world implications, nobody would teach it)
Teaching Shakespeare is an example where it’s not clear whether there is any use to it. Schools might simply teach it because teaching Shakespeare is high status.
The math curriculum is also not optimized for teaching children the kind of math that’s likely to be useful to them. It instead tries to teach them calculus, because calculus is high status while making Fermi estimates isn’t.
I didn’t claim that the only reason schools teach their curriculum is that it is useful. There can be (and are) different parts of the curriculum taught for different reasons, some related to being useful and some not.
How do you know that the reason schools teach Newton’s laws is that they are useful, and not for status purposes?
How well do they serve each purpose? I’m given to understand Newton’s Laws are highly useful in engineering. How do they compare with alternative means of producing status, like teaching everyone ‘Ubik’ and ‘fnord?’
But most students will not do jobs as engineers.
Touch typing is a useful skill for nearly all jobs yet most schools don’t teach it.
There are no professors of touch typing to give the subject academic prestige. On the other hand, academic physics has prestige.
Calculus has more academic prestige than statistics and thus schools are focusing more on teaching calculus.
Or Latin. There’s a bedrock sense of what works, and there is a more socially defined sense. If your society values some religion, dead language, or author, then it works to teach it, because it gives people acceptability and status.
Schools still teach Latin?
...mine didn’t. (It did teach Shakespeare, though).
I live in a suburban school district in the Southeast US. The public middle and high schools here do teach Latin as one of the foreign language options, along with Mandarin, French, German and Spanish.
I’ve no idea if they do now. I went to an old-fashioned school, a long time ago, which did.
This is a good argument, and one way of seeing that is by contrast with Islam, where the method described is historically much closer to being exhaustive—in general it was indeed introduced into new areas by means of the sword, and missionaries did take swords with them as standard equipment. (In the future Islam may continue to spread more in the fashion that Christianity did in the past, however.)
This idea seems to be more or less taken for granted by people who oppose Islam. Is there actually a perspicuous source of data describing in detail how Islam spread, that allows assessments of that kind to be made?
You could start by reading about the topic on Wikipedia (that will also refer you to many other sources.) Of course you could say that probably most of those articles and their sources were written by non-Muslims. But that is like saying that most people who have argued for any position have tended to be people who believe that position, and therefore we should ignore their arguments.
reading about the topic on Wikipedia
Just because there’s an article on the spread of Islam doesn’t mean that a balanced quantitative analysis on the means of its proliferation either exists or is possible. Usually when someone asserts something to that effect, the onus is on them to support their assertion by referencing a specific source.
The predominant ways in which Christianity has spread are conversion by the sword, parent to child transmission, and social ostracism for people who refuse to believe it. It spreads for reasons related to its fitness as a system of ideas but unrelated to its factual truth. This is not how evolution spreads.
Also, distinguish between “anyone can claim X” and “anyone can correctly claim X”. Creationists could claim that evolution spreads the same way—but they’d be wrong.
The historical survival of religions and societies is a matter of factual truth. Evolution rewards success, not epistemic purity. Is a peacock’s plumage related to factual truth?
Please don’t be Internet-pedantic here. “Factual truth” here means “the factual truth of the statements made by the religion”, not “factual truths about the religion”.
Maybe there’s a confusion being caused here by the sentence “This is not how evolution spreads.”
It could mean at least one of the following: 1) “This is not how the theory of evolution itself was spread” 2) “This is not the mechanism according to which evolution spreads ideas”
It seems as if Lumifer interpreted your statement in the second sense (as I did initially), whereas reading your post in its original contexts suggests the first sense was the one which you intended.
Assume a climate change denier or a creationist who (a) makes such an argument and (b) firmly believes it to be correct. How would he be best convinced that he is, in fact, wrong?
Same way you convince him of anything else—by arguing specific facts.
Just because two sides can produce arguments with similar forms doesn’t mean they also have similar facts. “Anyone can claim X”, divorced from the facts about X, is only about having similar forms.
Hmmm. Could work. Or perhaps the first thing he’d conclude is that you are infected by the meme plague, and the second thing he’d do is suspect that you are trying to infect him with the meme plague.
He could respond to this in two ways; either by ending the debate, in the hope of immunising himself; or by arguing against you, in the hopes of curing you.
...huh. Actually, thinking about this, a lot of bad debate habits (ignoring the other person’s evidence, refusing to change your mind, etc.) actually make a lot of sense when seen as protective measures specifically to prevent infection by meme plagues.
What to do, then, when experts sometimes are infected with meme plagues, have conflicts of interest, and are able to prevent alternative views from being presented?
If all experts are infected with meme plagues, and are able to prevent alternative views from being presented, then you have a problem. This implies that one of the following is true:
Studying the subject at all carries a strong risk of meme plague infection
Only those pre-infected with the meme plague have the interest and/or the ability to study the subject
You’re wrong about something—either the presence of the meme plague or its spread or… something.
You could attempt to study the subject to expert level yourself, taking appropriate anti-meme-plague precautions; but you have to be very careful that you’re not shutting your ears to something that’s really true (you don’t want to become a climate-change-denying weather expert, after all) so you’ll need to seriously consider all necessary data (maybe re-run some vital experiments). This would take significant time and effort.
I don’t know what other strategy could reasonably be followed...
Real-life example. A relevant quote:
Huh.
Okay. In this particular real-life example, though, it is clear that the politicisation is in the infrastructure around the science, not in the science itself. That is to say, learning climate science is not memetically dangerous—it is simply difficult to get a paper published that does not agree with certain politics. And that is bad, but it is not the worst possibility—it means that someone merely studying climate science is safe in so doing.
So, in this particular case, the solution of studying climate science oneself, becoming an expert, and then forming a suitable opinion is a viable strategy (albeit one that takes some significant time).
(An alternative solution—which will also be a hard thing to do—is to create some form of parallel infrastructure for climate science; another magazine in which to publish, another source of funding, and so on. There will likely be serious attempts to politicise this infrastructure as well, of course, and fending off such attempts will doubtless take some effort).
If you are an autodidact and study the climate science by yourself from first principles, yes, it’s not dangerous. However, if you study it in the usual way—by going to a university, learning from professors and published papers, etc.—you will absorb the memes.
Hmmmm. Depends how ingrained the memes are in the material. Oh, you’d certainly have awareness of the memes—but accepting them is a different story, and a certain skepticism in a student (or in a professor) can probably blunt that effect quite a bit.
Even if the memes are that thoroughly integrated, though, the only effect is to make the establishment of a parallel infrastructure that much more appropriate a solution.
Oh, you actually believe this crap. Then you should be ashamed of yourself.
Request denied.
Anything else I should do?
There is another possibility: the selection process for experts eliminates diverse perspectives.
Try getting tenure as a political scientist as a conservative republican, as an example.
But there are more subtle problems. For example, the selection process for medical doctors actively screens out people with a high level of mathematical and statistical skill, knowledge and ability.
It does this by very strongly selecting for other characteristics—ability to memorize vast arrays of words and facts, physical and mental stamina. Because if you strongly select for X, it will generally be at a cost to anything else that is not strongly correlated with X.
This is not following the advice of the parent comment, since we do not yet have a Joseph Smith Myth theory.
Well, if you make the assumption that Jesus existed and behaved as described in the New Testament, this reduces to Lewis’s trilemma. The criticisms section of that page outlines some of the possible responses.
The option I personally find most compelling is that there’s plenty of room for distortion and myth-making between Jesus’s ministry and the writing of the earliest Christian works we know about: at least four decades [ETA: got this wrong earlier; see downthread], possibly more depending on how generous you’re being. Knowing what we do about how myths form, that’s more than enough time for the supernaturalism in the Gospels to have accumulated. Look at it this way and it’s no longer a question of “lunatic, liar, or Lord”; rather a colossal game of Telephone played between members of a fragmented and frequently persecuted sect, many of whom would have had incentive to play up the significance of the founding events. There are more recent religious innovations that you can look at for comparison: Mormonism, for example, or Rastafarianism.
Some have even used this to argue against the historicity of Jesus, although I don’t think doing so is necessary to a secular interpretation of the New Testament.
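To get a feel for how quickly a game of Telephone degrades a message, here’s a toy simulation (all parameters invented; it shows the shape of the process, not an estimate):

```python
# Each retelling corrupts a small fraction of the message, and errors compound.
import random

random.seed(0)
original = list("a teacher in galilee healed the sick and was crucified")
message = original[:]

retellings = 80          # say, two retellings a year for forty years
error_rate = 0.01        # chance each character garbles per retelling
alphabet = "abcdefghijklmnopqrstuvwxyz "

for _ in range(retellings):
    for i in range(len(message)):
        if random.random() < error_rate:
            message[i] = random.choice(alphabet)

intact = sum(a == b for a, b in zip(original, message)) / len(original)
print(f"{intact:.0%} of the original survives after {retellings} retellings")
```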
How do you get “between a hundred and twenty and two hundred” years? The standard story puts the death of Jesus around 30 B.C.E., and dates the composition of the earliest gospel to around 70 B.C.E. Admittedly, the standard story is certainly not beyond question[1] but I’d be interested if you had any specific reasons for advocating a different timeline. Of course, 40 years is more than sufficient for pretty much unlimited distortion and mythmaking anyway.
[1] The chain of reasoning for dating the composition is, sadly, too often along these lines: we know that A was certainly written before date X, because A must be before B. We know this because B contains a vague reference that kind of looks like it refers to A, and it doesn’t look all that likely that B was tampered with by later scholars to insert the reference. B must be before C for similar reasons, and C before D, and D before E, and E actually contains some fairly specific references to being written around date Y which we again don’t think are all that likely to have been tampered with by later copyists. It is unlikely at each stage that the next writer acquired and made use of the text as soon as it was written, so we subtract a few years from Y for each stage for the transmission of the text and arrive at X as the latest possible date for A to have been written.
My mistake, I was thinking of non-Christian references to the life of Jesus (and didn’t have the dates quite right there either; Tacitus wrote in the early second century and Josephus late in the first, although both references are rather brief). As best I can tell, you’re right about the chronology of Christian writings; Mark is thought to be the earliest of the surviving Gospels, and that was probably written around CE 70. The hypothetical Q source may have come somewhat earlier, but seems to have been a collection of sermons and proverbs rather than a gospel as such, if its projected influence on later works is anything to go by.
Edited to correct. But yes, forty years is a large enough gap to explain a lot of drift.
Ugh. Why’d I write “B.C.E.” when I meant “C.E.”? Oh well, I guess it didn’t confuse anyone. Anyway, besides a handful of people who question the usual gospel dating and try to argue that it was really considerably later, I know there’s also a tiny minority of scholars who date the life of Jesus much earlier, as much as a century or more before what the standard story reports. Hence, I’d wondered if you were a subscriber to one of those theories. It means having to assume some of the references to contemporary events in the gospels are just wrong, but honestly the standard story also has to do that; it just has a different set of mistakes it needs to explain away. Still, it’s a pretty tiny minority theory, and I haven’t really investigated what the evidence for it is supposed to be.
Asking similar questions about the Quran and various other religions’ holy texts, and about the general popularity of many cults and suchlike, makes you realise that an idea, or set of ideas, has no requirement to be true in order to be popular. In fact, looking at the self-help section in a bookstore reminds you of this (see the first post of Lukeprog’s self-help sequence). I also believe that Richard Carrier has a book called ‘Not the Impossible Faith’ which discusses this question, although definitely check that before buying it for that purpose.
I really am having trouble doubting my conviction in rational thought. I can’t visualize an alternative. I can visualize an alternative to my atheist philosophy though, since if God descended from heaven and handed me a bunch of concrete evidence that He exists, I wouldn’t say ‘ah, rationality was wrong.’ I would say ‘Oh, so you exist. I’ll eat my hat on that one and concede that my confidence in your non-existence has been defeated, but to be fair until just now you’ve given me no rational reason to believe in you.’ I’m a rational atheist because all of the convincing evidence is in that bucket, but even if a religion came along that was rigorously provably correct I would just be a rational theist. And I would have many pointed questions for that deity about the way life in the universe seems to be ‘designed’ in the sloppiest, most reckless way possible, like a programmer trying to compile all of the text from Wikipedia and then making random edits until it returned with no errors. Yes, I stole a joke from xkcd.
Well, imagine the world was such that “rational thought” (whatever you mean by it) tended to result in incorrect beliefs, and a different type of thinking (say, choosing positions randomly) always achieved better accuracy. You can’t fool it either; pretending to choose beliefs randomly won’t work unless you actually believe them.
...I’m having trouble imagining how such a world would work even in principle. It seems logically self-contradictory.
There is a God/simulator, and he inspects your thought processes, making the world such that rational thinking fails.
For a person who has already escaped from religion, a thought about “What general strategy would a religious person have to follow in order to escape their religion?” is like a vegetarian thinking about how to make all people on Earth stop eating meat. Not a very constructive thought. If one starts thinking about such a general strategy, then one implicitly sets a goal of somehow assisting all religious persons to escape from their religion. But that kind of goal is not necessarily a good one :) Instead, a person who already (or from the start) escaped the religion can spend that time looking for more people with similar minds.
It seems to me that in our civilisation we have a quite effective way of dealing with a deficiency of crises of faith—assuming the narrative of epistemological societal progress, the people with poor epistemic hygiene (along with a smaller mix of those with better hygiene) die off, and a new generation is generally more able to look at the issues with a fresh set of eyes.
Not sure, however, how true it is that accurate memes tend to live and propagate—there are quite a few cases that are still disputed despite having been settled for hundreds of years, although I may be looking at too small a time frame here.
Saint Peter of Verona, patron saint of inquisitors, practiced this method when dealing with suspected heretics. By allowing himself to have a crisis of faith when confronted with the sincerity of his opposition, his beliefs came out stronger in the end and were often persuasive. Saint Peter not only never lost his faith, but through his example, inspired his assassin to devote his life to the Church.
I suggest instead finding an unforgivable sin within the religion you are seeking to escape. Then committing that sin gives you a personal incentive to build a belief structure that does not require good standing within that religion. For Christianity, simply saying “I deny the Holy Spirit” can be sufficient to meet this condition. For Islam, saying the words “Allah is not God, and Mohammed is not a prophet” might work, but I’m less familiar with Muslim theology.
Is there such a cardinal sin in ‘rationality’?
It seems to me that a rationalist provisionally believes that which is supported by the evidence, by the rationalist’s experience, and by logical demonstration/argumentation. If there is no such rational basis to believe in a particular religion, it is not clear to me why the rationalist would need any sort of trick to escape from it.
It seems to me that deciding what belief you want to have and then tricking yourself into believing it via a ritualistic act rather than via an examination of the evidence, etc., (in other words, following the advice given in your post) could be considered a cardinal sin of (epistemic) rationality.
Thank you! So, the path of purposeful self-deception is not the road to higher rationality, no matter how well it happens to work.
To use the monkey riding on the tiger analogy for human cognition, I wonder which is more effective. The monkey putting the tiger in a pen and swinging through the trees alone... or the monkey that ties a steak to a stick and rides the tiger.
Correct
I suspect the monkey is better off putting the tiger in a pen and swinging through the trees alone—with a steak and a stick it is just a matter of time before the monkey loses control of the situation and becomes a side dish to the steak. Similarly, trying to harness self-deception to lead one to truth/rationality is apt to backfire.
Taking the analogy further, to a community of tiger-riding monkeys... the monkey that waves the steak on a stick in front of some other monkey’s tiger probably has a future in marketing.
The monkeys who decide to pen their tigers may have a problem: the tigers are still present, may be unhappy about their confinement, and after a time the monkeys may not watch them as closely as they should...
As a case in point, I give you the prevalence of polyamory in the rationalist community. Historically, polygyny has been a feature of insular communities that wanted to become more insular. Is polyamory serving its purpose as a strong social barrier to entry for the high table of the rationalist community, or is it really just pure rationality at work?
I don’t see how it can be very useful as a “strong social barrier to entry”. It’s not as if you have to be poly to be accepted as a rationalist, is it?
The problem with faith is that for many people it has become a part of their identity. The brain cells are intertwined and when someone attacks their faith, their self-protection mechanism kicks in and their rational thinking turns off.
It’s basically like Plato’s Allegory of the Cave, where prisoners choose to disbelieve the real world and go back to their own fake reality.
Link is dead. Is this the new link?
About ten years late to the party here, but regarding Aumann, I think you do him an injustice—he is well aware of the conflict between rationality and God. Here is an interview with him that goes in depth into these issues:
http://www.ma.huji.ac.il/~hart/papers/md-aumann.pdf?
He says: “Religion is an experience, mainly an emotional and aesthetic one. It is not about whether the earth is 5,765 years old.” He goes into more detail. For him, the question of whether or not god really exists is almost irrelevant to his religion. He then delves into game theory and explains how religion and the idea of god allow for coordination among otherwise rational agents, leading to better outcomes for all.
Yep; in the time since this was written, the LW community has gone pretty heavily in the direction of “let’s figure out how to reclaim the coordination and community benefits of religion separately from the weird belief stuff”, and (imo) done pretty well at it.
For me, I’d already absorbed all the right arguments against my religion, as well as several years’ worth of assiduously devouring the counterarguments (which were weak, but good enough to push back my doubts each time). What pushed me over the edge, the bit of this that I reinvented for myself, was:
“What would I think about these arguments if I hadn’t already committed myself to faith?”
Once I asked myself those words, it was clear where I was headed. I’ve done my best to remember them since.
Why would this question be relevant? Let’s say that the answer is “I would think that the arguments in favour of religion are stupid”. What is that supposed to prove?
I used to believe, as do many Christians, that an open-hearted truthseeker will become convinced of the existence of the true God once they are exposed to it. To say otherwise makes missionary work seem rather manipulative (albeit still important for saving souls). More importantly, the principle is well attested in Christian thought and in the New Testament (Jesus with Nicodemus, Paul with the Athenians, etc.).
There are and have been world religions that don’t evangelize because they don’t have the same assumption, but Christianity in particular is greatly wounded if that assumption proves false.
Oh, so then the question should be “What would I think about these arguments if I hadn’t already committed myself to faith and I were an open-hearted truthseeker?”. Your claim is that:
1) such a person should consider arguments for the Christian faith to be good, on balance (otherwise “Christianity is greatly wounded”), and
2) such a person often would not consider arguments for the Christian faith to be good.
Why do you believe (2)? That is, how can you know what a sincere seeker is going to think of any particular argument? Or, even worse, about all the arguments so that they can decide which theory is more probable on the balance?
I met unbelievers who found some arguments convincing and others who found them unconvincing, but there is no way for me to know if any of them were open-hearted truthseekers. If doctrine (1) is true, it’s just not an empirically verifiable doctrine, since there is no observation by means of which you could determine even your own sincerity, much less that of others.
https://www.reasonablefaith.org/writings/question-answer/is-gods-existence-evident-to-every-sincere-seeker
Really?
Here’s a newspaper review whose author says Pythagoras “may well be a mythical amalgam of various forgotten sages”. The book under review itself says “Sadly, it is now almost universally assumed by classical scholars that Pythagoras never existed”. I suspect this is partly tongue in cheek, since the other information I can find doesn’t seem consistent with what it says on its face, but if it’s a joke I think it’s the sort that depends on not being too far from the truth. Here’s the History Channel suggesting Pythagoras may not have existed. Everything I can find about Pythagoras, scholarly or popular, emphasizes that our sources of information about him are late and untrustworthy and that scarcely anything is known about him.
It looks to me as if the usual belief is “probably real but essentially nothing is actually known about his life”, and a few people, mostly not actual scholars, say “actually, the evidence is so thin he may well not have been real”. Which is not so different from the situation with Jesus, except that most people who aren’t out-and-out mythicists about Jesus are willing to concede that some things are known about his life.
I would rather not be around people who kept telling me true minutiae about the world and the cosmos, if they have no bearing on the problems I am trying to solve.
Will, not wishing to be told pointless details is not the same as deluding yourself.
I was discussing the placebo effect with a friend last night though, and found myself arguing that this could well be an example of a time when more true knowledge could hurt. Paternalistic issues aside, people appear to get healthier when they believe falsehoods about the effectiveness of, say, homeopathy or sugar pills.
Would I rather live in a world where doctors seek to eliminate the placebo effect by disseminating more true knowledge; or one where they take advantage of it, save more lives, but potentially give out misinformation about what they’re prescribing? I honestly don’t know.
Why do you consider religious faiths to be obviously untrue? “They would be child’s play for an unattached mind to relinquish, if the skepticism of a ten-year-old were applied without evasion.” Why do you consider the questions of a ten-year-old to be unanswerable except through evasion? On the contrary, such questions are almost invariably easily answerable by anyone who has the slightest knowledge of the philosophy of religion and the doctrine of their particular religion. I would be silly to be guided by the questions of a ten-year-old instead of the answers of a 20- or 45-year-old whose knowledge of the subject matter is non-negligible.
“A belief whose absurdity a fresh-eyed ten-year-old could see.” Why would it matter if a belief seems absurd to a ten-year-old? It would be fairly stupid to be guided by children’s opinions about an important subject.
I came back to this post to draw inspiration from it and found several issues with it, that I now spot as a much older and more mature adult, almost 30.
God and other spiritual and/or religious topics are placeholders to be tabooed, so there can be many individual beliefs behind them. What is sound? Is it the wave or the phenomenon in the brain? Same with God. Every theist has his or her own little bubble of belief.
We cannot refute the ARGUMENTS for God easily; we have to refute the THINKING.
That is what the article indeed argues and tries to explain, but I feel it fails to highlight that some of the beliefs hiding behind a theist’s worldview are likely TRUE and VALID. It is hard to let go of the false beliefs, because they are entangled with true beliefs. This is the case with religion and the like, not with specific false beliefs. The larger the Mystery word, the more stuff you can throw behind it.
There are broad and vague claims here, such as Occam’s razor being more productive than faith in history. Given how new the idea is, I wonder in what sense? And what evidence is there for this? Although I am inclined to agree, this is far from obvious.
Not all children of atheists grow up to be atheists. I feel this is worth pointing out.
There are hard limits to scientific knowledge and what can be mathematically provable. Any belief, whether about God or something more specific, can hide behind our general ignorance.
In truth, this is why God IS the simplest explanation for many theists. We cannot explain or prove, within current paradigms, how the always-has-been universe works, yet somehow we also claim to know it is growing in complexity and expanding all on its own. This may be true, but it is far from what a ten-year-old can easily grasp. So let’s say God made it instead.
I feel this IS the Occam’s razor argument for theists. How can we refute existential off-the-shelf answers to someone who DOES NOT want to go through the rigor of experimental science and rational query in order to investigate the origins of everything? When we know from the start that we will most likely never find a perfect answer anyway? It is much simpler to reference a God who always was. The same goes for any field where there is a mystery that can be hidden behind our ignorance.
Fair point: Christianity in Europe is more “persuade the ruler and others will fall into line” than “spread by the sword”. (Though IIRC there’s some reason to think that the ruler was willing to be persuaded partly because substantial fractions of his people already had been.)
it has no practical impact on how an ideal rationalist should behave
With respect to themselves, not necessarily to others. Withholding information or even lying can be rational.
By “bending with the wind” I don’t mean “bending with public opinion”. I mean not being emotionally attached to your views.
In retrospect, I was certainly leaving Christianity the day I decided that if God existed, he could not possibly object to me honestly trying to determine The Truth. Doubts stopped feeling like sins.
I think your something-to-protect must be the accuracy of your map, not anything on the map. (at least for a moment)
If someone says you need to fire a revolver at your something-to-protect, you will raise objection based on strongly held beliefs about the effects of revolvers. It’s so hard to risk those beliefs because, with your current belief set, someone who lacked those beliefs would lose their something-to-protect. You can’t stop believing as long as you believe the cost for disbelieving is any kind of hell.
I was about to say I was lucky to have such a god, but no: I constructed a god just nice enough to let me relieve that cognitive tension.
It’s relatively easy to invent and defend very contrarian ideas when you start off joking. This could be a technique if you’re confident you can later break a good idea out of the silly-idea prison.
In a PD, everyone having accurate information about the payoff matrix leads to a worse outcome for everyone, than some false payoff matrices you could misinform them with. That is the point.
Do you agree that in a PD, it is not the case for any individual that that individual is harmed by that individual’s knowledge? Your point goes through if we somehow think of the collective as a single “you” with beliefs and preferences, but that raises all sorts of issues and anyway isn’t what Eliezer was talking about.
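To make the disagreement concrete, here’s a minimal sketch with invented payoffs: with the true matrix both players defect and do badly, while a false matrix in which cooperation dominates leaves both actually better off:

```python
# payoff[my_move][their_move] -> my payoff; "C" = cooperate, "D" = defect.
true_payoff = {"C": {"C": 3, "D": 0}, "D": {"C": 5, "D": 1}}   # a standard PD
false_payoff = {"C": {"C": 3, "D": 2}, "D": {"C": 1, "D": 1}}  # cooperation dominant

def dominant_move(payoff):
    # Return the move that does at least as well against every opponent move.
    for mine in ("C", "D"):
        other = "D" if mine == "C" else "C"
        if all(payoff[mine][t] >= payoff[other][t] for t in ("C", "D")):
            return mine

for believed in (true_payoff, false_payoff):
    move = dominant_move(believed)     # both symmetric players reason alike
    actual = true_payoff[move][move]   # but reality pays out the true matrix
    print(f"both play {move}, each actually receives {actual}")
# Output: both play D, each actually receives 1
#         both play C, each actually receives 3
```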
If the wind is following Occam’s razor or something internal, then it can be blowing in the wrong direction...
Isn’t that the subject of The Ritual?
One more argument against deceiving epistemic peers when it seems to be in their interest is that if you are known to have the disposition to do so, this will cause others to trust your non-deceptive statements less; and here you could recommend that they shouldn’t trust you less, but then we’re back into doublethink territory.
Phil Goetz, who I was replying to, was saying that type of thought should be unnecessary, if you don’t hang on to your ideas tightly.
Not hanging on to ideas tightly is great for engineers and experimental scientists. It doesn’t matter to a chemist whether MWI or Bohm is right. He can use either, switching back and forth between the viewpoints as he sees fit.
For a theoretical quantum physicist, he has to have some way of determining at which face of the knowledge mine to work; he has to pick one or the other. If the reason for picking is not strong, he might split his work and get less far with either.
For this sort of person it makes sense to pick one direction and run with it, getting invested in it etc. At least until he comes across reasons that maybe he should take the opposite direction or neither direction, then the crisis of faith might be needed.
From where I’m standing, the spouse thing looks like obvious nonsense (of the category: not looking for a third alternative). You’d be far better off learning to share—especially since, if your spouse died, you’d have someone to talk to.
1) Do you believe this is true for you, or only other people?
I don’t fit the premise of the statement—my cherished spouse is not yet late, so it’s hard to say.
2) If you know that someone’s cherished late spouse cheated on them, are you justified in keeping silent about the fact?
Mostly yes.
3) Are you justified in lying to prevent the other person from realizing?
Mostly no.
4) If you suspect for yourself (but are not sure) that the cherished late spouse might have been unfaithful, do you think that you will be better off, both for the single deed, and as a matter of your whole life, if you refuse to engage in any investigation that might resolve your doubts one way or the other?
Depends on the person. Some people would be able to leave their doubts unresolved and get on with their life—others would find their quality of life affected by their persistent doubts.
If there is no resolving investigation, do you think that exerting some kind of effort to “persuade yourself”, will leave you better off?
No. You can count that as a win if you like—“deluding myself” is too strong. “I am better off remaining deluded …” is more likely to be true for some people.
5) Would you rather associate with friends who would (a) tell you if they discovered previously unsuspected evidence that your cherished late spouse had been unfaithful, or who would (b) remain silent about it?
Supposing I am emotionally fragile and might harm myself if I discovered that my spouse had been unfaithful, (b). Supposing that I am emotionally stable and that I place great weight on having an accurate view of the circumstances of my life, (a). Other situations, other judgment calls.
Which would be a better human being in your eyes, and which would be a better friend to you?
Depends on how I can reasonably be expected to react.
Caledonian, maybe you had arguments on this thread previously, but it seems more like the place for that sub-debate.
I became a Christian because I was a Bayesian first. I know there are others like me. I saw and experienced evidence that caused me to positively update my belief.
Now if you don’t like that argument, then please tell me: how can anyone become an atheist via Bayesian updating? Can your posterior really go to a point mass at zero (belief in God)? If so, please tell me what prior you were using. If not, please tell me how you define atheism.
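To put that in odds form (numbers invented):

```python
# With a prior strictly between 0 and 1 and finite likelihood ratios,
# updating shrinks the posterior geometrically but never reaches zero.
prior_odds = 1.0        # P(God) = 0.5 to start
likelihood_ratio = 0.1  # each observation favours "no God" ten to one

odds = prior_odds
for n in range(1, 11):
    odds *= likelihood_ratio
    posterior = odds / (1 + odds)
    print(f"after {n} observations: P(God) = {posterior:.3g}")
# The posterior gets arbitrarily small but is never exactly zero, so
# "atheist" has to mean "assigns a very low probability", not P = 0.
```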
@ Paul Murray
There is no straw man. You’ve presumed that I meant that Christian = “Pr(god)=1”. That was never my claim. It had seemed that atheist was being used as Atheist = “Pr(god)=0”, but E. clarified his position. I think that agnostic (in the literal sense) is always a better term than atheist, but that’s just semantics.
The real issue (to me) is what Christians (or other “people of faith”) think of the atheistic position, and vice versa. Christians are often derided here as uneducated or un-Bayesian.
My point is not to convince you to believe, but to ask whether you think that a rational Bayesian can ever become a Christian (or person of other faith), given that we have different life experiences and different priors? Can it be so? And if so, then why the derision? Is that not an irrational bias?
I’ll leave it up to God to care about space-time location of your dinner.
Eliezer, that’s a John McCarthy quote.