The Meditation on Curiosity
The first virtue is curiosity.
As rationalists, we are obligated to criticize ourselves and question our beliefs . . . are we not?
Consider what happens to you, on a psychological level, if you begin by saying: “It is my duty to criticize my own beliefs.” Roger Zelazny once distinguished between “wanting to be an author” versus “wanting to write.” Mark Twain said: “A classic is something that everyone wants to have read and no one wants to read.” Criticizing yourself from a sense of duty leaves you wanting to have investigated, so that you’ll be able to say afterward that your faith is not blind. This is not the same as wanting to investigate.
This can lead to motivated stopping of your investigation. You consider an objection, then a counterargument to that objection, then you stop there. You repeat this with several objections, until you feel that you have done your duty to investigate, and then you stop there. You have achieved your underlying psychological objective: to get rid of the cognitive dissonance that would result from thinking of yourself as a rationalist, and yet knowing that you had not tried to criticize your belief. You might call it purchase of rationalist satisfaction—trying to create a “warm glow” of discharged duty.
Afterward, your stated probability level will be high enough to justify your keeping the plans and beliefs you started with, but not so high as to evoke incredulity from yourself or other rationalists.
When you’re really curious, you’ll gravitate to inquiries that seem most promising of producing shifts in belief, or inquiries that are least like the ones you’ve tried before. Afterward, your probability distribution likely should not look like it did when you started out—shifts should have occurred, whether up or down; and either direction is equally fine to you, if you’re genuinely curious.
Contrast this to the subconscious motive of keeping your inquiry on familiar ground, so that you can get your investigation over with quickly, so that you can have investigated, and restore the familiar balance on which your familiar old plans and beliefs are based.
As for what I think true curiosity should look like, and the power that it holds, I refer you to “A Fable of Science and Politics” in the first book of this series, Map and Territory. The fable showcases the reactions of different characters to an astonishing discovery, with each character’s response intended to illustrate different lessons. Ferris, the last character, embodies the power of innocent curiosity: which is lightness, and an eager reaching forth for evidence.
Ursula K. Le Guin wrote: “In innocence there is no strength against evil. But there is strength in it for good.”1 Innocent curiosity may turn innocently awry; and so the training of a rationalist, and its accompanying sophistication, must be dared as a danger if we want to become stronger. Nonetheless we can try to keep the lightness and the eager reaching of innocence.
As it is written in “The Twelve Virtues of Rationality”:
If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction. Curiosity seeks to annihilate itself; there is no curiosity that does not want an answer.
There just isn’t any good substitute for genuine curiosity. A burning itch to know is higher than a solemn vow to pursue truth. But you can’t produce curiosity just by willing it, any more than you can will your foot to feel warm when it feels cold. Sometimes, all we have is our mere solemn vows.
So what can you do with duty? For a start, we can try to take an interest in our dutiful investigations—keep a close eye out for sparks of genuine intrigue, or even genuine ignorance and a desire to resolve it. This goes right along with keeping a special eye out for possibilities that are painful, that you are flinching away from—it’s not all negative thinking.
It should also help to meditate on “Conservation of Expected Evidence.” For every new point of inquiry, for every piece of unseen evidence that you suddenly look at, the expected posterior probability should equal your prior probability. In the microprocess of inquiry, your belief should always be evenly poised to shift in either direction. Not every point may suffice to blow the issue wide open—to shift belief from 70% to 30% probability—but if your current belief is 70%, you should be as ready to drop it to 69% as raise it to 71%. You should not think that you know which direction it will go in (on average), because by the laws of probability theory, if you know your destination, you are already there. If you can investigate honestly, so that each new point really does have equal potential to shift belief upward or downward, this may help to keep you interested or even curious about the microprocess of inquiry.
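The claim that the expected posterior equals the prior can be checked directly with Bayes’ rule. Here is a minimal sketch using the 70% figure from the paragraph above; the likelihoods are illustrative assumptions, not values from the text:

```python
# Conservation of Expected Evidence: the posterior, averaged over the
# possible observations (weighted by how likely each observation is),
# equals the prior exactly. The likelihoods below are made-up examples.

prior = 0.70                # P(H): current belief, e.g. 70%
p_e_given_h = 0.8           # P(E | H)       -- assumed for illustration
p_e_given_not_h = 0.4       # P(E | not H)   -- assumed for illustration

# Marginal probability of observing the evidence at all
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior after observing E, and after observing not-E (Bayes' rule)
post_if_e = p_e_given_h * prior / p_e
post_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)

# Averaging the two posteriors by how likely each outcome is
# recovers the prior: if you knew which way belief would move,
# you would already have moved.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - prior) < 1e-12
```

Whatever likelihoods you substitute, the identity holds; only the *spread* of the two possible posteriors changes, never their weighted average.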
If the argument you are considering is not new, then why is your attention going here? Is this where you would look if you were genuinely curious? Are you subconsciously criticizing your belief at its strong points, rather than its weak points? Are you rehearsing the evidence?
If you can manage not to rehearse already known support, and you can manage to drop down your belief by one tiny bite at a time from the new evidence, you may even be able to relinquish the belief entirely—to realize from which quarter the winds of evidence are blowing against you.
Another restorative for curiosity is what I have taken to calling the Litany of Tarski, which is really a meta-litany that specializes for each instance (this is only appropriate). For example, if I am tensely wondering whether a locked box contains a diamond, then rather than thinking about all the wonderful consequences if the box does contain a diamond, I can repeat the Litany of Tarski:
If the box contains a diamond,
I desire to believe that the box contains a diamond;
If the box does not contain a diamond,
I desire to believe that the box does not contain a diamond;
Let me not become attached to beliefs I may not want.
Then you should meditate upon the possibility that there is no diamond, and the subsequent advantage that will come to you if you believe there is no diamond, and the subsequent disadvantage if you believe there is a diamond. See also the Litany of Gendlin.
If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.
1. Ursula K. Le Guin, The Farthest Shore (Saga Press, 2001).
This is especially well written, btw.
It just seems so old-fashioned to think that it is courageous to be willing to doubt any of your beliefs. Here’s a nice reflection on the matter with regard to the epistemic propriety of religious belief:
http://comp.uark.edu/~senor/wrong.html
An OB post from November 2006 is a useful counterpoint to van Inwagen’s paper, and there’s been other discussion of van Inwagen’s claims, generally in the context of the Aumann agreement theorem.
I think van Inwagen is wrong; if he really considers that David Lewis’s disagreement with his position has enough evidential force that his continued holding of it is ill-supported by the evidence, then he should stop holding it.
van Inwagen doesn’t really argue against this; he just says that it seems obvious to him that he’s entitled to hold whatever opinions he finds himself holding, with whatever confidence he finds himself attaching to them. And in one sense he certainly is entitled to; he is also entitled to believe that the government of the USA has been taken over in secret by alien lizard-creatures. But he isn’t entitled to do that and still be thoroughly rational.
Well, van Inwagen does offer one pseudo-argument: he “doesn’t want to be forced” to adopt a position of “general philosophical skepticism”, which he thinks accepting Clifford’s rule of evidence would commit him to. Well, OK, but it seems a bit poor for a philosopher to be so openly embracing wishful thinking. I don’t want to be a philosophical skeptic, and neither do other philosophers; therefore philosophical skepticism must be rejected (bah!); therefore Clifford’s rule of evidence must be rejected.
Someone with more intellectual self-respect would say not “I must have some mysterious incommunicable philosophical insight unavailable to Lewis” but “I think Lewis is missing these specific points, and here is why he is wrong in what he’s said about them”.
I have a dark suspicion that deep down, van Inwagen is a general philosophical skeptic (after all, he’s said that adopting a policy of basing one’s beliefs on the evidence would lead to that position), but he finds it more congenial to go on making confident assertions on the basis of insufficient evidence.
Incidentally: whether something “seems old-fashioned” has very little to do with whether it’s true.
The Earth revolves around the Sun? Why, how old-fashioned!
G,
Welp, I’ve only been reading this blog for 2007. Silly me. I just read the post and all the comments. I have to say that Philip Bricker has the upper hand.
Bricker suggested the option that you advocate, by the way. But he dismisses it. Here’s why, I think: If you suspend judgment in response to reasonable disagreement, you’re going to have to suspend judgment about basically all philosophical theses. By doing so, you’re going to run yourself into quite a few problems.
Note: By ‘old-fashioned’, I meant that the view advocated in the post relies on epistemological ideas that most epistemologists reject. I sure hope that has something to do with whether it’s true. Although, maybe it doesn’t.
I’ve only been reading OB for a month or thereabouts myself, but I had a little trawl through the archives looking for interesting things.
If epistemologists-as-a-class take any particular stand on whether a general willingness to doubt all one’s beliefs is courageous, then that’s the first I’ve heard of it. But I’m not an expert on epistemology, still less on epistemologists, so maybe that wouldn’t be too surprising. Anyway: What epistemological ideas, generally rejected by epistemologists these days, are being relied on by those who say things like “It is courageous to be prepared to revise any of your ideas, if the balance of evidence turns out to be against them”?
(I expect a lot of epistemologists would insist that you probably have some ideas for which you’ll never be able to find yourself in that position, because they’re so firmly built into the structure of your brain or of the reasoning processes you’re using. But that’s quite separate from whether a willingness to doubt anything you do get good evidence against is either courageous or wise, and doesn’t seem to me to have anything much to do with what Eliezer is saying here.)
Isn’t your explanation of why Bricker dismisses “the option [I] advocate” just “If I adopt this policy then I’ll have to do a lot of judgement-suspending, and I don’t want to”? Or does he (or do you) have some specific problems in mind, that one would run into by doing this? (Being uncertain about some questions one would rather be confident about isn’t, in my view, a “problem”.)
For the avoidance of doubt: I am not proposing (though I think there are contributors here who would) that when considering any philosophical problem it’s illegitimate to have opinions of one’s own that differ from the majority view among philosophers. (Or among the very best philosophers, or whatever.) But I do think it’s a sign of something probably wrong if you find yourself in disagreement with others who (at least on the face of it) are better placed to understand the matter clearly, and don’t have anything to say in favour of your position other than that it seems right to you. Because when you do that, you’re basically appealing to the quality of your intuition, and ex hypothesi those disagreeing others have intuitions likely to be at least as good as yours.
Strangely, following this behavior leads me to attack my most “rational” beliefs. If I am holding an irrational belief I find it less likely that it will shift. The way I have to dig these out is to keep hacking away at the foundations that built the irrational beliefs. If my inner wannabe rational is using A, B, or C to defend an irrational belief, I need to start firing at A, B, C. This leads me down a path of silly beliefs until I finally find something that is likely to change. I am not arguing that this is good or correct; on the contrary, it is the source of many, many problems with my Map.
Going after the irrational beliefs directly doesn’t do anything. They are in their little walled areas and are immune to mere arguments and inquiries. I have to knock down the walls first.
Instead of halting all development until I get the walls down I let my curiosity roam in the free territories, allowing it to grow stronger. It gains ground and traction and I can already see its effect on the walls around my evil, cherished beliefs.
All this being said, I get the feeling that something is Terribly Wrong when I start poking around on the map and asking questions about the territory. These feelings are not being repressed and one day I expect to turn around and wonder how the wall was able to stand so long.
Is it a fair summary that you have theistic beliefs now, but you expect that in the future you will not have these theistic beliefs, and that your modified beliefs without theism will better correspond with reality?
If so, I would suggest as an exercise, that you consider how you would explain to a theist who expects to maintain his own theistic beliefs, why you expect to lose yours.
I don’t expect to lose mine. How could I? If I thought I would lose them I would have an area with more promise of shifting beliefs. I can imagine scenarios where I would lose my beliefs but that is completely different than predicting the loss. If I actually thought I would lose my beliefs I would be attacking them voraciously.
I assume that if I did lose my theism that it would only happen in a circumstance where the new beliefs better corresponded with reality. Essentially:
I have not always built beliefs within the confines of the Map/Territory and Beliefs are Predictors of Reality concepts
I now build beliefs with those concepts
Old beliefs not based on those concepts are still in the network
To replace an old belief with a new belief, the new belief must use the new concepts
So, if I replaced Theism with Atheism, Atheism had better match reality or I have not improved my belief making process from years ago when I was putting beliefs all over the place because it seemed like a good idea. Atheism isn’t attractive in and of itself. If it were, I would be starting at the bottom line.
What good is it to believe the Truth if you are believing incorrectly?
That being said, even though I don’t expect to lose my theism, it sure as hell* better update once Curiosity gets ahold of it. I don’t expect my beliefs to stay the same but I am unable to predict where they are going to end up.
* Hehe, that’s funny, given the context
If you don’t expect to lose it, why are you so scared of critically examining it?
Good question.
I don’t feel as if I am scared of losing it to critical examination. I more feel like critical examination isn’t going to do anything useful at this point. But I will have to think more about that and get back to you because I am catching a lot of invalid and doublethinky thoughts running through my head.
If I don’t post a response by the end of tomorrow, start pestering me because I apparently decided to avoid the topic. I don’t trust my future self enough to follow through on this.
I’d love to know what they are, if you’d be willing to catch them and write them down.
Here’s a mind dump. I don’t have a lot of time right now, but here goes.
Err… I’m not scared?
Then examine it.
No. I decided not to do that.
Why?
Hmm… what have I said on that subject…
Okay, sure that makes sense, but what if the wall is merely a creation of fear?
Okay, do I have any fear of changing away from Theism.
I want to say no...
But I have to say yes because I feel fear.
What is the fear of?
Potentials:
Fear of losing a belief
Fear of social implications
Fear of the unknown
Fear of judgement/punishment
Fear of being wrong
Fear of admitting mistakes
Let’s go down the list: Fear of losing a belief.
I don’t fear losing a belief.
A belief or any belief?
Mmm… most beliefs? I don’t know.
Can I think of a belief I would fear losing?
Can I think of a belief I don’t fear losing?
Sure, that’s easy.
Then name it.
Uh… I guess I need a list of beliefs…
My name is my name
2 + 2 = 4
The show tonight will be a success
I am getting more rational
The first two have no fear.
The third has more emotional attachment, but I don’t fear losing that belief. I’d rather the show tonight be a success, but losing that belief doesn’t scare me.
The last… well, it’s true or not. I would rather lose that belief if it were incorrect so I could change what I needed to become more rational. So no, I don’t fear losing it.
Is it more accurate to say that I fear keeping it when I shouldn’t?
Yes.
Is this a good fear?
Yes, in as much as fear can ever be good.
Can I think of a more valid fear?
We are getting off subject.
Okay. Do I fear losing Theism?
Which part?
All of it.
Uh… I don’t see how that can happen as of yet.
So? It doesn’t matter if you can imagine it. Does it scare you?
This wasn’t the original question:
Okay. But this answer matters.
Why?
Because it eliminates a potential cause for being scared of critically examining it.
Okay, what are the other causes?
Fear of losing Theism
Time wasted on other things
Fear of confirming Theism and dealing with the social consequences
Preemptive rejection of Rationality and/or Reality
Okay. So do I fear losing Theism?
I don’t know.
You don’t know or you don’t want to know?
Well, what would be the point in not wanting to know?
Meta-belief
Belief in belief
Convenient ignorance
(Ooh, Convenient Ignorance may be a good subject for a top-level post...)
Okay… so do I believe in my belief of Theism?
Sure, in the sense that I believe I believe in Theism.
Is that the same thing?
Err… no, I guess not.
So, do I believe in my belief?
What is the definition again?
Okay, no, I do believe Theism.
Do you believe in your belief of Theism?
I don’t think so, since I don’t begrudge others their disbelief.
You match the description: “It is good and virtuous and beneficial to believe God exists.”
Only in the sense that if it is true it is good to believe.
So if it wasn’t true, you wouldn’t want to believe?
Correct.
So go find out if it is true.
Yeah, okay, show me how.
Critically examine it.
I can’t.
Why not?
There is a wall. That belief isn’t accessible through critical examination.
If it were, would you examine it?
I don’t know.
You don’t know, or you don’t want to know?
What difference does it make if I can’t examine it anyway?
Because you may be able to examine it and you are lying to yourself about not being able to.
Oh.
And that’s all the time I have. I’ll try to add more tomorrow. If there is a better place to do this or people would rather me post a summary I am more than willing to comply.
EDIT: Part 2. (It isn’t as interesting.)
I find this self-dialog very interesting; in certain aspects it resembles the sort of self-dialog I teach people to use to throw off more mundane fears and mental/emotional blocks, outdated moral injunctions, etc.
There are a few places in what you’re doing where a more focused approach would be helpful, though. For example, I would define an outcome and a test procedure: what are you attempting to change, and how will you know if you changed it? This alone will help you trim distractions a bit.
Also, generally speaking, the key to getting rid of an irrational belief is to clearly identify the past negative consequences associated with disbelief in that belief. Your expectations of what will happen in the future are usually either an irrelevant speculative extrapolation by your logical mind, or a simple projection from emotional memory… And it’s the latter category that’s relevant, as long as you focus on identifying the “near”, sensory memory of the events your future prediction is based on.
In particular, you are looking for memories involving the loss of either personal status/significance, the loss of connective bonding, the loss of perceived safety, or the loss of available novelty/stimulation, (with these latter two being far less common), associated with either you or someone else failing to believe (or act upon) the belief in question.
The neurological phenomenon known as “reconsolidation” explains why access to the original memory is useful; it’s simplest to remove an emotional attachment to a thought or belief by reinterpreting the original memory that triggers the emotions, rather than to build elaborate reroutings of thought “downstream” of the source.
Once you’ve identified the specific memory you’re using to form your emotional/intuitive judgment (creating the fear), you can use further questioning to cast doubt on your original interpretation of events, consider other possible interpretations, wonder whether the situation is different, etc… and in the process, this sets up alternative lines of thought linked from the original memory, allowing you to have a different emotional probability distribution, so to speak.
I’m being necessarily terse here, as I know of at least two whole books that have been written on minor variations of this basic process: “Loving What Is” by Byron Katie, and “Recreating Your Life” by Morty Lefkoe, each proposing a different sequence and set of questions, but essentially following the same general process I’ve just described. I’ve also done workshops on my own set of variations, with slightly different scopes of applicability than either of their methods.
Either book, however, is quite good with respect to having lots of example dialogues to show how to apply their processes in practice, and either one would, I think be helpful in focusing your approach to this, or any other attempt to change an emotional belief or judgment.
Agreed. I posted Easy Predictors as an attempt to get input from the community about easy to test predictor beliefs but didn’t get much of a response. I am keeping track of smaller things that have easy turnaround times to see if it is possible to do this sort of thing informally.
This does not apply to outcomes of belief creation, however. Is there a good way to test things like that? Or am I misinterpreting your suggestion? Or… ?
The rest of your comment is interesting to me because it directly focuses on the prediction of trauma due to dropping Theism (and related subjects). I hadn’t really thought about the details of the fallout beyond key trouble spots. Is this a fair two-sentence reduction of your suggestions?
Am I close?
I mean that if you’re going to go digging around your head to change something, it would be best to have a criterion by which you can judge whether or not you’ve succeeded. Otherwise, you can rummage around in there forever. ;-)
An example criterion in this case might be “Thinking about not believing in God no longer causes an emotional reaction, as evidenced by my physical response to a specific thought about that.”
Defining a test in this way -- i.e., observing whether your (repeatable) physical reaction to a thought has changed—allows you to determine whether any particular approach has succeeded or failed. I suggested the two books I did because I have found it relatively easy to produce such repeatable, testable results with their techniques, once I got the hang of paying attention to my sensory responses to the questions asked, and ignoring my logical/abstract ones. (Since changing one’s logical beliefs isn’t the hard part.)
No, what I’m saying is that your projection is based on some specific, sensory experience(s) you had, like for example your parents speaking disparagingly about atheists, or other non-followers of your parents’ belief system. At some point, to feel threatened by being outcast, you had to learn who the outgroups were, and this learning is primarily experiential/emotional, rather than intellectual, and happens on a level that bypassed critical thought (e.g. because of your age, or because of the degree of emotion in the situation).
Identifying this experience and processing it through critical thought, weakens the emotional response triggered by the thought, then gives you the ability to think rationally about the subject again… thereby leading to potential solutions. Right now, the fear response paralyzes your critical and creative thinking, making it very hard to see what solutions may be in front of you.
IOW, your prediction of trauma comes from a past trauma—our brains don’t come with a built-in prior probability distribution for what beliefs will cause people to like or not like us. ;-) If you want to switch off the fear, you have to change the prediction, which means changing the probability data in your memory… which means accessing and reinterpreting the original sensory experience data.
In order to find this information, you focus on the sensory portion of your prediction, prior to verbalization. That is, when you ask, “What bad thing is going to happen?” refrain from verbalizing and pay attention to the images, feelings, and general impressions that arise. Then, let your mind drift back to when you first saw/felt/experienced something like that.
A recent personal example: I discovered yesterday that the reason I never gave my software projects a “1.0” version is because I was afraid to declare anything “finished” or “complete”… but the specific reason, was that when I did chores as a kid, or cleaned my room, my mother found faults and yelled at me. Emotionally, I learned that as long as someone else could possibly find a way to improve it, I was not allowed to call it “finished”, or I would be shamed (status reduction).
Until I uncovered this specific way in which I came by my emotional response, all my conscious efforts to overcome this bad habit were without effect. The emotion biased my conscious thoughts in such a way that I really and truly sincerely believed that my projects were not “finished”… because the definition I was unconsciously using for “finished” didn’t allow me to be the one who declared them so.
But having specifically identified the source of this learning, it was trivial to drop the emotional response that drove the behavior… and immediately after doing so, I realized that there were a wide variety of other areas in my life affected by this bias, that I hadn’t noticed before.
Most psychological discussion of fears tends to focus on the abstract level, i.e. obviously I was afraid to declare things finished, for “fear of criticism”. But that abstract knowledge is almost entirely useless for actually changing the feelings, and therefore removing the bias. Mostly, what such abstract knowledge does is sometimes allow people to spend a lifetime trying to work around or compensate for their feeling-driven biases, rather than actually changing them.
And that’s why I urge you to focus on specific sensory experience information in your dialoging, and treat all abstract, logical, or verbally sophisticated thoughts that arise in response to your questions as being lies, rumor, and distraction. If your logical abstract thoughts were actually in charge of your feelings, you’d already be done. Save ’em till the bias has been repaired.
The brain doesn’t need past trauma in this instance. Our brains do come with a built-in prior probability distribution for what will happen when you become an apostate, rejecting the beliefs of the tribe in which you were raised.
Ahem. We are adaptation executers, not fitness maximizers. Our brains come with a moral mechanism that’s been shaped by that probability distribution, but they don’t come with that specific prediction built in at an object level.
Instead, we simply learn what behaviors cause shaming, denunciation, etc., and this then triggers all the conscious shame/guilt/etc., as well as the idealizing, moralizing, punishing others, and punishing of non-punishers… with all of these actions being more highly-motivated in cases where the behavior is desirable to the individual involved.
Professing or failing to profess certain beliefs is just one minor case of “behavior” that can be regulated by this mechanism. I have not observed anything that suggests there is a mechanism specific to religious beliefs or even beliefs per se, distinct from other kinds of behavior. There is little difference between an injunction to say some belief is true or good, and an injunction to always say thank you, or to never brag about yourself. (Or my recently discovered injunction not to say something is finished!)
All of these are just examples of verbal behavior that can be regulated by the same mechanism. (In any case, MrHen has already pointed out that the fear is less about him stating new beliefs than it would be about acting on them.)
Anyway, it seems to me that we have only one “moral injunction” apparatus that is applied generically, and the feelings that it generates do not contain any information about being kicked out of the tribe or failure to mate, etc. Instead, the memory of a shaming event is itself the bad prediction or negative reinforcer. Adaptation execution FTW, or more like FTL in this case at least.
That isn’t the issue here. Yes, adaptation execution, Woohoo!! Obviously the probability distribution for expected consequences isn’t built in to the amygdala.
I nevertheless assert that the universal human aversion to changing our fundamental signalling beliefs is more than just Mommy Issues filtered through PCT. Human instinctive responses are sophisticated and a whole lot of them are built in, no shaming required. We’re scared of spiders, snakes and apostasy. They’re adaptations right there in the DNA.
Er, research please. Everything I’ve seen shows that even monkeys have to learn to fear snakes and spiders—it has to be triggered by observing other monkeys being afraid of them first.
Occam’s razor says you’re more likely to be wrong than I am: a general purpose mechanism for conditioning verbal behavior is more than sufficient to produce the results we observe, especially if you consider internal verbal thinking a form of verbal behavior—which it pretty plainly is.
For example, this provides a simpler mechanism for “belief in belief”, than your proposal of a distinct mechanism. It allows us to “believe”—i.e. consistently say we believe (even to ourselves on the inside), when in fact we don’t.
[edited to delete unfair rhetoric of my own]
FWIW I said nothing about PCT, nor did I say that a parent had to be the one delivering the shame. If your own personal bias about me is such that you can’t avoid engaging in this type of rhetoric, perhaps you should consider giving yourself some cooling off time before you reply.
Proslepsis!
Oops. I actually intended to delete that, because I felt it was the same sort of unfair rhetoric as I was accusing wedrifid of. Thanks for bringing it to my attention.
Now now, you can’t have points for that twice!
But it worked so well the first time! Aww.
I was quoting Steven Pinker but my copy is an audio book so I can’t give you the specific references to the study he mentions. A simple google search brings up plenty of references. (Google gives popularised summaries. Follow the links provided therein to find the actual research.)
Your claim mentions ‘everything you have seen’. Given that contradictory reports are so freely available, and given your confidence in the model you are asserting, I would have expected you to have a somewhat broader exposure to the relevant science.
Skinner had a similar ‘simple’ theory. But he was wrong. Not wrong because the mechanisms he described weren’t important parts of human psychology but wrong because he asserted them to the exclusion of all else.
I believe you can make testable behavior changes and your work with clients impresses me. I also believe you could change people to be less afraid of, for example, heights. Nevertheless, I would not necessarily believe your report on how these anxieties came into being. People can be afraid of heights even if they didn’t make a habit of falling off cliffs in their childhood.
I have a strong bias for you PJ, in all but your tendency to be quite rigidly minded when it comes to forcing reality into your simple models. I allow myself to vocally reject the parts of your comments that I disagree with because that way I will not be dismissed as a fan boy when I speak in your defense. You aren’t, for example, a quack and your advice, experience and willingness to share it are invaluable. I also, for what it is worth, find PCT to be a useful way of describing the dynamics of human behavior much of the time.
Perhaps I’m missing something, but I don’t see where it says we’re all automatically afraid of snakes. I have seen research that monkeys have an inbuilt ability to learn to fear snakes, but the mechanism has to be switched on via learning, and my understanding is that humans are the same way… unless you are arguing that individual variations in fear of snakes are purely determined by genetics.
[Edit to add: one of the first papers you linked to includes this quote: “For studies of captive primates, King did not find consistent evidence of snake fear.” And the second page goes on to describe the very “they have to learn to fear snakes” research that I previously spoke of.]
I think perhaps we are miscommunicating: I do not deny that primate brains contain snake detectors. I do deny that said detectors are unaffected by learning: humans and monkeys can and do learn which snakes to fear, or not fear.
We seem to be miscommunicating again. What mechanism is it that you think I am asserting “to the exclusion of all else”? The model I personally use contains several mechanisms, and the moral injunctions aspect I spoke of here is only one such mechanism. It is certainly not the only relevant mechanism in human behavior, even in the relatively narrow field of applicability where I use it.
I don’t do classical phobia work, actually, so I wouldn’t have a valid opinion on that one, one way or the other. ;-)
It’s certainly true that, in order to reach scientific standards, I would need to find a way to double-blindly substitute a placebo version of childhood memories for the real thing in order to prove that it’s the modification of the memory that makes it work. (I have occasionally tested single-blind placebo substitutions on other things, but not this, as I have no idea what I could substitute.)
Mainly, what I do to test alternative hypotheses regarding a change technique is to see what parts of it I can remove, without affecting the results. Whatever’s left, I assume has some meaning. (Side note: most published descriptions of actually-working self-help techniques contain superfluous steps, that, when removed, tend to make each technique sound like a mere minor variation on one of a handful of major themes… which I expect to correspond to mechanisms in the brain.)
In the instant discussion of moral injunctions, examining the memory of the learning or imprint experience appears to be indispensable, and therefore I conclude (hypothesize, if you prefer) that these memories are an integral part of the process of formation of moral injunction-regulated behavior.
FWIW, I do not claim universal applicability of my models outside their target domain. However, within that target domain, most discussions here tend to have only vaporous speculation weighing against many, many tests and observations. When someone proposes a speculative and more complex model than one I am already using, I want to see what their model can predict that mine cannot, or vice versa.
If you have a more parsimonious model for “belief in belief” than simple moral injunctions regarding spoken behavior, I’d love to see it. But since “belief in belief” cleanly falls out as a side effect of my model, I don’t see a reason to go looking for a more complicated, special-purpose belief module, just because there could be one. Should I encounter a client who needs a belief-in-belief fixed, and find that my existing model can’t fix it, then I will have reason to go looking for an updated model.
Now, when I do see a more parsimonious model here than one I’m already using, I adopt it wholeheartedly. For all that people seem to frame me as having brought PCT to Lesswrong.com, the reverse is actually true:
lesswrong is where I heard about PCT in the first place!
And I adopted it because it fit very neatly into my existing model… it was as though my model was a graph with lots of edges, but no nodes, and PCT gave me a paradigm for what I should expect “nodes” to look like. (And incorporating it into my model also subsequently allowed me to discover a new kind of “edge” that I hadn’t spotted previously.)
So actually, I don’t consider PCT to be a comprehensive model in itself either, because it lacks the “edges” that my own model contains!
Which makes it a bit frustrating any time anyone acts as though I 1) brought PCT to LW, and 2) think it’s a cure-all or even a remotely complete model of human behavior… it’s just better than its competitors, such as the aforementioned Skinnerian model you mentioned.
Great. I would appreciate it, though, if you not use boo lights like “mommy issues” and “PCT” (which sadly, seems to have become one around these parts), especially when the first is a denigratory caricature and the second not even relevant. (Moral injunctions are an “edge” in my own model, not a “node” from PCT.)
I agree on this note. I do not agree that Occam suggests that fear of snakes, spiders and heights is the sole result of learned associations. I also do not agree that aversion to fundamental belief switching is purely the result of learning from trauma.
Of course not. I never claimed they were. I only make the claim that learning is an essential component of the moral injunction mechanism. You have to learn which beliefs not to switch, at the very least!
I’ve also described a variety of apparently built-in behaviors triggered by the mechanism: proselytizing, gossip, denouncing others, punishing non-punishers, feelings of guilt, etc. These are just as much built-in mechanisms as “snake detectors”… and monkeys appear to have some of them.
What I say is that, just like the snake detectors, these mechanisms require some sort of learning in order to be activated… and that evolutionarily, applying these mechanisms to behavior would be of primary importance; applying them to beliefs would have to come later, after language.
And at that point, it’s far more parsimonious to assume evolution would reuse the same basic behavior-control mechanism, rather than implementing a new one specifically for “beliefs”… especially since, to the naive mind, “beliefs” are transparent. There’s simply “how things are”.
To an unsophisticated mind, someone who thinks things are different than “how things are” is obviously either crazy, or a member of an enemy tribe.
Not an “apostate”.
Most of the behavior mechanisms involved are there for the establishment and maintenance of tribe behavioral norms, and were later memetically co-opted by religion. I quite doubt that religion or anything we’d consider a “belief system” (i.e., a set of non-reality-linked beliefs used for signalling) were what the mechanism was meant for.
IOW, ISTM the support systems for reality-linked belief systems had to have evolved first.
This is not a claim of exclusivity of mechanism, so I don’t really know where you’re getting that from. I’m only saying that I don’t see the necessity for an independent belief-in-belief system to evolve, when the conditions that make use of it would not have arrived until well after a “group identity behavioral norms control enforcement” system was already in place, and the parsimonious assumption is that non-reality-linked beliefs would be at most a minor modification to the existing system.
No. I’m talking about apostasy. I’m not talking about someone who is crazy. I am not talking about a member of an enemy tribe. I am talking about someone from within the tribe who is, or is considering, changing their identifying beliefs to something that no longer matches the in-group belief system. This change in beliefs may be to facilitate joining a different tribe. It may be a risky play at power within the tribe. It may be to splinter off a new tribe from the current one.
Since we are talking in the context of religious beliefs the word apostate fits perfectly.
In order for any of those things to be advantageous (and thus need countermeasures), you first have to have tribes… which means you already need behavior-based signaling, not just non-reality-linked “belief” signaling.
So I still don’t see why postulating an entirely new, separate mechanism is more parsimonious than assuming (at most) a mild adaptation of the old, existing mechanisms… especially since the output behaviors don’t seem different in any important way.
Can you explain why you think a moral injunction of “Don’t say or even think bad things about the Great Spirit” is fundamentally any different from “Don’t say ‘no’, that’s rude. Say ‘jalaan’ instead,” or “Don’t eat with your left hand, that’s dirty”?
In particular, I’d like to know why you think these injunctions would need different mechanisms to carry out such behaviors as disgust at violators, talking up the injunction as an ideal to conceal one’s desire for non-compliance, etc.
In fairness, the “left hand” thing has to do with toilet hygiene pre-toilet-paper, so at one time it had actual health implications.
That’s why I brought it up—it’s in the category of “reality-based behavior norms enforcement”, which has much greater initial selection pressure (or support) than non-reality-based behavior norms enforcement.
Animals without language are capable of behavioral norms enforcement, even learned norms enforcement. It’s not parsimonious to presume that religion-like beliefs would not evolve as a subset of speech-behavior norms enforcement, in turn as a subset of general behavior norms enforcement.
[Edit: removed “enfrorcement” typo]
I guess I was just pointing out that it seemed to be in a different category (“reality-based behavior norms enforcement” is as good a name as any) than the other examples.
If I were God I would totally refactor the code for humans and make it more DRY.
You seem to be confusing “simplicity of design” with “simplicity of implementation”. Evolution finds solutions that are easily reached incrementally -- those which provide an advantage immediately, rather than requiring many interconnecting pieces to work. This makes reuse of existing machinery extremely common in evolution.
It is also improbable that any selection pressure for non-reality-based belief-system enforcement would exist, until some other sort of reality-based behavioral norms system existed first, within which pure belief signaling would then offer a further advantage.
Ergo, the path of least resistance for incremental implementation simplicity, supports the direction I have proposed: first behavioral enforcement, followed by belief enforcement using the same machinery—assuming there’s actually any difference between the two.
I could be wrong, but it’s improbable, unless you or someone else has some new information to add, or some new doubt to shed upon one of the steps in this reasoning.
I’m not and I know.
Earlier in this conversation you made the claim:
This suggested that if “everything you have seen” didn’t include the many contrary findings then either you hadn’t seen much or what you had seen was biased.
I really do not think new information will help us. Mostly because approximately 0 information is being successfully exchanged in this conversation.
I still don’t see what “contrary” findings you’re talking about, because the first paper you linked to explicitly references the part where monkeys that grow up in cages don’t learn to fear snakes. Ergo, fear of snakes must be learned to be activated, even though there appears to be machinery that biases learning in favor of associating aversion to snakes.
This supports the direction of my argument, because it shows how evolution doesn’t create a whole new “aversive response to snakes” mechanism, when it can simply add a bias to the existing machinery for learning aversive stimuli.
In the same way, I do not object to the idea that we have machinery to bias learning in favor of mouthing the same beliefs as everyone else. I simply say it’s not parsimonious to presume it’s an entirely independent mechanism.
At this point, it seems to me that perhaps this discussion has consisted entirely of “violent agreement”, i.e. both of us failing to notice that we are not actually disagreeing with each other in any significant way. I think that you have overestimated what I’m claiming: that childhood learning is an essential piece in moral and other signaling behavior, not the entirety of it… and I in turn may have misunderstood you to be arguing that an independent inbuilt mechanism is the entirety of it.
When in fact, we are both saying that both learning and inbuilt mechanisms are involved.
So, perhaps we should just agree to agree, and move on? ;-)
We differ in our beliefs on what evidence is available. I assert that it varies from ‘a bias to learn to fear snakes’ to ‘snake naive monkeys will even scream with terror and mob a hose if you throw it in with them’. This depends somewhat on which primates are the subject of the study.
It does seem, however, that our core positions are approximately compatible, which leaves us with a surprisingly pleasant conclusion.
We also disagree in how much relevance that has to the position you’ve been arguing (or at least the one I think you’ve been arguing).
I’ve seen some people claim that humans have only two inborn fears (loud noises and falling) on the basis that those are the only things that make human babies display fear responses. Which, even if true, wouldn’t necessarily mean we didn’t have instinctive fears kick in later in life!
And that’s why I don’t think any of that is actually relevant to the specific case; it’s really the specifics of the case that count.
And in the specific case of beliefs, we don’t get built-in protein coding for which beliefs we should be afraid to violate. We have to learn them, which makes learning an essential piece of the puzzle.
And from my own perspective, the fact that there’s a learned piece means that it’s the part I’m going to try to exploit first. If it can be learned, then it can be unlearned, or relearned differently.
As I said in another post, I can’t make my brain stop seeking SASS (status, affiliation, safety, and stimulation). But I can teach it to interpret different things as meaning I’ve got them.
Clearly, we can still learn such things later in life. After all, how long did it take most contributors’ brains to learn that “karma” represents a form of status, approval, or some combination thereof, and begin motivating them based on it?
That being, “We don’t need a past traumatic experience to have an aversive reaction when considering rejecting the beliefs of the tribe in which we were raised.”
I agree with the remainder of your post and, in particular, this is exactly the kind of reasoning I use when working out how to handle situations like this:
I don’t recall claiming that a traumatic experience was required. Observing an aversive event, yes. But in my experience, that event could be as little as hearing your parents talking derisively about someone who’s not living up to their norms… not too far removed, really, from seeing another monkey act afraid of a snake.
Aversion, however (in the form of a derogatory, shocked, or other emotional reaction), seems to be required in order to distinguish between matters of taste (“I can’t believe she wore white after Labor Day”) and matters of import (“I can’t believe she spoke out against the One True God… kill her now!”). We can measure how tightly a particular belief or norm is enforced by the degree of emotion used by others in response to either the actual situation, or the described situation.
So it appears that this is where we miscommunicated or misunderstood, as I interpreted you to be saying that aversive learning was not required, while you appear to have interpreted what I’m saying as having some sort of personal trauma being required that directly links to an individual belief.
It’s true that most of the beliefs I work with tend to be rooted in direct personal experience, but a small number are based on something someone said about something someone else did. Even there, though, the greater the intensity of the emotion surrounding the event (e.g. a big yelling fight or people throwing things), the greater the impact.
Like other species of monkeys, we learn to imitate what the monkeys around us do while we’re growing up; we just have language and conceptual processing capabilities that let us apply our imitation to more abstract categories of behavior than they do, and learn from events that are not physically present and happening at that moment.
Btw, the Iowa Gambling Task is an example of a related kind of unconscious learning that I’m talking about here. In it, people learn to feel fear about choosing cards from a certain deck, long before their conscious mind notices or accounts for the numerical probabilities involved. Then, their conscious minds often make up explanations which have little if any connection to the “irrational” (but accurate) feeling of fear.
So if you seem to irrationally fear something, it’s an indication that your subconscious picked up on raw probability data. And this raw probability data can’t be overridden by reasoning unless you integrate the reasoning with the specific experiences, so that a different interpretation is applied.
For example, suppose there’s someone who always looks away from you and leaves the room when you enter. You begin to think that person doesn’t like you… and then you hear they actually have a crush on you. You have the same sensory data, but a different interpretation, and your felt-response to the same thoughts is now different. Voila… memory reconsolidation, and your thoughts are now biased in a different, happier way. ;-)
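The unconscious deck-learning dynamic described above can be sketched as a toy simulation. Everything here is an illustrative assumption, not the actual task: the payoff numbers and the simple delta-rule learner are stand-ins for whatever value-tracking the brain actually does, chosen only to show how a “gut feeling” about the decks can emerge from raw payoff data before any explicit tallying.

```python
import random

# Toy sketch of Iowa-Gambling-Task-style learning (illustrative only;
# payoff schedules here are hypothetical, not the real task's).
# "Bad" decks A/B pay more per draw but carry large occasional losses,
# so their expected value is negative; "good" decks C/D are the reverse.

def draw(deck):
    if deck in ("A", "B"):   # EV = 100 - 0.5*250 = -25 per draw
        return 100 - (250 if random.random() < 0.5 else 0)
    else:                    # EV = 50 - 0.5*50 = +25 per draw
        return 50 - (50 if random.random() < 0.5 else 0)

def simulate(trials=400, alpha=0.1, eps=0.1, seed=0):
    """Track a running value for each deck via a simple delta rule,
    choosing mostly greedily -- a crude stand-in for the nonverbal
    'feeling' about the decks that builds up from raw payoff data."""
    random.seed(seed)
    value = {d: 0.0 for d in "ABCD"}
    for _ in range(trials):
        if random.random() < eps:
            deck = random.choice("ABCD")          # occasional exploration
        else:
            deck = max(value, key=value.get)      # follow the best feeling
        payoff = draw(deck)
        value[deck] += alpha * (payoff - value[deck])  # delta-rule update
    return value

values = simulate()
```

After enough trials the learned values typically come to favor the good decks, even though no step in the loop ever computes the explicit probabilities; that divergence between learned valuation and conscious accounting is the point of the analogy.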
Okay, that makes sense. My initial reaction is that the fear has less to do with people’s reactions to me and more the amount of change in the actions I take. Their responses to these new actions is more severe than their expected actions as a result of my dropping Theism.
But the more I think about it the more I think that this is just semantics. I’ll give your suggestion a shot and see what happens. I am not expecting much but we’ll see. The main criticism that I have at this point is that my “fears” are essentially predictions of behavior. I do not consider them irrational fears...
Ah, okay, this part relates to the trigger of dealing with the initial reaction to the questions being asked.
My personal solutions for this style of fear (which is separate from the fear of future social reactions, which I can understand may not have been obvious) is the same as my pattern of behavior relating to pain tolerance. It goes away if I focus on it just the right way.
By the end of the week I expect to be able to return to the topic without any overt hindrances. I take this to mean the fear is gone or I am so completely self-deluded that the magic question no longer means the same thing as it did when it was first asked. I prefer to think it is the former.
I was just giving an example. The key questions are:
What is the trigger stimulus? and
What is the repeatable, observable reaction you wish to change?
In what you said above, the trigger is “thinking about what I’d do if I were not a theist”, and you are using the word “fear” to describe the automatic reaction.
I’m saying that you should precisely identify what you mean by “fear”—does your pulse race? Palms sweat? Do you clench your teeth, feel like you’re curling into a ball, what? There are many possible physical autonomic reactions to the emotion of fear… which one are you doing automatically, without conscious intent, every time you contemplate “what I’d do if I were not a theist”?
This will serve as your test—a control condition against which any attempted change can be benchmarked. You will know you have arrived at a successful conclusion to your endeavor when the physiological reaction is extinguished—i.e., it will cease to bias your conscious thought.
I consider this a litmus test for any psychological change technique: if it can’t make an immediate change (by which I mean abrupt, rather than gradual) in a previously persistent automatic response to a thought, it’s not worth much, IMO.
Focus on what the stimulus and response are, and that will keep you from wandering into semantic questions… which operate in the verbal “far” mind, not the nonverbal “near” mind that you’re trying to tap into and fix.
This is one of those “simple, but not easy” things… not because the technique itself is hard to do, but because it’s hard to stop doing the verbal overshadowing part.
We all get so used to following our object-level thoughts, running in the emotionally-biased grooves laid down by our feeling-level systems, that the idea of ignoring the abstract thoughts to look at the grooves themselves seems utterly weird, foreign, and uncomfortable. It is, I find, the most difficult part of mindhacking to teach.
But once you get used to the idea that you simply cannot trust the output of your verbal mind while you’re trying to debug your pre-verbal biases, it gets easier. During the early stages though, it’s easy to be thinking in your verbal mind that you’re not thinking in your verbal mind, simply because you’re telling yourself that you’re not… which in hindsight should be a really obvious clue that you’re doing it wrong. ;-)
Bear in mind that your unconscious mind does not require complex verbalizations (beyond simple if-then, noun-verb constructs) to represent its thought processes. If you are trying to describe something that can’t be reduced to “(sensory experience X) is followed by (sensory experience Y)”, you are using the wrong part of your brain—i.e., not the one that actually contains the fear (or other emotional response).
Mr. Hen, I’m going to break custom and say something that may be regarded as poisoning the well. It’s my conclusion that P.J. Eby is more or less a quack trying to drum up support for his psychological services, and that (in such an important matter as this) you shouldn’t be trying to understand his jargon, let alone trying to take his advice.
His persistent trumpeting of perceptual control theory, which couples grandiose claims of precision with a complete lack of experimental support, is telling, and it’s not the only red flag I’ve seen...
Right, that’s why I recommended two books written by other people. You have brilliantly exposed my clever scheme:
Offer assistance, while recommending books by other authors
????
Profit!!!
I should note, now that the parent is at −1, that my vote does not represent disapproval of well poisoning, just disagreement in this instance. Pjeby’s practical advice seems well founded to me and I believe it will benefit those willing to receive it.
I probably agree with you when it comes to the rigid use of PCT models and of his custom jargon. I find PJ’s practical experience more useful than his abstract theorizing. I would not vote except, as you say, the matter is important. Even more so when someone’s reputation is at stake.
I am still willing to at least listen and dialog with pjeby, but I find it interesting that this comment is at +3 so quickly. Thank you for the warning (and concern). It does have an impact. (The karma swing helped.)
I value this data. Keep commenting.
I am glad. What do you find most valuable about it? Is there a way I could make it more valuable?
Okay, I finished it tonight. I should warn you that the rest of this is significantly less entertaining. It is longer and less focused/more rambling. Since I read all of your replies it was hard to keep you guys out of my head… there is one part I self-censor and a few places I drift off track. There were a few interruptions as well. They are easily marked. As it is with interruptions, things don’t pick up exactly where they left off. (At least one had extremely unfortunate timing.)
If there was a spoiler tag so I could auto-collapse this that would be great. If not, such a feature would be nifty. (Or possibly auto-collapsing comments after a certain length.)
Hopefully someone gets some use out of this. There is a single paragraph summary near the end if all you care about is the result.
It may take a few edits to find all the formatting typos. If you notice one let me know.
So… am I able to examine the wall around Theism?
Let’s start with Theism. Ignore the wall.
Okay, but first we need to decide how much of this is public.
Hmm… okay. What wouldn’t be?
Event X.
Okay… anything else?
Specific beliefs, I suppose.
Okay, start with Theism. What in Theism is private?
Should we even bother keeping this private?
Honestly, this is a waste of time. Why is Theism inaccessible?
Because of event X.
And that’s it? Is that the only thing?
Well, yeah.
So imagine event X disappearing. It is gone; event X never happened.
Okay...
Are you scared?
No.
Why not?
Well, event X is why my emotions are even here… without X, why would I fear anything?
Okay. Imagine event X and still believing in Theism. Is it possible?
Huh. Okay, that will take a while.
No rush.
...
No. It doesn’t make sense.
Why not?
Undoing event X precludes abandoning Theism.
No it doesn’t; it is just the most likely result of Theism if you undo event X.
Well, okay, sure, but if I undo X and keep Theism...
It would suck.
It would suck.
So… what does that say about Theism?
Nothing. It says something about X.
Bah, we are way off topic.
And we did this once.
Okay, starting again, why is Theism inaccessible?
Man, this sucks. I don’t see how we can do this without talking about X.
X doesn’t matter.
Yes it does. And no one is going to want to read this.
So? This isn’t for them. It’s for you and they asked for it.
They didn’t ask for this-
Anyway, this is irrelevant. Stay on topic.
The topic is X!
No it isn’t. The topic is Theism.
...
I don’t even know how to explain X-
Theism!
Grr...
If you tried, right now, to critically examine Theism without undoing X, what would happen?
[interruption from wife]
We still aren’t getting anywhere. If you tried, right now, to critically examine Theism without undoing X, what would happen?
*sigh*
Okay, are all areas of Theism inaccessible?
No.
Name an area that is accessible.
The omni- attributes.
So critically examine those.
Here?
Well, no. But does it make you scared?
No.
Have you critically examined them?
Yeah. But not a whole lot.
Why not?
Because they don’t matter that much.
Matter… how?
My behaviors won’t change.
Why not?
Because I don’t treat God as if he has any of those attributes.
Why not?
Because they failed the critical examination.
Okay… so how much have you examined?
Enough to know I cannot proceed unless I deal with X.
Argh!
Look, it’s not my fault. You know why.
Yeah, but how do we tell them that?
We don’t. Why do we need to tell them anything?
...
No, seriously, we don’t need to tell them anything. And none of this has anything to do with fearing critical examination.
So do you fear critical examination?
Not the examination I have done.
Can you do more?
Absolutely.
Then why don’t you?
Because my tools suck. I want better tools.
And when you get better tools?
Then I work on the framework of belief.
And then?
I make sure the new beliefs coming in are solid and useful.
And then?
Then I look at my old beliefs.
Which ones?
The ones affecting everyday behavior; then the ones affecting monthly choices, yearly, and so on.
Why not start with the bigger ones?
Because they are built on smaller ones.
Really?
Uh, yes?
How do you know?
Where is this going?
Answer the question.
Hmm...
...
Okay, something has to drive the bigger choices.
Like Theism.
No, Theism is a bigger belief.
That’s what I meant.
Oh, okay. Yeah, like Theism. Theism is something that affects a larger scope of actions than others.
So why focus on the small stuff?
Because the small stuff is easier to attach to Reality.
Okay, that makes sense. Give me an example.
Assuming my tools work well, the way I spend my daily time.
Sure, that makes sense. And then?
The subjects to spend the time on.
Okay.
I suspect that Theism will hit at this point.
Right. And are you scared of that?
No.
Why not?
Because it is so far out I cannot predict anything about it. Even if I feared losing Theism, I have no reason to think I will drop Theism from critical examination.
Okay. But do you fear losing Theism?
Well, sure. What was the original question?
[interrupted by the show]
Okay, so I fear losing Theism but the remaining question is whether I am scared of critically examining it.
First, do I even accept the first part of this question: “If you don’t expect to lose it...”
Yes, I said that clearly.
So if you were to lose it, would it be through critical examination?
Yeah, probably.
So critical examination is the most likely way to lose Theism.
Yes.
And I fear losing Theism.
In the sense that I fear not having it.
So the most likely path to this end is through critical examination.
Yes.
Does that make you fear critical examination?
No. If anything, I fear what it might do.
Would that prevent you from the examination?
If the fear was strong enough… sure.
Is it strong enough?
No. I have critically examined areas of my Theism.
But those really weren’t core aspects. They would never attack Theism, only particular beliefs inside of Theism.
Which brings us back to the wall around Theism.
Right, so we are back in the same place.
Well, what have we learned?
I fear not having Theism for various reasons
I am not ready to critically examine Theism
—Event X
—Higher priorities (better tools, incoming beliefs, beliefs that are “closer” to Reality)
Theism will eventually be critically examined
When this happens, I do not expect Theism to fall
If Theism is untrue I will want to know it is untrue
I still fear not having Theism even if it is untrue
The fear has little to do with belief and more to do with the fallout of not believing
[interrupted]
So the direct answer to the question is that I am not critically examining Theism because (a) I don’t expect significant progress and (b) doing other things will likely improve my ability to critically examine things which will eventually be useful with Theism.
Follow-up questions for a future time:
Completing analysis of the list of potential fears. I only looked at one.
Looking at the list of reasons I might fear critical examination. I ended up taking a completely different route to the conclusion… so most of this was extraneous.
Convenient Ignorance is still an interesting topic. Is there a full post here?
How does Belief in Belief work with beliefs that are self-referential and dictate morality? Should it be a red flag when a belief includes the clause, “And believing this belief is good”? Hunches say yes.
I didn’t really define the wall around Theism.
At some point, I will probably need to explain and define Event X. I expect this to be troublesome and slightly awkward. I apologize if this vagueness annoys you; I do not apologize for being vague.
This question was never directly answered: “If you tried, right now, to critically examine Theism without undoing X, what would happen?” It would be good to revisit.
The actual priority list could use a good examination.
This sentence may be touching on a bigger topic: “Because the small stuff is easier to attach to Reality.” Something connected to that would provide enough material for a full post. It is likely someone has already posted it… so start with a search.
In the meantime, whilst not examining Theism, what is the correct way to act?
One thing that occurs to me while reading this is that for most people, their religion consists nearly entirely of cached beliefs. Things they believe because they were told, not because they derived the result themselves.
This makes any truly critical examination of one’s religious beliefs rather a daunting task. To start with, you’re going to have to recompute potentially thousands of years of received wisdom for yourself. That’s… a lot of work. There’s a reason we cache beliefs; otherwise it would take a lifetime just to be minimally educated.
And then there’s the bigger one that I think most of the other commenters have glossed over. Recomputing your religion into self-consistency can be scary because if you recognize that previous generations were no less intelligent and no less searching for truth than you are, then there is a not-insignificant chance that your recalculations will introduce more errors than they correct. That would be bad.
On the other hand, if nobody ever grinds through all the equations again, then any bad values that slipped in somewhere never get caught. At some point the balance of old mistakes vs. potential new mistakes tips in your favor.
My personal strategy is that, when there’s a contradiction, recompute until it is resolved without creating any new ones. If you can’t, flag it as a hole in your model and keep your eyes open for a better fit.
Obviously, any religion that prohibits honest questioning should be laughed off the face of the Earth.
Why?
How has this affected your thinking?
There are impacts from not having Theism. The most obvious are social. Most of the others are easy enough to deal with. There is also a really, really vague one that I haven’t figured out how to talk about yet.
Sorry there isn’t more information being offered here.
I don’t understand your second question.
Would your belief in theism be different if you did not have a fear of losing the belief even if not true? To what extent does this fear compete with your desire for accurate beliefs?
Ah, okay. Bullet point answers:
IF Theism was not true
AND there was no fear of losing Theism if Theism was not true,
THEN I would drop Theism as soon as I convinced myself it wasn’t true.
Other variations on the above format:
IF Theism was not true
AND there was fear of losing Theism if Theism was not true,
THEN I would drop Theism as soon as I convinced myself it wasn’t true
ONLY IF I overcame my fear of losing Theism.
I would expect convincing myself Theism isn’t true would be harder than overcoming my fear of losing Theism. This leads into your question:
You are implying a scenario more like the following:
IF Theism was not true
AND there was fear of losing Theism if Theism was not true,
THEN I would convince myself Theism wasn’t true
ONLY IF I overcame my fear of losing Theism.
Which is a subtle but important difference. I like to think that my fear wouldn’t cloud my ability to perceive the truth… but I don’t actually know how to verify that. Signs seem to point the exact opposite way, in fact.
I suppose one solution would be to lessen my fear of losing Theism, which seems to be the route pjeby suggested in another comment.
I find this really interesting to read and would love to see more, although it’s kind of carriage-return intensive and might be better hosted offsite somewhere. I can offer space if you don’t have a place to put it.
Me too.
I just posted it. Thanks for the offer, though.
All that sounds like natural rambling free-association to me, and more like fear of double-think than any actual double-think.
Are you reluctant to “critically examine” your beliefs because it just sounds like a lot of work? (Counselors will say, ‘let’s work on this,’ and then an hour later, when you feel like an exposed mess of emotional goo, they’ll say, ‘OK, see you next week.’)
Given that you’re comfortable with your beliefs, perhaps you’re reluctant to expose your beliefs because it’ll be like throwing them to the wolves. If not indiscriminate slaughter (no offense to the more militant atheists here), it’ll still be something like 12 to 1.
Well, if you ever decide to do this, if it helps, I offer to help you defend your views to the extent that I can competently do so.
For me, free association clears up doublethink. If I write my thought into a sentence, the sentence has a strict meaning in the English language. I can write the other side of doublethink as a second sentence and let them duke it out over a conversation with myself.
Also, by the time I had responded with the rambling I had mostly sorted out the initial emotional response. I was very surprised that I had one. (It wasn’t big; but any at all is a BIG RED FLAG.)
No. At least, not how I think of “a lot of work.” I certainly avoid some topics because they are a lot of work but this isn’t one of them.
Nah. I am reluctant to expose my beliefs because that is a lot of work. I am too verbose for my own good and have a hard time not responding to every single comment or question.
Hmm… how is this different than the clever arguer in The Bottom Line? Honestly, I won’t need help defending my views. If I cannot defend them, why should you? The goal in talking about my beliefs wouldn’t be defense and offense oriented (at least, not for me). Seeking the truth is not (or shouldn’t be) a war.
OK, you don’t sound afraid or like you’ll want help.
You seem more self-possessed than I am. (This could be related to gender.) When I was arguing for theism, I felt like the inferential distance was great and that there were too many angles to parry at once. I would have been grateful for an interpreter/mediator.
I was most uncomfortable when people speculated about my motives, often with motives I couldn’t relate to. I felt more flubbed by identity issues than atheist arguments (which I find I like well enough when they’re relevant).
I think there is one, out there. A war of world views. LW is a sandbox where we can see how different angles and themes will play out once physical materialism becomes more mainstream.
My impression of the origin of due process is that the designers of the legal system were well aware of “the clever arguer” and thought the only remedy was to even the playing field.
I wouldn’t sell your gender short. I have been doing this sort of arguing for a long time so I kind of know what to expect. The idea of an interpreter is actually significantly more interesting to me than a defender. Perhaps I misunderstood your original intent.
I can understand that. I think I am approaching this from a different angle than you did; we’ll see how it goes. :)
I think people are fighting each other and they keep trying to dig up a war so they can tell other people to fight for them. Christianity loves to talk about this war of ideas. I am not convinced such a war needs to exist and have decided not to partake. When it comes to the bottom line, I choose what I believe. I take the evidence and come to a conclusion and move forward. The war just isn’t interesting to me.
Near the end of What Evidence Filtered Evidence?, EY says something similar.
My impressions of the community so far have been good. The vague confession didn’t really draw a lot of heat and people were very kind when asking for more details. So all signs point to good things ahead.
That being said, I would still love your input when the time comes. I just don’t want you to feel like you have to pick sides. I’m not picking a side and it’ll be my beliefs on the table.
“Theism” is something of a catch-all term that can include lots of different things. I think that it is indeed possible that our universe has a Creator, but I’ll bet my immortal soul that the God of Abraham isn’t it. ;)
Maybe you could simply pin down your beliefs instead of “critically examining” them?
It’s poor form to bet things you can’t pony up if you lose!
For me, “pinning down” means fine tuning definitions. This and “critically examining” use the same toolset. I essentially see them as one and the same. If I am mucking around and bothering with those pesky definitions I am going to see the inconsistencies.
I can describe how I act and that is how I generally translate my old belief system. Rationality encourages beliefs as predictors and I am taking new forming beliefs and entering them that way. The data hasn’t come back from those beliefs yet but I am eagerly awaiting.
And even well beneath the powers of the creator of a universe, a Type 2+ civilization should be able to seed new planets with life, which is one of the more important powers of the monotheistic God.
I’d love to read more, and I’m especially curious what it would mean to you to no longer identify as a theist, and how that would feel. I’m also curious about the last two:
Thanks for posting this!
It is a complicated feeling. It is hard to adequately explain without delving into detail explanations of (a) my particular beliefs (b) the society of friends and family I inhabit and (c) a heck of a lot of personal history. I am not ready to deal with all of that here. I suspect bits and pieces will leak out.
The one thing I will say now is that it would completely wreck almost every aspect of my life. I have everything invested in this.
Since, at this point, I don’t have much to think that critical examination will lead to me dropping Theism, it is still possible that it will strengthen Theism. I don’t think it is more likely but I expect it would provoke a stronger reaction than my confession did.
If I really were scared enough to dodge critical examination I would be smart enough to drop anything that threatened a critical examination. As in, it wouldn’t be given a foothold. I have enough power over my beliefs to choose what I want to believe. Right now, Rationality has my attention. If it scared me enough I would just leave and never return.
This hasn’t happened and I do not expect it to happen. But if the situation were that dire, I would want to hold off on the critical examination until it was less scary.
For that to even make sense you have to give me the benefit of the doubt in terms of how I argue with myself. I don’t expect it to translate well into other person’s belief system. Also, it is very late… so… I don’t promise anything and reserve the right to recant tomorrow. :)
I should also mention that, judging from the stories I’ve heard, it’s a lot easier to talk about your doubts with your spouse when they’re doubts. I presume you have a wife and kids and parents and siblings and local community who are all deeply religious? I don’t know about the others, but the sooner you start talking to your wife about your doubts, the more likely you are to stay together as you go down whatever path you go down.
This is good advice. Thank you.
In that case, you probably shouldn’t think about whether or not there is a God just now.
Rather, you should first think about what you’re going to do if you conclude there isn’t. In your case, the line of retreat is rather more literal for you than it is for other people. Who would you bring in on your thinking before it had reached a conclusion, to let them know you’re really wondering? What would you do to make the best of the situation, given how much you have invested? You’ll find it very hard to think about this rationally until you can really face the thought of it going either way.
If it came to the point where I began expecting to drop Theism I would tell my wife, my brother, and probably a good friend of mine in Minnesota. My wife because it affects her, my brother because he would probably have advice on how to deal with switching, and my friend because he has always had good advice before. And he’s the one I feel I could actually talk to about the subject.
Given the option, I would leave my current city and go back to school. I suppose everything else revolves around the conversation I have with my wife. I would prefer to stay together but I honestly don’t know what would happen. I don’t see us splitting up, but I am not confident in this.
As for personal and non-social impacts, I would start over again. I would take the beliefs I have built in the journey to dropping Theism and continue the process. I expect I would continue acting relatively the same but with an attempt at slowly replacing all of the habits and rituals I have grown accustomed to having.
Thanks for thinking about this and answering. I hope that you’re talking to these people now about the overall journey that you’re on with respect to rationality, whether or not you raise the specific subject of theism. I think you’ll have an easier conversation if you talk to them about the journey as it’s going on than if you suddenly find yourself having arrived at somewhere that was not where you set off before those closest to you knew you were even setting out.
Actually, I find it hard to talk about rationality because everyone I would want to talk to about it would think it was completely obvious. I talk about biases and the like, and particular examples, but the basic concepts tend to get responses like, “Well… yeah? And?”
EDIT: Note that this is somewhat of a self-fulfilling prophecy. The people I would want to talk to about it are the most likely to have already thought about these subjects.
How about talking about the solution to determinism versus free will, or “if a tree falls in the woods does it make a sound?”
I had a snark here that I thought was amusing for like 2 minutes, and then I started to feel guilty. Taken out.
The solution? Everyone would get the concept of the topics involved. Most of them would get bored and move the conversation along.
Yeah. This is a hard mental exercise… and this area of thought experiments encounters a lot of resistance. Something is actively blocking this area and that is Very Bad. I have a hunch about what it is but don’t know how to explain it well.
Hmm...
But don’t delay. Whichever conclusion you come to, I can’t imagine you would ever turn around and think “I’m really glad I spent so long putting off really thinking hard about that”. You won’t enjoy it, and you’re unlikely to see it as time well spent.
I’m not saying rush to a conclusion; I am saying rush to thought.
Agreed. Today is not the day, however, due to other circumstances. If I don’t have at least two plausible options for both of the following questions by Saturday, February 6th feel free to pester me.
Answer is up, one day late.
Wow. Then it’s not at all surprising you feel this way. You’ve left out a lot of details of your life, so I can’t really comment on specifics (though please feel free to share them if you’re ever ready to do so here). But given that, it’s going to be almost impossible for you to change that belief.
I’m very confident that a detailed, unbiased examination of your theistic beliefs would reveal that there’s no evidence for them and you hold them for social reasons. Do you agree? That being the case, you may not want to try to engage in this kind of examination right now. It sounds like you need time to think about what you really want in your life, and what kind of life you want to lead, independent of your beliefs about theism. Do you want to uproot your life right now?
Blueberry, the human species has got to do this sometime. Please don’t get in the way.
I agree that humanity needs to do this sometime, and I agree that MrHen needs to do this sometime.
I don’t know enough about MrHen’s situation to know whether it’s in his best interest to suddenly uproot himself from every aspect of his life right now, or whether there are ways of creating support networks and easing the transition that would help him. I’m not saying he should hide from the truth; I’m saying he may need to lay the groundwork for finding the truth first.
AFAIK these things just get more difficult the longer you put them off. This is the usual rule, and it’s also the usual rule that people are heavily motivated on a cognitive level to find excuses to let things slide. Someone wrote about this very eloquently—I’m not sure who, possibly Tim Ferris or Robert Greene—with the notion that “hoping” things will get better isn’t really hope so much as a form of passivity, motivated more by fear of action and change than any positive hope. Any delay of this sort should have a definite deadline attached to it.
I’ve found a definite (and not necessarily complete) list of steps to be useful in the absence of a deadline, and I think that’s what Blueberry was getting at: MrHen might be best served by adding things to his to-do list that answer the question “what things do I need to do to get my personal life arranged in such a way that I would be able to be ‘out’ as an atheist without major repercussions?”
Can I have that list? Can you talk to the 12-year-old AdeleneDawner if she still has it?
Are you talking about separate magisteria or something? How does one get correct beliefs without examining evidence and understanding arguments?
No. This is not separate magisteria.
Okay, I guess the first point is that “belief” for a majority of my belief network was not Predictor-based. It is Action-based. The concept of separate magisteria applies to a Predictor-based belief system such as the Map/Territory concept promoted here. An Action-based belief system has trouble with the concepts of magisteria.
The whole system is ridiculously complicated because I never bothered to sit down and sort it out. Theism is behind a wall of beliefs built on a system completely incompatible with Predictor-based beliefs. “Incompatible,” here, means “untranslatable.”
If I am not making sense I can try another path of explanation. I am typing up a full explanation now, actually, so… yeah.
He may be scared of losing it but not expect that, just as someone can be scared of ghosts without actually expecting to meet one.
No, you should be as ready to drop it to 69% as to raise it to ~70.98%. With rounding, obviously, the above isn’t numerically wrong, but that’s not my objection: it encourages the reader to think of probability updates in percentages as additive, which is wrong.
(edited: fixed my wrong numbers...)
Yes, yes, yes, yes, yes. Speaking as someone who keeps making this mistake despite knowing better, I appreciate the attempt to discourage me from it.
I take your point about ratios but there is a bigger issue. In many cases the expected change in probability is not symmetrical or uniform.
From the article on conservation of expected evidence: “If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction.”
Say you believed that the Sun went around the Earth. A given new piece of evidence will likely not change your probability much at all. But there is a slight chance that a new piece of evidence will radically change your probability. It is your weighted probabilities of a change in probability that need to balance.
Example, many people who lost their religious faith suddenly came upon a piece of evidence that caused a drastic change in their probability estimate for the existence of God. [in part this may be due to biases such as ignoring contrary evidence, but not entirely.]
Imagine my wife buys a lottery ticket. My estimate of her chance of winning is very low. My wife runs into the room looking excited and brandishing the ticket; my estimate suddenly goes up a lot. Then when I check the numbers it goes up a lot more. On the other hand, if I see the ticket crumpled up in the garbage bin, my estimate goes down only a little (from 1/1000000 to 1/1000000000).
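This asymmetry is exactly what conservation of expected evidence predicts: a worked sketch, with made-up likelihoods (the 0.99 and 0.01 numbers are illustrative assumptions, not from the comment), shows a large but unlikely update in one direction balanced by a small, more probable update in the other, so the expected posterior equals the prior.

```python
# Conservation of expected evidence, lottery-ticket version.
# Likelihoods below are illustrative assumptions.
prior = 1e-6                  # P(ticket wins)
p_excited_win = 0.99          # P(wife looks excited | win)
p_excited_lose = 0.01         # P(wife looks excited | lose)

# Total probability of observing the excitement.
p_excited = prior * p_excited_win + (1 - prior) * p_excited_lose

# Bayes' rule for each possible observation:
post_if_excited = prior * p_excited_win / p_excited            # big jump up
post_if_not = prior * (1 - p_excited_win) / (1 - p_excited)    # small drop

# The observation-weighted average of the posteriors recovers the prior.
expected_post = p_excited * post_if_excited + (1 - p_excited) * post_if_not
print(abs(expected_post - prior) < 1e-12)  # True
```

With these numbers the excited observation multiplies the probability by roughly a hundred, while the calm observation barely lowers it; the balance holds because the big update is correspondingly rare.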
Your numbers are still wrong I’m afraid—guessing you mean ~70.98%...
Yes, fixed.
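The ratio point is easy to check numerically. A minimal sketch (the helper function and variable names are mine): a symmetric update multiplies or divides the odds by the same Bayes factor, rather than adding or subtracting percentage points, which is where the ~70.98% comes from.

```python
def update_odds(p, bayes_factor):
    """Update probability p by multiplying its odds by bayes_factor."""
    odds = p / (1 - p)
    new_odds = odds * bayes_factor
    return new_odds / (1 + new_odds)

p = 0.70
# The Bayes factor that would drop 70% down to 69%...
factor = (p / (1 - p)) / (0.69 / 0.31)
down = update_odds(p, 1 / factor)
# ...applied upward instead gives ~70.98%, not 71%.
up = update_odds(p, factor)
print(round(down, 4), round(up, 4))  # 0.69 0.7098
```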
Only when I’m planning for things that are contingent upon facts related to the physical world.
Hey, sorry if someone in the comments already addressed this but where does Tarski actually pose this litany?
I always found this exemplified in the concept of the “empty cup” from varied middle eastern philosophies. A “full cup” is a heart that believes it already knows.
This post focuses on the internal cultivation of curiosity (which it does fantastically, and is why this post is so widely loved). But as I read it, my thoughts move more naturally toward policy- or project-level changes. Some potential examples:
During the course of my work, if I’m not spending 5% of my efforts “just finding things out because I want to know the answer” then I should take this as a red flag that I need to change something in order to allow my natural curiosity to be present and helping.
Every day, ask myself “what’s something about the world I really just want to know by the end of the day” and sit with it until my curiosity overcomes me and picks something. I might be busy, and so I don’t commit to putting in the work to find out, but I should at least know what it is I’m curious about.
During the course of my personal life, also keep track of how regularly I’m pursuing knowledge for its own sake. Is it too low? Then it’s time to find a way to get it higher.
I don’t know how well I’m doing on this at the minute. I’d like to reflect on it more.