This does not apply to outcomes of belief creation, however. Is there a good way to test things like that? Or am I misinterpreting your suggestion? Or… ?
I mean that if you’re going to go digging around your head to change something, it would be best to have a criterion by which you can judge whether or not you’ve succeeded. Otherwise, you can rummage around in there forever. ;-)
An example criterion in this case might be “Thinking about not believing in God no longer causes an emotional reaction, as evidenced by my physical response to a specific thought about that.”
Defining a test in this way—i.e., observing whether your (repeatable) physical reaction to a thought has changed—allows you to determine whether any particular approach has succeeded or failed. I suggested the two books I did because I have found it relatively easy to produce such repeatable, testable results with their techniques, once I got the hang of paying attention to my sensory responses to the questions asked, and ignoring my logical/abstract ones. (Since changing one’s logical beliefs isn’t the hard part.)
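To make that concrete, here is a minimal sketch in Python of what logging such a test might look like. Everything in it (the trial data, the particular reaction tracked) is invented for illustration, not a prescribed procedure:

```python
# Minimal sketch: treat "did the change work?" as a before/after comparison
# of a repeatable physical reaction to a fixed trigger thought.
# All data below is hypothetical.

def reaction_rate(trials):
    """Fraction of trials in which the physical reaction occurred."""
    return sum(trials) / len(trials)

# 1 = noticeable physical reaction (e.g., chest tightness) when deliberately
# thinking the trigger thought; 0 = no reaction.
before = [1, 1, 1, 0, 1, 1]   # baseline: reacts almost every time
after  = [0, 0, 0, 0, 1, 0]   # after applying a technique

print(f"before: {reaction_rate(before):.0%}  after: {reaction_rate(after):.0%}")
# Success criterion: an abrupt drop in the rate, not a vague "I feel better."
```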
The rest of your comment is interesting to me because it directly focuses on the prediction of trauma due to dropping Theism (and related subjects). I hadn’t really thought about the details of the fallout beyond key trouble spots. Is this a fair two-sentence reduction of your suggestions?
Looking at similar past events that carry the same emotional trauma due to dropped beliefs can give me the ability to question the validity of my fear of the future by comparing and contrasting the differences. In addition, this process may reveal a solution to the projected trauma by preventing it from happening or weakening its impact.
No, what I’m saying is that your projection is based on some specific, sensory experience(s) you had, like for example your parents speaking disparagingly about atheists, or other non-followers of your parents’ belief system. At some point, to feel threatened by being outcast, you had to learn who the outgroups were, and this learning is primarily experiential/emotional, rather than intellectual, and happens on a level that bypasses critical thought (e.g. because of your age, or because of the degree of emotion in the situation).
Identifying this experience and processing it through critical thought weakens the emotional response triggered by the thought, which then gives you the ability to think rationally about the subject again… thereby leading to potential solutions. Right now, the fear response paralyzes your critical and creative thinking, making it very hard to see what solutions may be in front of you.
IOW, your prediction of trauma comes from a past trauma—our brains don’t come with a built-in prior probability distribution for what beliefs will cause people to like or not like us. ;-) If you want to switch off the fear, you have to change the prediction, which means changing the probability data in your memory… which means accessing and reinterpreting the original sensory experience data.
In order to find this information, you focus on the sensory portion of your prediction, prior to verbalization. That is, when you ask, “What bad thing is going to happen?” refrain from verbalizing and pay attention to the images, feelings, and general impressions that arise. Then, let your mind drift back to when you first saw/felt/experienced something like that.
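As a toy model of the “probability data in your memory” idea (my illustration, not a claim about how the brain literally computes): treat the fear as a Beta-Binomial estimate over remembered events, and watch what reinterpreting a single memory does to it. All numbers are invented:

```python
# Toy Beta-Binomial sketch of "changing the probability data in your memory".
# Hypothetical numbers; an illustration of the idea, not a brain model.

def posterior_mean(bad, ok, prior_a=1, prior_b=1):
    """Mean of Beta(prior_a + bad, prior_b + ok): estimated P(rejection)."""
    return (prior_a + bad) / (prior_a + bad + prior_b + ok)

# Memory as data: 4 events read as "people like us reject outsiders", 1 not.
print(posterior_mean(4, 1))   # ~0.71: strong expected rejection, felt as fear

# Reinterpret one event (say, "Dad was mocking rudeness, not disbelief"):
# the same raw experience, recoded, yields a different prediction.
print(posterior_mean(3, 2))   # ~0.57: the estimate driving the fear drops
```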
A recent personal example: I discovered yesterday that the reason I never gave my software projects a “1.0” version is that I was afraid to declare anything “finished” or “complete”… but the specific reason was that when I did chores as a kid, or cleaned my room, my mother found faults and yelled at me. Emotionally, I learned that as long as someone else could possibly find a way to improve it, I was not allowed to call it “finished”, or I would be shamed (status reduction).
Until I uncovered this specific way in which I came by my emotional response, all my conscious efforts to overcome this bad habit were without effect. The emotion biased my conscious thoughts in such a way that I really and truly sincerely believed that my projects were not “finished”… because the definition I was unconsciously using for “finished” didn’t allow me to be the one who declared them so.
But having specifically identified the source of this learning, it was trivial to drop the emotional response that drove the behavior… and immediately after doing so, I realized that there were a wide variety of other areas in my life affected by this bias that I hadn’t noticed before.
Most psychological discussion of fears tends to focus on the abstract level, i.e. obviously I was afraid to declare things finished, for “fear of criticism”. But that abstract knowledge is almost entirely useless for actually changing the feelings, and therefore removing the bias. Mostly, what such abstract knowledge does is sometimes allow people to spend a lifetime trying to work around or compensate for their feeling-driven biases, rather than actually changing them.
And that’s why I urge you to focus on specific sensory experience information in your dialoging, and treat all abstract, logical, or verbally sophisticated thoughts that arise in response to your questions as being lies, rumor, and distraction. If your logical abstract thoughts were actually in charge of your feelings, you’d already be done. Save ’em till the bias has been repaired.
IOW, your prediction of trauma comes from a past trauma—our brains don’t come with a built-in prior probability distribution for what beliefs will cause people to like or not like us.
The brain doesn’t need past trauma in this instance. Our brains do come with a built-in prior probability distribution for what will happen when you become an apostate, rejecting the beliefs of the tribe in which you were raised.
Our brains do come with a built-in prior probability distribution for what will happen when you become an apostate, rejecting the beliefs of the tribe in which you were raised.
Ahem. We are adaptation executers, not fitness maximizers. Our brains come with a moral mechanism that’s been shaped by that probability distribution, but they don’t come with that specific prediction built in at an object level.
Instead, we simply learn what behaviors cause shaming, denunciation, etc., and this then triggers all the conscious shame/guilt/etc., as well as the idealizing, moralizing, punishing others, and punishing of non-punishers… with all of these actions being more highly motivated in cases where the behavior is desirable to the individual involved.
Professing or failing to profess certain beliefs is just one minor case of “behavior” that can be regulated by this mechanism. I have not observed anything that suggests there is a mechanism specific to religious beliefs or even beliefs per se, distinct from other kinds of behavior. There is little difference between an injunction to say some belief is true or good, and an injunction to always say thank you, or to never brag about yourself. (Or my recently discovered injunction not to say something is finished!)
All of these are just examples of verbal behavior that can be regulated by the same mechanism. (In any case, MrHen has already pointed out that the fear is less about him stating new beliefs than it would be about acting on them.)
Anyway, it seems to me that we have only one “moral injunction” apparatus that is applied generically, and the feelings that it generates do not contain any information about being kicked out of the tribe or failure to mate, etc. Instead, the memory of a shaming event is itself the bad prediction or negative reinforcer. Adaptation execution FTW, or more like FTL in this case at least.
Adaptation execution FTW, or more like FTL in this case at least.
That isn’t the issue here. Yes, adaptation execution, Woohoo!! Obviously the probability distribution for expected consequences isn’t built in to the amygdala.
I nevertheless assert that the universal human aversion to changing our fundamental signalling beliefs is more than just Mommy Issues filtered through PCT. Human instinctive responses are sophisticated and a whole lot of them are built in, no shaming required. We’re scared of spiders, snakes and apostasy. They’re adaptations right there in the DNA.
Er, research please. Everything I’ve seen shows that even monkeys have to learn to fear snakes and spiders—it has to be triggered by observing other monkeys being afraid of them first.
I nevertheless assert that the universal human aversion to changing our fundamental signalling beliefs is more than just
Occam’s razor says you’re more likely to be wrong than I am: a general purpose mechanism for conditioning verbal behavior is more than sufficient to produce the results we observe, especially if you consider internal verbal thinking a form of verbal behavior—which it pretty plainly is.
For example, this provides a simpler mechanism for “belief in belief”, than your proposal of a distinct mechanism. It allows us to “believe”—i.e. consistently say we believe (even to ourselves on the inside), when in fact we don’t.
[edited to delete unfair rhetoric of my own]
Mommy Issues filtered through PCT.
FWIW I said nothing about PCT, nor did I say that a parent had to be the one delivering the shame. If your own personal bias about me is such that you can’t avoid engaging in this type of rhetoric, perhaps you should consider giving yourself some cooling off time before you reply.
I’ll gently ignore the part where I’ve logged a lot more time with a lot more people, working on this type of belief than you have, making testable behavior changes.
Proslepsis!

Oops. I actually intended to delete that, because I felt it was the same sort of unfair rhetoric as I was accusing wedrifid of. Thanks for bringing it to my attention.

Now now, you can’t have points for that twice!

But it worked so well the first time! Aww.
Er, research please. Everything I’ve seen shows that even monkeys have to learn to fear snakes and spiders—it has to be triggered by observing other monkeys being afraid of them first.
I was quoting Steven Pinker but my copy is an audio book so I can’t give you the specific references to the study he mentions. A simple google search brings up plenty of references. (Google gives popularised summaries. Follow the links provided therein to find the actual research.)
Your claim mentions ‘everything you have seen’. Given that contradictory reports are so freely available and your confidence in the model you are asserting, I would have expected you to have a somewhat broader exposure to the relevant science.
For example, this provides a simpler mechanism for “belief in belief”, than your proposal of a distinct mechanism. It allows us to “believe”—i.e. consistently say we believe (even to ourselves on the inside), when in fact we don’t.
Skinner had a similar ‘simple’ theory. But he was wrong. Not wrong because the mechanisms he described weren’t important parts of human psychology but wrong because he asserted them to the exclusion of all else.
I’ll gently ignore the part where I’ve logged a lot more time with a lot more people, working on this type of belief than you have, making testable behavior changes.
I believe you can make testable behavior changes and your work with clients impresses me. I also believe you could change people to be less afraid of, for example, heights. Nevertheless, I would not necessarily believe your report on how these anxieties came into being. People can be afraid of heights even if they didn’t make a habit of falling off cliffs in their childhood.
If your own personal bias about me is such that you can’t avoid engaging in this type of rhetoric, perhaps you should consider giving yourself some cooling off time before you reply.
I have a strong bias for you, PJ, in all but your tendency to be quite rigidly minded when it comes to forcing reality into your simple models. I allow myself to vocally reject the parts of your comments that I disagree with because that way I will not be dismissed as a fan boy when I speak in your defense. You aren’t, for example, a quack and your advice, experience and willingness to share it are invaluable. I also, for what it is worth, find PCT to be a useful way of describing the dynamics of human behavior much of the time.
I was quoting Steven Pinker but my copy is an audio book so I can’t give you the specific references to the study he mentions. A simple google search brings up plenty of references. (Google gives popularised summaries. Follow the links provided therein to find the actual research.)
Perhaps I’m missing something, but I don’t see where it says we’re all automatically afraid of snakes. I have seen research that monkeys have an inbuilt ability to learn to fear snakes, but the mechanism has to be switched on via learning, and my understanding is that humans are the same way… unless you are arguing that individual variation in fear of snakes is purely determined by genetics.
[Edit to add: one of the first papers you linked to includes this quote: “For studies of captive primates, King did not find consistent evidence of snake fear.” And the second page goes on to describe the very “they have to learn to fear snakes” research that I previously spoke of.]
Given that contradictory reports are so freely available and your confidence in the model you are asserting, I would have expected you to have a somewhat broader exposure to the relevant science.
I think perhaps we are miscommunicating: I do not deny that primate brains contain snake detectors. I do deny that said detectors are unaffected by learning: humans and monkeys can and do learn which snakes to fear, or not fear.
Skinner had a similar ‘simple’ theory. But he was wrong. Not wrong because the mechanisms he described weren’t important parts of human psychology but wrong because he asserted them to the exclusion of all else.
We seem to be miscommunicating again. What mechanism is it that you think I am asserting “to the exclusion of all else”? The model I personally use contains several mechanisms, and the moral injunctions aspect I spoke of here is only one such mechanism. It is certainly not the only relevant mechanism in human behavior, even in the relatively narrow field of applicability where I use it.
People can be afraid of heights even if they didn’t make a habit of falling off cliffs in their childhood.
I don’t do classical phobia work, actually, so I wouldn’t have a valid opinion on that one, one way or the other. ;-)
Nevertheless, I would not necessarily believe your report on how these anxieties came into being.
It’s certainly true that, to reach scientific standards, I would need to find a way to double-blindly substitute a placebo version of childhood memories for the real thing in order to prove that it’s the modification of the memory that makes it work. (I have occasionally tested single-blind placebo substitutions on other things, but not this, as I have no idea what I could substitute.)
Mainly, what I do to test alternative hypotheses regarding a change technique is to see what parts of it I can remove without affecting the results. Whatever’s left, I assume has some meaning. (Side note: most published descriptions of actually-working self-help techniques contain superfluous steps that, when removed, tend to make each technique sound like a mere minor variation on one of a handful of major themes… which I expect to correspond to mechanisms in the brain.)
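In programmer’s terms, that is an ablation test. Here is a sketch of the loop; the step names and the pass/fail check are stand-ins I made up, since the real “test” is the repeatable physical-response criterion described earlier:

```python
# Ablation sketch: remove one step at a time; whatever can't be removed
# without breaking the result is treated as the active ingredient.
# Step names and the works() check are illustrative stand-ins.

FULL_TECHNIQUE = ["relax", "recall_memory", "rate_feeling", "reframe", "re_rate"]

def works(steps):
    """Stand-in for: run the reduced technique on someone, then apply the
    repeatable physical-response test to see if the change still happens."""
    return {"recall_memory", "reframe"} <= set(steps)

essential = [
    step for step in FULL_TECHNIQUE
    if not works([s for s in FULL_TECHNIQUE if s != step])
]
print(essential)   # -> ['recall_memory', 'reframe']: the candidate mechanism
```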
In the instant discussion of moral injunctions, examining the memory of the learning or imprint experience appears to be indispensable, and therefore I conclude (hypothesize, if you prefer) that these memories are an integral part of the process of formation of moral injunction-regulated behavior.
I have a strong bias for you, PJ, in all but your tendency to be quite rigidly minded when it comes to forcing reality into your simple models.
FWIW, I do not claim universal applicability of my models outside their target domain. However, within that target domain, most discussions here tend to have only vaporous speculation weighing against many, many tests and observations. When someone proposes a speculative and more complex model than one I am already using, I want to see what their model can predict that mine cannot, or vice versa.
If you have a more parsimonious model for “belief in belief” than simple moral injunctions regarding spoken behavior, I’d love to see it. But since “belief in belief” cleanly falls out as a side effect of my model, I don’t see a reason to go looking for a more complicated, special-purpose belief module, just because there could be one. Should I encounter a client who needs a belief-in-belief fixed, and find that my existing model can’t fix it, then I will have reason to go looking for an updated model.
Now, when I do see a more parsimonious model here than one I’m already using, I adopt it wholeheartedly. For all that people seem to frame me as having brought PCT to Lesswrong.com, the reverse is actually true:
lesswrong is where I heard about PCT in the first place!
And I adopted it because it fit very neatly into my existing model… it was as though my model was a graph with lots of edges, but no nodes, and PCT gave me a paradigm for what I should expect “nodes” to look like. (And incorporating it into my model also subsequently allowed me to discover a new kind of “edge” that I hadn’t spotted previously.)
So actually, I don’t consider PCT to be a comprehensive model in itself either, because it lacks the “edges” that my own model contains!
Which makes it a bit frustrating any time anyone acts as though I 1) brought PCT to LW, and 2) think it’s a cure-all or even a remotely complete model of human behavior… it’s just better than its competitors, such as the Skinnerian model you mentioned.
I allow myself to vocally reject the parts of your comments that I disagree with because that way I will not be dismissed as a fan boy when I speak in your defense.
Great. I would appreciate it, though, if you would not use boo lights like “mommy issues” and “PCT” (which, sadly, seems to have become one around these parts), especially when the first is a denigratory caricature and the second not even relevant. (Moral injunctions are an “edge” in my own model, not a “node” from PCT.)
I think perhaps we are miscommunicating: I do not deny that primate brains contain snake detectors. I do deny that said detectors are unaffected by learning: humans and monkeys can and do learn which snakes to fear, or not fear.
I agree on this note. I do not agree that Occam suggests that fear of snakes, spiders and heights is the sole result of learned associations. I also do not agree that aversion to fundamental belief switching is purely the result of learning from trauma.
I do not agree that Occam suggests that fear of snakes, spiders and heights is the sole result of learned associations. I also do not agree that aversion to fundamental belief switching is purely the result of learning from trauma.
Of course not. I never claimed they were. I only make the claim that learning is an essential component of the moral injunction mechanism. You have to learn which beliefs not to switch, at the very least!
I’ve also described a variety of apparently built-in behaviors triggered by the mechanism: proselytizing, gossip, denouncing others, punishing non-punishers, feelings of guilt, etc. These are just as much built-in mechanisms as “snake detectors”… and monkeys appear to have some of them.
What I say is that, just like the snake detectors, these mechanisms require some sort of learning in order to be activated… and that evolutionarily, applying these mechanisms to behavior would be of primary importance; applying them to beliefs would have to come later, after language.
And at that point, it’s far more parsimonious to assume evolution would reuse the same basic behavior-control mechanism, rather than implementing a new one specifically for “beliefs”… especially since, to the naive mind, “beliefs” are transparent. There’s simply “how things are”.
To an unsophisticated mind, someone who thinks things are different than “how things are” is obviously either crazy, or a member of an enemy tribe.
Not an “apostate”.
Most of the behavior mechanisms involved are there for the establishment and maintenance of tribe behavioral norms, and were later memetically co-opted by religion. I quite doubt that religion or anything we’d consider a “belief system” (i.e., a set of non-reality-linked beliefs used for signalling) was what the mechanism was meant for.
IOW, ISTM the support systems for reality-linked belief systems had to have evolved first.
This is not a claim of exclusivity of mechanism, so I don’t really know where you’re getting that from. I’m only saying that I don’t see the necessity for an independent belief-in-belief system to evolve, when the conditions that make use of it would not have arrived until well after a “group identity behavioral norms control enforcement” system was already in place, and the parsimonious assumption is that non-reality-linked beliefs would be at most a minor modification to the existing system.
To an unsophisticated mind, someone who thinks things are different than “how things are” is obviously either crazy, or a member of an enemy tribe.
Not an “apostate”.
No. I’m talking about apostasy. I’m not talking about someone who is crazy. I am not talking about a member of an enemy tribe. I am talking about someone from within the tribe who is, or is considering, changing their identifying beliefs to something that no longer matches the in-group belief system. This change in beliefs may be to facilitate joining a different tribe. It may be a risky play at power within the tribe. It may be to splinter off a new tribe from the current one.
Since we are talking in the context of religious beliefs, the word apostate fits perfectly.
I am talking about someone from within the tribe who is, or is considering, changing their identifying beliefs to something that no longer matches the in-group belief system. This change in beliefs may be to facilitate joining a different tribe. It may be a risky play at power within the tribe. It may be to splinter off a new tribe from the current one.
In order for any of those things to be advantageous (and thus need countermeasures), you first have to have tribes… which means you already need behavior-based signaling, not just non-reality-linked “belief” signaling.
So I still don’t see why postulating an entirely new, separate mechanism is more parsimonious than assuming (at most) a mild adaptation of the old, existing mechanisms… especially since the output behaviors don’t seem different in any important way.
Can you explain why you think a moral injunction of “Don’t say or even think bad things about the Great Spirit” is fundamentally any different from “Don’t say ‘no’, that’s rude. Say ‘jalaan’ instead,” or “Don’t eat with your left hand, that’s dirty?”
In particular, I’d like to know why you think these injunctions would need different mechanisms to carry out such behaviors as disgust at violators, talking up the injunction as an ideal to conceal one’s desire for non-compliance, etc.
In fairness, the “left hand” thing has to do with toilet hygiene pre-toilet-paper, so at one time it had actual health implications.
That’s why I brought it up—it’s in the category of “reality-based behavior norms enforcement”, which has much greater initial selection pressure (or support) than non-reality-based behavior norms enforcement.
Animals without language are capable of behavioral norms enforcement, even learned norms enforcement. It’s not parsimonious to presume that religion-like beliefs would not evolve as a subset of speech-behavior norms enforcement, in turn as a subset of general behavior norms enforcement.
I guess I was just pointing out that it seemed to be in a different category (“reality-based behavior norms enforcement” is as good a name as any) than the other examples.
If I were God I would totally refactor the code for humans and make it more DRY.
You seem to be confusing “simplicity of design” with “simplicity of implementation”. Evolution finds solutions that are easily reached incrementally—those which provide an advantage immediately, rather than requiring many interconnecting pieces to work. This makes reuse of existing machinery extremely common in evolution.
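A programming caricature of the difference (my illustration; the norm list is invented): the incremental path just adds data to existing machinery, where a “designed” system would grow a clean new module per domain.

```python
# Caricature of incremental reuse (illustration only; the norms are invented).

def enforce_norm(behavior, norms):
    """Existing machinery: react to any behavior on the tribe's norm list."""
    return "punish" if behavior in norms else "ignore"

norms = {"brag", "eat_with_left_hand"}   # reality-linked behavior norms
norms.add("deny_the_great_spirit")       # belief enforcement: new DATA, not a
                                         # new mechanism, because professing a
                                         # belief is already a speech behavior

print(enforce_norm("brag", norms))                   # punish
print(enforce_norm("deny_the_great_spirit", norms))  # punish, same machinery
# Each step here is individually useful, which is what incremental selection
# requires; a separate purpose-built "belief module" would not be.
```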
It is also improbable that any selection pressure for non-reality-based belief-system enforcement would exist, until some other sort of reality-based behavioral norms system existed first, within which pure belief signaling would then offer a further advantage.
Ergo, the path of least resistance for incremental implementation simplicity supports the direction I have proposed: first behavioral enforcement, followed by belief enforcement using the same machinery—assuming there’s actually any difference between the two.
I could be wrong, but it’s improbable, unless you or someone else has some new information to add, or some new doubt to shed upon one of the steps in this reasoning.
You seem to be confusing “simplicity of design” with “simplicity of implementation”. Evolution finds solutions that are easily reached incrementally—those which provide an advantage immediately, rather than requiring many interconnecting pieces to work. This makes reuse of existing machinery extremely common in evolution.
I’m not and I know.
I could be wrong, but it’s improbable, unless you or someone else has some new information to add, or some new doubt to shed upon one of the steps in this reasoning.
Earlier in this conversation you made the claim:
Er, research please. Everything I’ve seen shows that even monkeys have to learn to fear snakes and spiders—it has to be triggered by observing other monkeys being afraid of them first.
This suggested that if “everything you have seen” didn’t include the many contrary findings then either you hadn’t seen much or what you had seen was biased.
I really do not think new information will help us. Mostly because approximately 0 information is being successfully exchanged in this conversation.
This suggested that if “everything you have seen” didn’t include the many contrary findings then either you hadn’t seen much or what you had seen was biased. I really do not think new information will help us.
I still don’t see what “contrary” findings you’re talking about, because the first paper you linked to explicitly references the part where monkeys that grow up in cages don’t learn to fear snakes. Ergo, fear of snakes must be learned to be activated, even though there appears to be machinery that biases learning in favor of associating aversion to snakes.
This supports the direction of my argument, because it shows how evolution doesn’t create a whole new “aversive response to snakes” mechanism, when it can simply add a bias to the existing machinery for learning aversive stimuli.
In the same way, I do not object to the idea that we have machinery to bias learning in favor of mouthing the same beliefs as everyone else. I simply say it’s not parsimonious to presume it’s an entirely independent mechanism.
At this point, it seems to me that perhaps this discussion has consisted entirely of “violent agreement”, i.e. both of us failing to notice that we are not actually disagreeing with each other in any significant way. I think that you have overestimated what I’m claiming: that childhood learning is an essential piece in moral and other signaling behavior, not the entirety of it… and I in turn may have misunderstood you to be arguing that an independent inbuilt mechanism is the entirety of it.
When in fact, we are both saying that both learning and inbuilt mechanisms are involved.
So, perhaps we should just agree to agree, and move on? ;-)
We differ in our beliefs on what evidence is available. I assert that it varies from ‘a bias to learn to fear snakes’ to ‘snake naive monkeys will even scream with terror and mob a hose if you throw it in with them’. This depends somewhat on which primates are the subject of the study.
It does seem, however, that our core positions are approximately compatible, which leaves us with a surprisingly pleasant conclusion.
We differ in our beliefs on what evidence is available. I assert that it varies from ‘a bias to learn to fear snakes’ to ‘snake naive monkeys will even scream with terror and mob a hose if you throw it in with them’. This depends somewhat on which primates are the subject of the study.
We also disagree in how much relevance that has to the position you’ve been arguing (or at least the one I think you’ve been arguing).
I’ve seen some people claim that humans have only two inborn fears (loud noises and falling) on the basis that those are the only things that make human babies display fear responses. Which, even if true, wouldn’t necessarily mean we didn’t have instinctive fears kick in later in life!
And that’s why I don’t think any of that is actually relevant to the specific case; it’s really the specifics of the case that count.
And in the specific case of beliefs, we don’t get built-in protein coding for which beliefs we should be afraid to violate. We have to learn them, which makes learning an essential piece of the puzzle.
And from my own perspective, the fact that there’s a learned piece means that it’s the part I’m going to try to exploit first. If it can be learned, then it can be unlearned, or relearned differently.
As I said in another post, I can’t make my brain stop seeking SASS (status, affiliation, safety, and stimulation). But I can teach it to interpret different things as meaning I’ve got them.
Clearly, we can still learn such things later in life. After all, how long did it take most contributors’ brains to learn that “karma” represents a form of status, approval, or some combination thereof, and begin motivating them based on it?
We also disagree in how much relevance that has to the position you’ve been arguing (or at least the one I think you’ve been arguing).
That being, “We don’t need a past traumatic experience to have an aversive reaction when considering rejecting the beliefs of the tribe in which we were raised.”
I agree with the remainder of your post and, in particular, this is exactly the kind of reasoning I use when working out how to handle situations like this:
And from my own perspective, the fact that there’s a learned piece means that it’s the part I’m going to try to exploit first. If it can be learned, then it can be unlearned, or relearned differently.
That being, “We don’t need a past traumatic experience to have an aversive reaction when considering rejecting the beliefs of the tribe in which we were raised.”
I don’t recall claiming that a traumatic experience was required. Observing an aversive event, yes. But in my experience, that event could be as little as hearing your parents talking derisively about someone who’s not living up to their norms… not too far removed, really, from seeing another monkey act afraid of a snake.
Aversion, however (in the form of a derogatory, shocked, or other emotional reaction), seems to be required in order to distinguish matters of taste (“I can’t believe she wore white after Labor Day”) from matters of import (“I can’t believe she spoke out against the One True God… kill her now!”). We can measure how tightly a particular belief or norm is enforced by the degree of emotion used by others in response to either the actual situation, or the described situation.
So it appears that this is where we miscommunicated or misunderstood: I interpreted you to be saying that aversive learning was not required, while you appear to have interpreted me as saying that some sort of personal trauma, directly linked to an individual belief, was required.
It’s true that most of the beliefs I work with tend to be rooted in direct personal experience, but a small number are based on something someone said about something someone else did. Even there, though, the greater the intensity of the emotion surrounding the event (e.g. a big yelling fight or people throwing things), the greater the impact.
Like other species of monkeys, we learn to imitate what the monkeys around us do while we’re growing up; we just have language and conceptual processing capabilities that let us apply our imitation to more abstract categories of behavior than they do, and learn from events that are not physically present and happening at that moment.
IOW, your prediction of trauma comes from a past trauma—our brains don’t come with a built-in prior probability distribution for what beliefs will cause people to like or not like us. ;-) If you want to switch off the fear, you have to change the prediction, which means changing the probability data in your memory… which means accessing and reinterpreting the original sensory experience data.
Btw, the Iowa Gambling Task is an example of a related kind of unconscious learning that I’m talking about here. In it, people learn to feel fear about choosing cards from a certain deck, long before their conscious mind notices or accounts for the numerical probabilities involved. Then, their conscious minds often make up explanations which have little if any connection to the “irrational” (but accurate) feeling of fear.
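For a feel of how that raw probability data gets into the system ahead of conscious tabulation, here is a toy value-learner on an Iowa-Gambling-style deck. The payoffs and learning rule are simplified and invented by way of illustration, not a model of the actual experiment:

```python
# Toy value-learner on an Iowa-Gambling-style task (payoffs invented and
# simplified; an illustration of affect-as-learned-probability).
import random

random.seed(1)
values = {"good": 0.0, "bad": 0.0}   # the learned "gut" value of each deck
ALPHA = 0.1                           # learning rate

def payoff(deck):
    if deck == "good":
        return 50                     # small, steady wins
    return 100 if random.random() < 0.9 else -1250   # big wins, rare ruin

for trial in range(1, 41):
    deck = random.choice(["good", "bad"])
    values[deck] += ALPHA * (payoff(deck) - values[deck])
    if trial % 10 == 0:
        print(trial, {k: round(v) for k, v in values.items()})

# A single -1250 hit drags the "bad" value far below "good" in one update:
# a felt aversion forms long before 40 samples could justify an explicit
# estimate of that deck's true expected value (-35 per draw).
```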
So if you seem to irrationally fear something, it’s an indication that your subconscious picked up on raw probability data. And this raw probability data can’t be overridden by reasoning unless you integrate the reasoning with the specific experiences, so that a different interpretation is applied.
For example, suppose there’s someone who always looks away from you and leaves the room when you enter. You begin to think that person doesn’t like you… and then you hear they actually have a crush on you. You have the same sensory data, but a different interpretation, and your felt-response to the same thoughts is now different. Voila… memory reconsolidation, and your thoughts are now biased in a different, happier way. ;-)
No, what I’m saying is that your projection is based on some specific, sensory experience(s) you had, like for example your parents speaking disparagingly about atheists, or other non-followers of your parents’ belief system. At some point, to feel threatened by being outcast, you had to learn who the outgroups were, and this learning is primarily experiential/emotional, rather than intellectual, and happens on a level that bypasses critical thought (e.g. because of your age, or because of the degree of emotion in the situation).
Okay, that makes sense. My initial reaction is that the fear has less to do with people’s reactions to me and more to do with the amount of change in the actions I take. Their responses to these new actions would be more severe than their expected reactions to my dropping Theism by itself.
But the more I think about it, the more I think that this is just semantics. I’ll give your suggestion a shot and see what happens. I am not expecting much but we’ll see. The main criticism that I have at this point is that my “fears” are essentially predictions of behavior. I do not consider them irrational fears...
So if you seem to irrationally fear something, it’s an indication that your subconscious picked up on raw probability data. And this raw probability data can’t be overridden by reasoning unless you integrate the reasoning with the specific experiences, so that a different interpretation is applied.
Ah, okay, this part relates to the trigger of dealing with the initial reaction to the questions being asked.
My personal solution for this style of fear (which is separate from the fear of future social reactions, which I can understand may not have been obvious) is the same as my pattern of behavior relating to pain tolerance. It goes away if I focus on it just the right way.
By the end of the week I expect to be able to return to the topic without any overt hindrances. I take this to mean the fear is gone or I am so completely self-deluded that the magic question no longer means the same thing as it did when it was first asked. I prefer to think it is the former.
My initial reaction is that the fear has less to do with people’s reactions to me and more to do with the amount of change in the actions I take. Their responses to these new actions would be more severe than their expected reactions to my dropping Theism by itself.
I was just giving an example. The key questions are:
What is the trigger stimulus? and
What is the repeatable, observable reaction you wish to change?
In what you said above, the trigger is “thinking about what I’d do if I were not a theist”, and you are using the word “fear” to describe the automatic reaction.
I’m saying that you should precisely identify what you mean by “fear”—does your pulse race? Palms sweat? Do you clench your teeth, feel like you’re curling into a ball, what? There are many possible physical autonomic reactions to the emotion of fear… which one are you doing automatically, without conscious intent, every time you contemplate “what I’d do if I were not a theist”?
This will serve as your test—a control condition against which any attempted change can be benchmarked. You will know you have arrived at a successful conclusion to your endeavor when the physiological reaction is extinguished—i.e., it will cease to bias your conscious thought.
I consider this a litmus test for any psychological change technique: if it can’t make an immediate change (by which I mean abrupt, rather than gradual) in a previously persistent automatic response to a thought, it’s not worth much, IMO.
But the more I think about it, the more I think that this is just semantics.
Focus on what the stimulus and response are, and that will keep you from wandering into semantic questions… which operate in the verbal “far” mind, not the nonverbal “near” mind that you’re trying to tap into and fix.
This is one of those “simple, but not easy” things… not because it isn’t easy to do, but because it’s hard to stop doing the verbal overshadowing part.
We all get so used to following our object-level thoughts, running in the emotionally-biased grooves laid down by our feeling-level systems, that the idea of ignoring the abstract thoughts to look at the grooves themselves seems utterly weird, foreign, and uncomfortable. It is, I find, the most difficult part of mindhacking to teach.
But once you get used to the idea that you simply cannot trust the output of your verbal mind while you’re trying to debug your pre-verbal biases, it gets easier. During the early stages though, it’s easy to be thinking in your verbal mind that you’re not thinking in your verbal mind, simply because you’re telling yourself that you’re not… which in hindsight should be a really obvious clue that you’re doing it wrong. ;-)
Bear in mind that your unconscious mind does not require complex verbalizations (above simple if-then noun-verb constructs) to represent its thought processes. If you are trying to describe something that can’t be reduced to “(sensory experience X) is followed by (sensory experience Y)”, you are using the wrong part of your brain—i.e., not the one that actually contains the fear (or other emotional response).
I mean that if you’re going to go digging around your head to change something, it would be best to have a criterion by which you can judge whether or not you’ve succeeded. Otherwise, you can rummage around in there forever. ;-)
An example criterion in this case might be “Thinking about not believing in God no longer causes an emotional reaction, as evidenced by my physical response to a specific thought about that.”
Defining a test in this way -- i.e., observing whether your (repeatable) physical reaction to a thought has changed—allows you to determine whether any particular approach has succeeded or failed. I suggested the two books I did because I have found it relatively easy to produce such repeatable, testable results with their techniques, once I got the hang of paying attention to my sensory responses to the questions asked, and ignoring my logical/abstract ones. (Since changing one’s logical beliefs isn’t the hard part.)
No, what I’m saying is that your projection is based on some specific, sensory experience(s) you had, like for example your parents speaking disparagingly about atheists, or other non-followers of your parents’ belief system. At some point, to feel threatened by being outcast, you had to learn who the outgroups were, and this learning is primarily experiential/emotional, rather than intellectual, and happens on a level that bypassed critical thought (e.g. because of your age, or because of the degree of emotion in the situation).
Identifying this experience and processing it through critical thought, weakens the emotional response triggered by the thought, then gives you the ability to think rationally about the subject again… thereby leading to potential solutions. Right now, the fear response paralyzes your critical and creative thinking, making it very hard to see what solutions may be in front of you.
IOW, your prediction of trauma comes from a past trauma—our brains don’t come with a built-in prior probability distribution for what beliefs will cause people to like or not like us. ;-) If you want to switch off the fear, you have to change the prediction, which means changing the probability data in your memory… which means accessing and reinterpreting the original sensory experience data.
In order to find this information, you focus on the sensory portion of your prediction, prior to verbalization. That is, when you ask, “What bad thing is going to happen?” refrain from verbalizing and pay attention to the images, feelings, and general impressions that arise. Then, let your mind drift back to when you first saw/felt/experienced something like that.
A recent personal example: I discovered yesterday that the reason I never gave my software projects a “1.0” version is because I was afraid to declare anything “finished” or “complete”… but the specific reason, was that when I did chores as a kid, or cleaned my room, my mother found faults and yelled at me. Emotionally, I learned that as long as someone else could possibly find a way to improve it, I was not allowed to call it “finished”, or I would be shamed (status reduction).
Until I uncovered this specific way in which I came by my emotional response, all my conscious efforts to overcome this bad habit were without effect. The emotion biased my conscious thoughts in such a way that I really and truly sincerely believed that my projects were not “finished”… because the definition I was unconsciously using for “finished” didn’t allow me to be the one who declared them so.
But having specifically identified the source of this learning, it was trivial to drop the emotional response that drove the behavior… and immediately after doing so, I realized that there were a wide variety of other areas in my life affected by this bias, that I hadn’t noticed before.
Most psychological discussion of fears tends to focus on the abstract level, i.e. obviously I was afraid to declare things finished, for “fear of criticism”. But that abstract knowledge is almost entirely useless for actually changing the feelings, and therefore removing the bias. Mostly, what such abstract knowledge does is sometimes allow people to spend a lifetime trying to work around or compensate for their feeling-driven biases, rather than actually changing them.
And that’s why I urge you to focus on specific sensory experience information in your dialoging, and treat all abstract, logical, or verbally sophisticated thoughts that arise in response to your questions as being lies, rumor, and distraction. If your logical abstract thoughts were actually in charge of your feelings, you’d already be done. Save ’em till the bias has been repaired.
The brain doesn’t need past trauma in this instance. Our brains do come with a built-in prior probability distribution for what will happen when you become an apostate, rejecting the beliefs of the tribe in which you were raised.
Ahem. We are adaptation executers, not fitness maximizers. Our brains come with a moral mechanism that’s been shaped by that probability distribution, but they don’t come with that specific prediction built in at an object level.
Instead, we simply learn what behaviors cause shaming, denunciation, etc., and this then triggers all the conscious shame/guilt/etc., as well as the idealizing, moralizing, punishing others, and punishing of non-punishers… with all of these actions being more highly-motivated in cases where the behavior is desirable to the individual involved.
Professing or failing to profess certain beliefs is just one minor case of “behavior” that can be regulated by this mechanism. I have not observed anything that suggests there is a mechanism specific to religious beliefs or even beliefs per se, distinct from other kinds of behavior. There is litle difference between an injunction to say some belief is true or good, and an injunction to always say thank you, or to never brag about yourself. (Or my recently discovered injunction not to say something is finished!)
All of these are just examples of verbal behavior that can regulated by the same mechanism. (In any case, MrHen has already pointed out that the fear is less about him stating new beliefs, than it would be about acting on them.)
Anyway, it seems to me that we have only one “moral injunction” apparatus that is applied generically, and the feelings that it generates do not contain any information about being kicked out of the tribe or failure to mate, etc. Instead, the memory of a shaming event is itself the bad prediction or negative reinforcer. Adaptation execution FTW, or more like FTL in this case at least.
That isn’t the issue here. Yes, adaptation execution, Woohoo!! Obviously the probability distribution for expected consequences isn’t built in to the amygdala.
I nevertheless assert that the universal human aversion to changing our fundamental signalling beliefs is more than just Mommy Issues filtered through PCT. Human instinctive responses are sophisticated and a whole lot of them are built in, no shaming required. We’re scared of spiders, snakes and apostasy. They’re adaptations right there in the DNA.
Er, research please. Everything I’ve seen shows that even monkeys have to learn to fear snakes and spiders—it has to be triggered by observing other monkeys being afraid of them first.
Occam’s razor says you’re more likely to be wrong than I am: a general purpose mechanism for conditioning verbal behavior is more than sufficient to produce the results we observe, especially if you consider internal verbal thinking a form of verbal behavior—which it pretty plainly is.
For example, this provides a simpler mechanism for “belief in belief”, than your proposal of a distinct mechanism. It allows us to “believe”—i.e. consistently say we believe (even to ourselves on the inside), when in fact we don’t.
[edited to delete unfair rhetoric of my own]
FWIW I said nothing about PCT, nor did I say that a parent had to be the one delivering the shame. If your own personal bias about me is such that you can’t avoid engaging in this type of rhetorics, perhaps you should consider giving yourself some cooling off time before you reply.
Proslepsis!
Oops. I actually intended to delete that, because I felt it was the same sort of unfair rhetoric as I was accusing wedrifid of. Thanks for bringing it to my attention.
Now now, you can’t have points for that twice!
But it worked so well the first time! Aww.
I was quoting Steven Pinker but my copy is an audio book so I can’t give you the specific references to the study he mentions. A simple google search brings up plenty of references. (Google gives popularised summaries. Follow the links provided therein to find the actual research.)
Your claim mentions ‘everything you have seen’. Given that contradictory reports are so freely available and your confidence in the model your are asserting I would have expected you to have a somewhat more broad exposure to the relevant science.
Skinner had a similar ‘simple’ theory. But he was wrong. Not wrong because the mechanisms he described weren’t important parts of human psychology but wrong because he asserted them to the exclusion of all else.
I believe you can make testable behavior changes and your work with clients impresses me. I also believe you could change people to be less afraid of, for example, heights. Nevertheless, I would not necessarily believe your report on how these anxieties came into being. People can be afraid of heights even if they didn’t make a habit of falling off cliffs in their childhood.
I have a strong bias for you PJ, in all but your tendency to be quite rigidly minded when it comes to forcing reality into your simple models. I allow myself to vocally reject the parts of your comments that I disagree with because that way I will not be dismissed as a fan boy when I speak in your defense. You aren’t, for example, a quack and your advice, experience and willingness to share it are invaluable. I also, for what it is worth, find PCT to be a useful way of describing the dynamics of human behavior much of the time.
Perhaps I’m missing something, but I don’t see where it says we’re all automatically afraid of snakes. I have seen research that monkeys have an inbuilt ability to learn to fear snakes, but the mechanism has to be switched on via learning, and my understanding is that humans are the same way… unless you are arguing that individual variations in fear of snakes is purely determined by genetics.
[Edit to add: one of the first papers you linked to includes this quote: “For studies of captive primates, King did not find consistent evidence of snake fear.” And the second page goes on to describe the very “they have to learn to fear snakes” research that I previously spoke of.]
I think perhaps we are miscommunicating: I do not deny that primate brains contain snake detectors. I do deny that said detectors are unaffected by learning: humans and monkeys can and do learn which snakes to fear, or not fear.
We seem to be miscommunicating again. What mechanism is it that you think I am asserting “to the exclusion of all else”? The model I personally use contains several mechanisms, and the moral injunctions aspect I spoke of here is only one such mechanism. It is certainly not the only relevant mechanism in human behavior, even in the relatively narrow field of applicability where I use it.
I don’t do classical phobia work, actually, so I wouldn’t have a valid opinon on that one, one way or the other. ;-)
It’s certainly true that, In order to reach scientific standards, I would need to find a way to double-blindly substitute a placebo version of childhood memories for the real thing in order to prove that it’s the modification of the memory that makes it work. (I have occasionally tested single-blind placebo substitutions on other things, but not this, as I have no idea what I could substitute.)
Mainly, what I do to test alternative hypotheses regarding a change technique is to see what parts of it I can remove, without affecting the results. Whatever’s left, I assume has some meaning. (Side note: most published descriptions of actually-working self-help techniques contain superfluous steps, that, when removed, tend to make each technique sound like a mere minor variation on one of a handful of major themes… which I expect to correspond to mechanisms in the brain.)
In the instant discussion of moral injunctions, examining the memory of the learning or imprint experience appears to be indispensable, and therefore I conclude (hypothesize, if you prefer) that these memories are an integral part of the process of formation of moral injunction-regulated behavior.
FWIW, I do not claim universal applicability of my models outside their target domain. However, within that target domain, most discussions here tend to have only vaporous speculation weighing against many, many tests and observations. When someone proposes a speculative and more complex model than one I am already using, I want to see what their model can predict that mine cannot, or vice versa.
If you have a more parsimonious model for “belief in belief” than simple moral injunctions regarding spoken behavior, I’d love to see it. But since “belief in belief” cleanly falls out as a side effect of my model, I don’t see a reason to go looking for a more complicated, special-purpose belief module, just because there could be one. Should I encounter a client who needs a belief-in-belief fixed, and find that my existing model can’t fix it, then I will have reason to go looking for an updated model.
Now, when I do see a more parsimonious model here than one I’m already using, I adopt it wholeheartedly. For all that people seem to frame me as having brought PCT to Lesswrong.com, the reverse is actually true:
lesswrong is where I heard about PCT in the first place!
And I adopted it because it fit very neatly into my existing model… it was as though my model was a graph with lots of edges, but no nodes, and PCT gave me a paradigm for what I should expect “nodes” to look like. (And incorporating it into my model also subsequently allowed me to discover a new kind of “edge” that I hadn’t spotted previously.)
So actually, I don’t consider PCT to be a comprehensive model in itself either, because it lacks the “edges” that my own model contains!
Which makes it a bit frustrating any time anyone acts as though I 1) brought PCT to LW, and 2) think it’s a cure-all or even a remotely complete model of human behavior… it’s just better than its competitors, such as the aforementioned Skinnerian model you mentioned.
Great. I would appreciate it, though, if you not use boo lights like “mommy issues” and “PCT” (which sadly, seems to have become one around these parts), especially when the first is a denigratory caricature and the second not even relevant. (Moral injunctions are an “edge” in my own model, not a “node” from PCT.)
I agree on this note. I do not agree that Occam suggests that fear of snakes, spiders and heights is the sole result of learned associations. I also do not agree that aversion to fundamental belief switching is purely the result of learning from trauma.
Of course not. I never claimed they were. I only make the claim that learning is an essential component of the moral injunction mechanism. You have to learn which beliefs not to switch, at the very least!
I’ve also described a variety of apparently built-in behaviors triggered by the mechanism: proselytizing, gossip, denouncing others, punishing non-punishers, feelings of guilt, etc. These are just as much built-in mechanisms as “snake detectors”… and monkeys appear to have some of them.
What I say is that, just like the snake detectors, these mechanisms require some sort of learning in order to be activated… and that evolutionarily, applying these mechanisms to behavior would be of primary importance; applying them to beliefs would have to come later, after language.
And at that point, it’s far more parsimonious to assume evolution would reuse the same basic behavior-control mechanism, rather than implementing a new one specifically for “beliefs”… especially since, to the naive mind, “beliefs” are transparent. There’s simply “how things are”.
To an unsophisticated mind, someone who thinks things are different than “how things are” is obviously either crazy, or a member of an enemy tribe.
Not an “apostate”.
Most of the behavior mechanisms involved are there for the establishment and maintenance of tribe behavioral norms, and were later memetically co-opted by religion. I quite doubt that religion or anything we’d consider a “belief system” (i.e., a set of non-reality-linked beliefs used for signalling) were what the mechanism was meant for.
IOW, ISTM the support systems for reality-linked belief systems had to have evolved first.
This is not a claim of exclusivity of mechanism, so I don’t really know where you’re getting that from. I’m only saying that I don’t see the necessity for an independent belief-in-belief system to evolve, when the conditions that make use of it would not have arrived until well after a “group identity behavioral norms control enforcement” system was already in place, and the parsimonious assumption is that non-reality-linked beliefs would be at most a minor modification to the existing system.
No. I’m talking about apostasy. I’m not talking about someone who is crazy. I am not talking about a member of an enemy tribe. I am talking about someone from within the tribe who is, or is considering, changing their identifying beliefs to something that no longer matches the in-group belief system. This change in beliefs may be to facilitate joining a different tribe. It may be a risky play at power within the tribe. It may be to splinter off a new tribe from the current one.
Since we are talking in the context of religious beliefs, the word “apostate” fits perfectly.
In order for any of those things to be advantageous (and thus need countermeasures), you first have to have tribes… which means you already need behavior-based signaling, not just non-reality-linked “belief” signaling.
So I still don’t see why postulating an entirely new, separate mechanism is more parsimonious than assuming (at most) a mild adaptation of the old, existing mechanisms… especially since the output behaviors don’t seem different in any important way.
Can you explain why you think a moral injunction of “Don’t say or even think bad things about the Great Spirit” is fundamentally any different from “Don’t say ‘no’, that’s rude. Say ‘jalaan’ instead,” or “Don’t eat with your left hand, that’s dirty”?
In particular, I’d like to know why you think these injunctions would need different mechanisms to carry out such behaviors as disgust at violators, talking up the injunction as an ideal to conceal one’s desire for non-compliance, etc.
In fairness, the “left hand” thing has to do with toilet hygiene pre-toilet-paper, so at one time it had actual health implications.
That’s why I brought it up—it’s in the category of “reality-based behavior norms enforcement”, which has much greater initial selection pressure (or support) than non-reality-based behavior norms enforcement.
Animals without language are capable of behavioral norms enforcement, even learned norms enforcement. It’s not parsimonious to presume that religion-like beliefs would not evolve as a subset of speech-behavior norms enforcement, in turn as a subset of general behavior norms enforcement.
I guess I was just pointing out that it seemed to be in a different category (“reality-based behavior norms enforcement” is as good a name as any) than the other examples.
If I were God I would totally refactor the code for humans and make it more DRY.
You seem to be confusing “simplicity of design” with “simplicity of implementation”. Evolution finds solutions that are easily reached incrementally -- those which provide an advantage immediately, rather than requiring many interconnecting pieces to work. This makes reuse of existing machinery extremely common in evolution.
It is also improbable that any selection pressure for non-reality-based belief-system enforcement would exist, until some other sort of reality-based behavioral norms system existed first, within which pure belief signaling would then offer a further advantage.
Ergo, the path of least resistance for incremental implementation simplicity supports the direction I have proposed: first behavioral enforcement, followed by belief enforcement using the same machinery—assuming there’s actually any difference between the two.
I could be wrong, but it’s improbable, unless you or someone else has some new information to add, or some new doubt to shed upon one of the steps in this reasoning.
I’m not and I know.
Earlier in this conversation you made the claim:
This suggested that if “everything you have seen” didn’t include the many contrary findings, then either you hadn’t seen much, or what you had seen was biased.
I really do not think new information will help us. Mostly because approximately 0 information is being successfully exchanged in this conversation.
I still don’t see what “contrary” findings you’re talking about, because the first paper you linked to explicitly references the part where monkeys that grow up in cages don’t learn to fear snakes. Ergo, fear of snakes must be learned to be activated, even though there appears to be machinery that biases learning in favor of associating aversion to snakes.
This supports the direction of my argument, because it shows how evolution doesn’t create a whole new “aversive response to snakes” mechanism, when it can simply add a bias to the existing machinery for learning aversive stimuli.
In the same way, I do not object to the idea that we have machinery to bias learning in favor of mouthing the same beliefs as everyone else. I simply say it’s not parsimonious to presume it’s an entirely independent mechanism.
At this point, it seems to me that perhaps this discussion has consisted entirely of “violent agreement”, i.e. both of us failing to notice that we are not actually disagreeing with each other in any significant way. I think you have overestimated the scope of my claim: I’m saying that childhood learning is an essential piece of moral and other signaling behavior, not the entirety of it… and I, in turn, may have misunderstood you to be arguing that an independent inbuilt mechanism is the entirety of it.
When in fact, we are both saying that both learning and inbuilt mechanisms are involved.
So, perhaps we should just agree to agree, and move on? ;-)
We differ in our beliefs on what evidence is available. I assert that it varies from ‘a bias to learn to fear snakes’ to ‘snake-naive monkeys will even scream with terror and mob a hose if you throw it in with them’, depending somewhat on which primates are the subject of the study.
It does seem, however, that our core positions are approximately compatible, which leaves us with a surprisingly pleasant conclusion.
We also disagree in how much relevance that has to the position you’ve been arguing (or at least the one I think you’ve been arguing).
I’ve seen some people claim that humans have only two inborn fears (loud noises and falling), on the basis that those are the only things that make human babies display fear responses. Which, even if true, wouldn’t necessarily mean we don’t have instinctive fears that kick in later in life!
And that’s why I don’t think any of that is actually relevant to the specific case; it’s really the specifics of the case that count.
And in the specific case of beliefs, we don’t get built-in protein coding for which beliefs we should be afraid to violate. We have to learn them, which makes learning an essential piece of the puzzle.
And from my own perspective, the fact that there’s a learned piece means that it’s the part I’m going to try to exploit first. If it can be learned, then it can be unlearned, or relearned differently.
As I said in another post, I can’t make my brain stop seeking SASS (status, affiliation, safety, and stimulation). But I can teach it to interpret different things as meaning I’ve got them.
Clearly, we can still learn such things later in life. After all, how long did it take most contributors’ brains to learn that “karma” represents a form of status, approval, or some combination thereof, and begin motivating them based on it?
That being, “We don’t need a past traumatic experience to have an aversive reaction when considering rejecting the beliefs of the tribe in which we were raised.”
I agree with the remainder of your post and, in particular, this is exactly the kind of reasoning I use when working out how to handle situations like this:
I don’t recall claiming that a traumatic experience was required. Observing an aversive event, yes. But in my experience, that event could be as little as hearing your parents talking derisively about someone who’s not living up to their norms… not too far removed, really, from seeing another monkey act afraid of a snake.
Aversion, however (in the form of a derogatory, shocked, or other emotional reaction), seems to be required in order to distinguish matters of taste (“I can’t believe she wore white after Labor Day”) from matters of import (“I can’t believe she spoke out against the One True God… kill her now!”). We can measure how tightly a particular belief or norm is enforced by the degree of emotion used by others in response to either the actual situation, or the described situation.
So it appears this is where we miscommunicated: I interpreted you to be saying that aversive learning was not required, while you appear to have interpreted me as saying that some sort of personal trauma, directly linked to an individual belief, is required.
It’s true that most of the beliefs I work with tend to be rooted in direct personal experience, but a small number are based on something someone said about something someone else did. Even there, though, the greater the intensity of the emotion surrounding the event (e.g. a big yelling fight or people throwing things), the greater the impact.
Like other species of monkeys, we learn to imitate what the monkeys around us do while we’re growing up; we just have language and conceptual processing capabilities that let us apply our imitation to more abstract categories of behavior than they do, and learn from events that are not physically present and happening at that moment.
Btw, the Iowa Gambling Task is an example of a related kind of unconscious learning that I’m talking about here. In it, people learn to feel fear about choosing cards from a certain deck, long before their conscious mind notices or accounts for the numerical probabilities involved. Then, their conscious minds often make up explanations which have little if any connection to the “irrational” (but accurate) feeling of fear.
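To make that unconscious-learning dynamic concrete, here’s a minimal simulation sketch of an IGT-like setup. The payoff numbers approximate the published task’s long-run averages, but the delta-rule learner and all the names here are my own illustration, not anything from the original studies:

```python
import random

# Hypothetical, simplified net-outcome schedules per draw: the "bad" decks
# pay big but lose on average; the "good" decks pay small but gain on average.
# (The real task uses fixed card sequences; these only match its averages.)
DECKS = {
    "A": lambda: 100 - (250 if random.random() < 0.5 else 0),   # bad:  EV -25/draw
    "B": lambda: 100 - (1250 if random.random() < 0.1 else 0),  # bad:  EV -25/draw
    "C": lambda: 50 - (50 if random.random() < 0.5 else 0),     # good: EV +25/draw
    "D": lambda: 50 - (250 if random.random() < 0.1 else 0),    # good: EV +25/draw
}

values = {deck: 0.0 for deck in DECKS}  # the learned "gut feeling" per deck
ALPHA = 0.1                             # learning rate

def choose() -> str:
    # Mostly pick whichever deck currently "feels" best; explore 10% of the time.
    if random.random() < 0.1:
        return random.choice(list(DECKS))
    return max(values, key=values.get)

for _ in range(200):
    deck = choose()
    outcome = DECKS[deck]()
    # Delta-rule update: the stored value drifts toward experienced outcomes
    # long before anything in the loop states an explicit probability.
    values[deck] += ALPHA * (outcome - values[deck])

print({deck: round(value) for deck, value in sorted(values.items())})
# Typical result: A and B end up negative (the analogue of the anticipatory
# aversion), while C and D trend positive.
```

The point being: the aversion falls out of raw outcome statistics; nothing in the loop ever reasons explicitly about probabilities.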
So if you seem to irrationally fear something, it’s an indication that your subconscious picked up on raw probability data. And this raw probability data can’t be overridden by reasoning unless you integrate the reasoning with the specific experiences, so that a different interpretation is applied.
For example, suppose there’s someone who always looks away from you and leaves the room when you enter. You begin to think that person doesn’t like you… and then you hear they actually have a crush on you. You have the same sensory data, but a different interpretation, and your felt-response to the same thoughts is now different. Voila… memory reconsolidation, and your thoughts are now biased in a different, happier way. ;-)
Okay, that makes sense. My initial reaction is that the fear has less to do with people’s reactions to me, and more to do with the amount of change in the actions I take. Their responses to these new actions would be more severe than their expected reactions to my dropping Theism itself.
But the more I think about it, the more I think this is just semantics. I’ll give your suggestion a shot and see what happens. I am not expecting much, but we’ll see. The main criticism I have at this point is that my “fears” are essentially predictions of behavior. I do not consider them irrational fears...
Ah, okay, this part relates to the trigger of dealing with the initial reaction to the questions being asked.
My personal solution for this style of fear (which is separate from the fear of future social reactions, a distinction which I understand may not have been obvious) is the same as my pattern of behavior relating to pain tolerance: it goes away if I focus on it in just the right way.
By the end of the week I expect to be able to return to the topic without any overt hindrances. I take this to mean either that the fear is gone, or that I am so completely self-deluded that the magic question no longer means the same thing it did when it was first asked. I prefer to think it is the former.
I was just giving an example. The key questions are:
What is the trigger stimulus? and
What is the repeatable, observable reaction you wish to change?
In what you said above, the trigger is “thinking about what I’d do if I were not a theist”, and you are using the word “fear” to describe the automatic reaction.
I’m saying that you should precisely identify what you mean by “fear”—does your pulse race? Palms sweat? Do you clench your teeth, feel like you’re curling into a ball, what? There are many possible physical autonomic reactions to the emotion of fear… which one are you doing automatically, without conscious intent, every time you contemplate “what I’d do if I were not a theist”?
This will serve as your test—a control condition against which any attempted change can be benchmarked. You will know you have arrived at a successful conclusion to your endeavor when the physiological reaction is extinguished—i.e., it will cease to bias your conscious thought.
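If it helps to see that test stated mechanically, here’s a sketch of the benchmark as code. The 0-to-10 self-rating scale and every name in it are my own hypothetical illustration, not part of any particular technique:

```python
# Hypothetical sketch of the stimulus/response benchmark described above:
# record a baseline reaction to the trigger thought, apply whatever change
# technique is being evaluated, then re-test the exact same trigger.

def measure_reaction(trigger_thought: str) -> int:
    """Deliberately hold the trigger thought, then self-rate the automatic
    physical response (racing pulse, clenched jaw, etc.) from 0 to 10."""
    print(f"Hold the thought: {trigger_thought!r}")
    return int(input("Rate the physical reaction (0-10): "))

trigger = "what I'd do if I were not a theist"

baseline = measure_reaction(trigger)   # the control condition
# ... apply the change technique under test here ...
after = measure_reaction(trigger)      # re-test the identical stimulus

# Success criterion: the previously persistent automatic response is
# extinguished, not merely dampened.
if baseline > 0 and after == 0:
    print("Reaction extinguished: the technique produced a real change.")
else:
    print("Reaction persists: the technique failed this test.")
```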
I consider this a litmus test for any psychological change technique: if it can’t make an immediate change (by which I mean abrupt, rather than gradual) in a previously persistent automatic response to a thought, it’s not worth much, IMO.
Focus on what the stimulus and response are, and that will keep you from wandering into semantic questions… which operate in the verbal “far” mind, not the nonverbal “near” mind that you’re trying to tap into and fix.
This is one of those “simple, but not easy” things… not because the technique itself is hard to do, but because it’s hard to stop doing the verbal overshadowing part.
We all get so used to following our object-level thoughts, running in the emotionally-biased grooves laid down by our feeling-level systems, that the idea of ignoring the abstract thoughts to look at the grooves themselves seems utterly weird, foreign, and uncomfortable. It is, I find, the most difficult part of mindhacking to teach.
But once you get used to the idea that you simply cannot trust the output of your verbal mind while you’re trying to debug your pre-verbal biases, it gets easier. During the early stages though, it’s easy to be thinking in your verbal mind that you’re not thinking in your verbal mind, simply because you’re telling yourself that you’re not… which in hindsight should be a really obvious clue that you’re doing it wrong. ;-)
Bear in mind that your unconscious mind does not require complex verbalizations (beyond simple if-then, noun-verb constructs) to represent its thought processes. If you are trying to describe something that can’t be reduced to “(sensory experience X) is followed by (sensory experience Y)”, you are using the wrong part of your brain—i.e., not the one that actually contains the fear (or other emotional response).