Someone I know has reported something similar. She had both negative and positive beliefs about another person, and felt that the negative beliefs were wrong. After trying to do reconsolidation, she found that the negative beliefs only got stronger. Not only was this an unwanted result, but the strengthened beliefs didn’t feel any more true, and the experience was really distressing. She did eventually get it fixed, and is still using the technique, but is now more cautious about it.
Personally I haven’t had this kind of an issue: I find that if I’m in a stance where I have already decided that a certain belief is wrong and am trying to force my brain to update on that, the update process just won’t go through, or produces a brief appearance of going through without really changing anything. This seems fortunate, since it forces me to switch into more of a mode of exploration: is this belief false, or might it in fact be true? (Note that UtEB also explicitly cautions against trying to argue against or disprove a belief.)
If you go through a belief update process and it feels like the wrong belief got confirmed, the fact that you feel like the wrong belief won means that there’s still some other belief in your brain disagreeing with that winner. In those kinds of situations, if I am approaching this from a stance of open exploration, I can then ask “okay, so I did this update but some part of my mind still seems to disagree with the end result; what’s the evidence behind that disagreement, and can I integrate that?”
In my experience, if I find myself really strongly insisting that a belief must be false and disproven, that may actually be because a part of my mind thinks it would be really bad for the belief to be true. Maybe it would be really unpleasant to believe that existential risk is a serious issue, and then I get blended with the part that really doesn’t want it to be true. Then I try to prove the x-risk concerns false, which repeatedly fails because the issue isn’t that they are false; the issue is that I don’t want to believe they are true. mr-hire has a good piece of advice relating to this:
For every belief schema you’re working with, there’s (at least) two belief schemas at play. There’s the side that believes a particular thing, and then there’s a side that wants you to question the belief in that thing. As a general rule, you should always start with the side that’s more cognitively fused.
As an example, I was working with someone who was having issues going to bed on time, and wanted to change that. Before we started looking at the schema of “I should avoid ruminating by staying up late,” we first examined the schema of “I should get more sleep.”
By starting with the schema that you’re more cognitively fused with, you avoid confirmation bias and end up with more accurate beliefs at the end.
Note also that it may be the case that you really want some belief to be false, and it is in fact false. But the above bit is good advice even in that situation: even if the belief is false, you are less likely to be able to update it if your mind is stuck on wanting to disprove it, because you need to experience it as genuinely true in order to make progress. As I’ve mentioned:
Something that has been useful to me recently has been remembering that according to memory reconsolidation principles, experiencing an incorrect emotional belief as true is actually necessary for revising it. Then, when I get an impulse to push the wrong-feeling belief out of my mind, I instead bring up the objecting part or otherwise look for counterevidence, and let the counterbelief feel simultaneously true as well. That has caused rapid updates in the way Unlocking the Emotional Brain describes.
I think that basically the same kind of thing (don’t push any part out of your mind without giving it a say) has already been suggested in IDC, IFS etc.; but in those, I’ve felt like the framing has been more along the lines of “consider that the irrational-seeming belief may still have an important point”, which has felt hard to apply in cases where I feel very strongly that one of the beliefs is actually just false. Thinking in terms of “even if this belief is false, letting myself experience it as true allows it to be revised” has been useful for those situations.
All of that said, I do agree that there is always the risk of more extensive integration actually leading to incorrect beliefs. In expectation, learning more about the world is going to make you smarter, but there’s always the chance of buying into a crazy theory that makes you dumber and integrating your beliefs to be more consistent with it—or even buying into a correct theory that makes you dumber. But of course, if you don’t try to learn or integrate your models more, you’re not going to have very good models either.
I sometimes find that memories and the beliefs about the world that they power are “stacked” several layers deep. It’s rare to find a memory directly connected to a mistaken ground belief, and it’s more normal that 2, 3, 4, or even 5 memories are all interacting through twists and turns to produce whatever knotted and confused sense of the world I have.