Of course, we can’t know for sure. It could be that the interventions actually worked by a different method than they seemed to.
But consider e.g. the first story. Here was a person who started out entirely convinced that the belief in free will was an intrinsically hardwired need that they had. It had had a significant impact on their entire life, to the point of making them suicidally depressed when they couldn’t believe it. I had a theory of how the mind works which made a different prediction, and I only needed to briefly suggest it for them to surface compatible evidence without me needing to make any more leading comments. After that, I only needed to suggest a single intervention which my model predicted would cause a change, and it did, causing a long-term and profound change in the other person.
I say long-term because I do expect it to be a permanent change rather than just a short-term effect. Of course, the first two examples are both from this year—I didn’t ask Sampo when exactly his example happened—so in principle it’s still possible that these will reverse themselves. But that’s not my general experience with these things—rather, these interventions tend to produce lasting change. The longest-term effect I have personal data for is from June 2017; this follow-up from December 2018 still remains a good summary of what that intervention ended up fixing in the long term. (As noted in that follow-up, it’s still possible for some issues to come back in a subtler form, or for some of the issues to also have other causes; but that’s distinct from the original issue coming back in its original strength.)
So it’s possible that my model is mistaken about the exact causality, but that treating the model as if it were true still lets you cause lasting and deep changes in people’s psychology. If my model is wrong, then we need another model that would explain the same observations. Currently I think that the kinds of models that I’ve outlined would explain those observations pretty well while being theoretically plausible, but I’m certainly open to alternative ones.
I don’t think that e.g. just “hearing the right emotional story can produce relief” is a very good alternative theory. I’ve certainly also had experience of superficial emotional stories that sounded compelling for a little while and whose effect then faded out, but over time I’ve learned that a heuristic of “do these effects last for longer than a month” is pretty good for telling those apart from the ones that have a real effect. The permanent ones may also have an effect on things you didn’t even realize were related beforehand—e.g. the things that the person in the first example only realized about it in retrospect—whereas in my experience, the short-term ones mostly just include effects that are obviously and directly derivable from the story.
So some compelling stories seem to produce relatively minor short-term effects while other interventions cause much broader and longer-lasting ones, and just the hypothesis of “emotional stories can be compelling” doesn’t explain why some emotional stories work better than others. Nor would it have predicted that suggesting the specific intervention that I offered would have been particularly useful.
All of that said, I do admit that the third story has more interacting pieces and that the overall evidence for that one is weaker. We can be relatively sure only that telling the client to imagine a different kind of mother was the final piece in resolving the issue; it’s possible that the other inferences about the mother’s beliefs are incorrect. I still wanted to include it, in the spirit of learning soft skills, because I think that many beliefs that affect our behavior aren’t nice and clear-cut ones where you can just isolate a single key belief and be relatively sure of what happened because you can observe the immediate effects. Rather, much of our behavior is embedded in an interacting web of beliefs, like the one I outlined there. Even if the details of that particular story were off, enough of it resonates in my inner simulator that I’m pretty sure that something like that story could be true and often is true. But for that one I can’t offer a more convincing argument than “load it up in your own inner sim and see whether it resonates”.