I don’t have a complete model of what exactly is going on either. My current guess is that there are something like two different layers of motivation in the brain. One calculates expected utilities in a relatively unbiased manner and meditation doesn’t really affect that one much, but then there’s another layer on top of that which notices particularly high-utility (positive or negative) scenarios and gives them disproportionate weight. That second one tends to mess things up and is the one that meditation seems to weaken.
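To make this a bit more concrete, here's a minimal toy sketch (in Python) of the kind of two-layer calculation I have in mind. The specific weighting rule and the `gamma` exponent are purely illustrative assumptions on my part, not claims about actual brain mechanisms:

```python
def expected_utility(outcomes):
    """First layer: a relatively unbiased probability-weighted average."""
    return sum(p * u for p, u in outcomes)

def salience_weighted_utility(outcomes, gamma=2.0):
    """Second layer: particularly high-magnitude (positive or negative)
    scenarios get disproportionate weight. The (1 + |u|)**gamma inflation
    rule and gamma=2.0 are illustrative guesses, not a brain model."""
    weights = [p * (1 + abs(u)) ** gamma for p, u in outcomes]
    total = sum(weights)
    return sum((w / total) * u for w, (_, u) in zip(weights, outcomes))

# A bet with a tiny chance of a catastrophic loss.
lottery = [(0.999, 1.0), (0.001, -100.0)]
print(expected_utility(lottery))           # ~0.9: worth taking
print(salience_weighted_utility(lottery))  # ~-71: the rare disaster dominates
```

The first layer says the bet is mildly positive; the second layer blows the rare catastrophic scenario up until it dominates the decision.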
It looks to me like weakening that second layer makes one's decisions strictly better, and makes it more likely that the brain just does the correct expected utility calculation. I acknowledge that this sounds very weird and implausible: why would the brain develop a second layer of motivation that just messes things up?
My strong suspicion at the moment is that it has to do with social strategies. Calculating expected utilities wrong is normally just bad, but it can be beneficial when other agents are modeling you and making decisions based on their models of you. If you end up believing that an actually impossible outcome is achievable, you will never actually reach it; but opponents who see that you are impossible to reason with may still give in, getting you at least somewhat closer to that outcome than if you'd been reasonable.
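As a toy illustration of why this could pay off: in a simple ultimatum-style split, an opponent who believes you will reject anything below some threshold will concede up to that threshold, as long as conceding still beats the zero payoff of a rejected deal. The game, the pie size, and the assumption that the opponent fully believes your threshold are all illustrative:

```python
def opponent_best_offer(your_threshold, pie=10.0, steps=100):
    """A rational opponent who knows your rejection threshold keeps the
    largest share you will still accept; a rejected deal pays both sides 0."""
    best_offer, best_keep = 0.0, 0.0
    for i in range(steps + 1):
        offer = pie * i / steps
        keep = (pie - offer) if offer >= your_threshold else 0.0
        if keep > best_keep:
            best_offer, best_keep = offer, keep
    return best_offer

print(opponent_best_offer(your_threshold=0.1))  # reasonable you: offered 0.1
print(opponent_best_offer(your_threshold=9.0))  # "unreasonable" you: offered 9.0
```

A visibly unreasonable demand functions here as a commitment device: it extracts most of the pie even though the belief behind it is mistaken.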
I have some posts with more speculation about these things here and here.