I’m not sure if I’ve had the same global insight as lsusr has, but I feel like I’ve had local experiences taking me more in that direction. My experience has been that the thing that’s being shed is more accurately described as “rationalization” than “desire”.
E.g. in Fabricated Options, Duncan talks about situations where all the options available to people have downsides they don’t like. Some people then insist that there should be an option with only upsides, refusing to accept that there might not be any such thing. If you stop doing that, you lose the desire that reality should be something other than what it is, and then you can actually achieve your desires better, since you see what reality is actually like. This does also require acknowledging that you have to let go of some of your original “get me only the upsides” desires, but those were the kinds of desires that were always impossible to achieve anyway.
You still keep most of your ordinary human desires, though. I’ve also seen various advanced meditation teachers say—and this matches my experience—that your natural personality (which includes all of your desires) starts to shine brighter, since you also lose the belief that “my personality should be something other than what it is”. That doesn’t mean you can’t still work on anger management or whatever, just that you come to see it for what it really is rather than as something you’d want to see it as.
But then also there are various approaches within Buddhism, with some being more actively anti-desire (“renunciation”) than others. So what makes things confusing is that some teachers do say that you should also let go of the things we’d usually call “desires”, conflating those with the rationalization-type desires. Given that lsusr says you understood him perfectly, maybe he subscribes to those schools? That’s unclear to me from his post.
Well-put.
To clarify: I distinguish between desire-craving and preference-likes. Letting go of desire-craving leads to cessation of suffering, but preference-likes remain. I think that you [Kaj Sotala] are using the phrase “ordinary human desires” to refer to what I conceptualize of as non-desire “preference-likes”.
Cool. Yeah, that was what I meant.
I’m trying to combine different explanations (and the fact that obviously some people think this is a positive change) into a single picture. Right now I have this model/hypothesis:
I have many values/wishes/desires that affect the reward system in my brain. Suppose, for the sake of simplicity, I have two: I want one million dollars (money) and I want a dragon in my garage (dragon). Also, suppose these desires have the same strength: I’m indifferent between “money, but no dragon” and “dragon, but no money”.
The probability that I will have money in the future if I work towards this goal is much, much higher than the probability that I will have a dragon if I work towards that goal. So, to guide me towards maximizing the fulfillment of my desires, my reward system should give me a much higher negative reward for the lack of money than for the lack of dragon. But by default my reward system is poorly calibrated, so the negative rewards in these two cases are much closer to each other than they should be. As a result, I work towards money less and towards dragon more, and my expected utility is lower.
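To make this concrete, here is a rough numerical sketch of the hypothesis. The specific probabilities, the rule that effort is allocated in proportion to how strongly each unmet desire stings, and the assumption that the chance of success scales linearly with effort are all made up for illustration:

```python
# Rough sketch of the "miscalibrated reward system" hypothesis.
# All numbers and the effort-allocation rule are illustrative assumptions.

p_money = 0.9     # probability of getting the money if I work towards it
p_dragon = 1e-6   # probability of getting the dragon if I work towards it
value = 1.0       # both desires have the same strength

# A well-calibrated reward system weights each goal by its achievability.
calibrated = {"money": p_money * value, "dragon": p_dragon * value}

# A miscalibrated one penalizes the lack of each thing almost equally.
miscalibrated = {"money": 1.0 * value, "dragon": 0.8 * value}

def effort_split(weights):
    """Allocate effort in proportion to each negative reward."""
    total = sum(weights.values())
    return {goal: w / total for goal, w in weights.items()}

def expected_utility(split):
    # Crude assumption: probability of success scales linearly with effort.
    return split["money"] * p_money * value + split["dragon"] * p_dragon * value

for name, weights in [("calibrated", calibrated), ("miscalibrated", miscalibrated)]:
    split = effort_split(weights)
    print(f"{name}: effort on dragon = {split['dragon']:.2f}, "
          f"expected utility = {expected_utility(split):.2f}")

# calibrated:    effort on dragon = 0.00, expected utility = 0.90
# miscalibrated: effort on dragon = 0.44, expected utility = 0.50
```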
Meditation practices fix this bug by recalibrating the reward system. Since, as a result, the non-fulfillment of some desires ceases to have a non-negligible effect on the output of the reward system, this is sometimes described as “letting go of [some] desires”. But it does not mean that I will not create a dragon in my garage in a Glorious Post-Singularity Transhumanist Future when I have the opportunity to do so.
Does this sound right?
Personally, my understanding is based on what might be a fundamentally different theory of mind. I believe there’s two major optimization algorithms at work.
Optimizer 1 is a real-time world model prediction error minimizer. Think predictive coding.
Optimizer 2 is an operant reinforcement reward system. Optimizer 2 is parasitic on Optimizer 1. The conflict between Optimizer 1 and Optimizer 2 is a mathematical constraint inherent to embedded world optimizers.
That’s my theory of mind. You describe two competing reward systems. But reward systems belong in the domain of Optimizer 2. The way I look at things, meditation (temporarily?) shuts down Optimizer 2, which allows Optimizer 1 to self-optimize unimpeded.
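A very crude way to gesture at the distinction between the two optimizers is with two toy update rules. Everything below (the one-parameter world model, the reward function, the learning rates) is an illustrative assumption rather than a claim about the actual mechanisms, and the “parasitic” relationship between the two isn’t modeled at all:

```python
import random

# Optimizer 1: adjust the world model so its predictions match observations
# (prediction-error minimization, as in predictive coding).
def optimizer1_step(model, observation, lr=0.1):
    prediction_error = observation - model
    return model + lr * prediction_error  # move the model towards the observation

# Optimizer 2: adjust behaviour to get more reward
# (operant reinforcement: keep whatever change was rewarded).
def reward(action):
    return -(action - 1.0) ** 2  # some actions pay off more than others

def optimizer2_step(tendency, noise=0.5):
    tried = tendency + random.gauss(0, noise)  # try a slightly different action
    return tried if reward(tried) > reward(tendency) else tendency

world_model = 0.0      # the model's single parameter: its guess about the world
action_tendency = 0.0  # the behaviour currently being reinforced
true_state = 3.0

for _ in range(200):
    observation = true_state + random.gauss(0, 0.1)
    world_model = optimizer1_step(world_model, observation)
    action_tendency = optimizer2_step(action_tendency)

print("world model:", round(world_model, 2))          # approaches 3.0
print("action tendency:", round(action_tendency, 2))  # climbs towards 1.0
```

One optimizes the accuracy of the model given the world; the other optimizes behaviour given the rewards.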
I don’t have a complete model of what exactly is going on either. My current guess is that there are something like two different layers of motivation in the brain. One calculates expected utilities in a relatively unbiased manner and meditation doesn’t really affect that one much, but then there’s another layer on top of that which notices particularly high-utility (positive or negative) scenarios and gives them disproportionate weight. That second one tends to mess things up and is the one that meditation seems to weaken.
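As a toy illustration of the kind of disproportionate weighting I mean (the “boost any outcome above a threshold” rule below is just a placeholder for whatever the second layer actually does):

```python
# Toy version of the "two layers of motivation" guess above. The threshold
# and boost values are illustrative assumptions, not claims about the brain.

options = {
    # option: list of (probability, utility) outcomes
    "steady job": [(1.0, 5.0)],
    "lottery":    [(0.001, 1000.0), (0.999, -1.0)],
}

def expected_utility(outcomes):
    # Layer one: a relatively unbiased expected-utility calculation.
    return sum(p * u for p, u in outcomes)

def salience_weighted(outcomes, threshold=100.0, boost=20.0):
    # Layer two: give disproportionate weight to unusually large outcomes.
    return sum(p * (u * boost if abs(u) > threshold else u) for p, u in outcomes)

for name, outcomes in options.items():
    print(name,
          "EU:", round(expected_utility(outcomes), 3),
          "with salience layer:", round(salience_weighted(outcomes), 3))

# Layer one correctly prefers the steady job (5.0 vs ~0.001);
# the salience layer makes the lottery look like the better bet (19.0 vs 5.0).
```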
It looks to me like weakening the second thing tends to make one’s decisions purely better, and more likely for the brain to just do the correct expected utility calculations. I acknowledge that this is very weird and implausible-sounding, because why would the brain develop a second layer of motivation that just messes things up?
My strong suspicion at the moment is that it has to do with social strategies. Calculating expected utilities wrong is normally just bad, but it can be beneficial if other agents are modeling you and making decisions based on their models of you. If you end up believing that an actually impossible outcome is possible, you may never be able to actually achieve that outcome, but opponents who see that you are impossible to reason with may still give in, letting you get at least somewhat closer to it than if you’d been reasonable.
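Here is a toy bargaining sketch of why that kind of miscalibration could pay off socially. The game, the payoffs, and the assumption that the opponent offers the least it predicts you will accept are all invented for illustration:

```python
# Two players split a pot; the opponent proposes a split, I accept or reject,
# and rejection gives both of us 0. Purely illustrative numbers.

POT = 10

def opponent_offer(minimum_i_will_accept):
    """The opponent models me and offers the least it predicts I'll accept,
    capped at the whole pot."""
    return min(minimum_i_will_accept, POT)

# Reasonable me: any positive amount beats 0, so I'll accept 1,
# and a modeling opponent therefore offers me exactly 1.
print("reasonable me gets:", opponent_offer(1))  # 1

# Miscalibrated me: I genuinely believe I can get 15 out of a pot of 10
# (an impossible outcome) and reject anything below 9. I never reach 15,
# but an opponent who sees I can't be reasoned with still offers 9
# rather than walk away with 0.
print("stubborn me gets:", opponent_offer(9))    # 9
```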
I have some posts with more speculation about these things here and here.