Craving, suffering, and predictive processing (three characteristics series)
This is the third post of the “a non-mystical explanation of insight meditation and the three characteristics of existence” series. I originally intended this post to more closely connect no-self and unsatisfactoriness, but then decided to focus on unsatisfactoriness in this post and relate it to no-self in the next one.
Unsatisfactoriness
In the previous post, I discussed some of the ways that the mind seems to construct a notion of a self. In this post, I will talk about a specific form of motivation, which Buddhism commonly refers to as craving (taṇhā in the original Pali). Some discussions distinguish between craving (in the sense of wanting positive things) and aversion (wanting to avoid negative things); this article uses the definition where both desire and aversion are considered subtypes of craving.
My model is that craving is generated by a particular set of motivational subsystems within the brain. Craving is not the only form of motivation that a person has, but it normally tends to be the loudest and most dominant. As a form of motivation, craving has some advantages:
People tend to experience a strong craving to pursue positive states and avoid negative states. If they had less craving, they might not do this with equal zeal.
To some extent, craving looks to me like a mechanism that shifts behaviors from exploration to exploitation.
In an earlier post, Building up to an Internal Family Systems model, I suggested that the human mind might incorporate mechanisms that acted as priority overrides to avoid repeating particular catastrophic events. Craving feels like a major component of how this is implemented in the mind.
Craving tends to be automatic and visceral. A strong craving to eat when hungry may cause a person to get food when they need it, even if they did not intellectually understand the need to eat.
At the same time, craving also has a number of disadvantages:
Craving superficially looks like it cares about outcomes. However, it actually cares about positive or negative feelings (valence). This can lead to behaviors that are akin to wireheading in that they suppress the unpleasant feeling while doing nothing about the problem. If thinking about death makes you feel unpleasant and going to the doctor reminds you of your mortality, you may avoid doctors—even if this actually increases your risk of dying.
Craving narrows your perception, making you only pay attention to things which seem immediately relevant for your craving. For example, if you have a craving for sex and go to a party with the goal of finding someone to sleep with, you may see everyone only in terms of “will sleep with me” or “will not sleep with me”. This may not be the best possible way of classifying everyone you meet.
Strong craving may cause premature exploitation. If you have a strong craving to achieve a particular goal, you may not want to do anything that looks like moving away from it, even if that would actually help you achieve it better. For example, if you intensely crave a feeling of accomplishment, you may get stuck playing video games that make you feel like you are accomplishing something, even if there were something else you could do that would be more fulfilling in the long term.
Multiple conflicting cravings may cause you to thrash around in an unsuccessful attempt to fulfill all of them. If you have a craving to get your toothache fixed, but also a craving to avoid dentists, you may put off the dentist visit even as you continue to suffer from your toothache.
Craving seems to act in part by creating self-fulfilling prophecies; making you strongly believe that you are going to achieve something, so as to cause you to do it. The stronger the craving, the stronger the false beliefs injected into your consciousness. This may warp your reasoning in all kinds of ways: updating to believe an unpleasant fact may subjectively feel like you are allowing that fact to become true by believing in it, incentivizing you to come up with ways to avoid believing in it.
Finally, although craving is often motivated by a desire to avoid unsatisfactory experiences, it is actually the very thing that causes dissatisfaction in the first place. Craving assumes that negative feelings are intrinsically unpleasant, when in reality they only become unpleasant when craving resists them.
Given all of these disadvantages, it may be a good idea to try to shift one’s motivation to be more driven by subsystems that are not motivated by craving. It seems to me that everything that can be accomplished via craving can in principle be accomplished by non-craving-based motivation as well.
Fortunately, there are several ways of achieving this. For one, a craving for some outcome X tends to implicitly involve at least two assumptions:
achieving X is necessary for being happy or avoiding suffering
one cannot achieve X except by having a craving for it
Both of these assumptions are false, but subsystems associated with craving have a built-in bias to selectively sample evidence which supports these assumptions, making them frequently feel compelling. Still, it is possible to give the brain evidence which lets it know that these assumptions are wrong: that it is possible to achieve X without having craving for it, and that one can feel good regardless of achieving X.
Predictive processing and binocular rivalry
I find that a promising way of looking at unsatisfactoriness and craving and their impact on decision-making comes from the predictive processing (PP) model of the brain. My claim is not that craving works exactly like this, but something roughly like this seems like a promising analogy.
Good introductions to PP include this book review as well as the actual book in question… but for the purposes of this discussion, you really only need to know two things:
According to PP, the brain is constantly attempting to find a model of the world (or hypothesis) that would both explain and predict the incoming sensory data. For example, if I upset you, my brain might predict that you are going to yell at me next. If the next thing that I hear is you yelling at me, then the prediction and the data match, and my brain considers its hypothesis validated. If you do not yell at me, then the predicted and experienced sense data conflict, sending off an error signal to force a revision to the model.
Besides changing the model, another way in which the brain can react to reality not matching the prediction is by changing reality. For example, my brain might predict that I am going to type a particular sentence, and then fulfill that prediction by moving my fingers so as to write that sentence. PP goes so far as to claim that this is the mechanism behind all of our actions: a part of your brain predicts that you are going to do something, and then you do it so as to fulfill the prediction.
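To make those two moves concrete, here is a minimal toy sketch in Python. This is my own illustration, not the actual PP formalism (which involves hierarchical probabilistic models); a “model” and the “world” are each just a single number here. The point is only that the same error signal can be quieted either by updating the model or by acting on the world:

```python
def prediction_error(predicted, observed):
    return observed - predicted

def update_model(predicted, observed, learning_rate=0.5):
    # Move 1, perception: revise the model toward the data.
    return predicted + learning_rate * prediction_error(predicted, observed)

def act_on_world(predicted, observed, effort=0.5):
    # Move 2, action: change the world toward the prediction.
    return observed - effort * prediction_error(predicted, observed)

model, world = 0.0, 1.0
print("perceiving:", update_model(model, world))  # model moves toward 1.0
print("acting:    ", act_on_world(model, world))  # world moves toward 0.0
# Both moves shrink the same mismatch; on the PP story, which move gets
# used is what distinguishes perception from action.
```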
Next I am going to say a few words about a phenomenon called binocular rivalry and how it is interpreted within the PP paradigm. I promise that this is going to be relevant for the topic of craving and suffering in a bit, so please stay with me.
Binocular rivalry, first discovered in 1593 and extensively studied since then, is what happens when your left eye is shown one picture (e.g. an image of Isaac Newton) and your right eye is shown another (e.g. an image of a house). People report that their experience keeps alternating between seeing Isaac Newton and seeing a house. They might also see a brief mashup of the two, but such Newton-houses are short-lived and quickly fall apart before settling into a stable image of either Newton or a house.
Image credit: Schwartz et al. (2012), Multistability in perception: binding sensory modalities, an overview. Philosophical Transactions of the Royal Society B, 367, 896-905.
Predictive processing explains what’s happening as follows. The brain is trying to form a stable hypothesis of what exactly the image data that the eyes are sending represents: is it seeing Newton, or is it seeing a house? Sometimes the brain briefly considers the hybrid hypothesis of a Newton-house mashup, but this is quickly rejected: faces and houses do not exist as occupying the same place at the same scale at the same time, so this idea is clearly nonsensical. (At least, nonsensical outside highly unnatural and contrived experimental setups that psychologists subject people to.)
Your conscious experience alternating between the two images reflects the brain switching between the hypotheses of “this is Isaac Newton” and “this is a house”; the currently-winning hypothesis is simply what you experience reality as.
Suppose that the brain ends up settling on the hypothesis of “I am seeing Isaac Newton”; this matches the input from the Newton-seeing eye. As a result, there is no error signal that would arise from a mismatch between the hypothesis and the Newton-seeing eye’s input. For a moment, the brain is satisfied that it has found a workable answer.
However, if one really was seeing Isaac Newton, then the other eye should not keep sending an image of a house. The hypothesis and the house-seeing eye’s input do have a mismatch, kicking off a strong error signal which lowers the brain’s confidence in the hypothesis of “I am seeing Isaac Newton”.
The brain goes looking for a hypothesis which would better satisfy the strong error signal… and then finds that the hypothesis of “I am seeing a house” serves to entirely quiet the error signal from the house-seeing eye. Success?
But even as the brain settles on the hypothesis of “I am seeing a house”, this then contradicts the input coming from the Newton-seeing eye.
The brain is again momentarily satisfied, before the incoming error signal from the hypothesis/Newton-eye mismatch drives down the probability of the “I am seeing a house” hypothesis, causing the brain to eventually go back to the “I am seeing Isaac Newton” hypothesis… and then back to seeing a house, and then to seeing a Newton, and...
One way of phrasing this is that there are two subsystems, each of which is transmitting a particular set of constraints (one about seeing Newton, one about seeing a house). The brain is then trying and failing to find a hypothesis which would fulfill both sets of constraints, while also respecting everything else that it knows about the world.
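For concreteness, here is a toy Python simulation of that dynamic (purely illustrative: real models of rivalry involve neural adaptation and far more detail). Whichever hypothesis currently wins explains one eye’s input, while the other eye’s unexplained input generates an error signal that erodes the winner’s confidence until the rival takes over:

```python
hypotheses = ("Isaac Newton", "a house")
current = "Isaac Newton"
confidence = 1.0

for t in range(12):
    print(f"t={t:2d}  experiencing: {current:12s}  confidence={confidence:.1f}")
    # The eye whose input the current hypothesis does NOT explain keeps
    # sending mismatching data, eroding confidence in the hypothesis.
    confidence -= 0.3
    if confidence <= 0:
        # The error overwhelms the hypothesis and the rival takes over;
        # no hypothesis satisfies both eyes' constraints, so this repeats.
        current = hypotheses[1] if current == hypotheses[0] else hypotheses[0]
        confidence = 1.0
```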
As I will explain next, my feeling is that something similar is going on with unsatisfactoriness. Craving creates constraints about what the world should be like, and the brain tries to find an action which would fulfill all of the constraints, while also taking into account everything else that it knows about the world. Suffering/unsatisfactoriness emerges when all of the constraints are impossible to fulfill, either because achieving them takes time, or because the brain is unable to find any scenario that could fulfill all of them even in theory.
Predictive processing and psychological suffering
There are two broad categories of suffering: mental and physical discomfort. Let’s start with the case of psychological suffering, as it seems most directly analogous to what we just covered.
Let’s suppose that I have broken an important promise that I have made to a friend. I feel guilty about this, and want to confess what I have done. We might say that I have a craving to avoid the feeling of guilt, and the associated craving subsystem sends a prediction to my consciousness: I will stop feeling guilty.
In the previous discussion, an inference mechanism in the brain was looking for a hypothesis that would satisfy the constraints imposed by the sensory data. In this case, the same thing is happening, but
the hypothesis that it is looking for is a possible action that I could take, that would lead to the constraint being fulfilled
the sensory data is not actually coming from the senses, but is internally generated by the craving and represents the outcome that the craving subsystem would like to see realized
My brain searches for a possible world that would fulfill the provided constraints, and comes up with the idea of just admitting the truth of what I have done. It predicts that if I were to do this, I would stop feeling guilty over not admitting my broken promise. This satisfies the constraint of not feeling guilty.
However, as my brain further predicts what it expects to happen as a consequence, it notes that my friend will probably get quite angry. This triggers another kind of craving: to not experience the feeling of getting yelled at. This generates its own goal/prediction: that nobody will be angry with me. This acts as a further constraint for the plan that the brain needs to find.
As the constraint of “nobody will be angry at me” seems incompatible with the plan of “I will admit the truth”, this generates an error signal, driving down the probability of this plan. My brain abandons this plan, and then considers the alternative plan of “I will just stay quiet and not say anything”. This matches the constraint of “nobody will be angry at me” quite well, driving down the error signal from that particular plan/constraint mismatch… but then, if I don’t say anything, I will continue feeling guilty.
The mismatch with the constraint of “I will stop feeling guilty” drives up the error signal, causing the “I will just stay quiet” plan to be abandoned. At worst, my mind may find it impossible to find any plan which would fulfill both constraints, keeping me in an endless loop of alternating between two unviable scenarios.
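A cartoon of that deadlocked search, with hypothetical plans and predicted outcomes (this is a sketch of the idea, not a claim about neural implementation):

```python
# Predicted consequences of each candidate plan (hypothetical values).
plans = {
    "admit the truth": {"I stop feeling guilty": True, "nobody is angry at me": False},
    "stay quiet":      {"I stop feeling guilty": False, "nobody is angry at me": True},
}
constraints = ("I stop feeling guilty", "nobody is angry at me")

for plan, predicted_outcome in plans.items():
    error = sum(1 for c in constraints if not predicted_outcome[c])
    print(f"{plan!r}: violates {error} constraint(s)")

# Both plans score error = 1: no candidate satisfies every craving-injected
# constraint, so neither error signal can be silenced and the search loops.
```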
There are some interesting aspects to the phenomenology of such a situation, which feel like they fit the PP model quite well. In particular, it may feel like if I just focus on a particular craving hard enough, thinking about my desired outcome will make it true.
Recall that under the PP framework, goals happen because a part of the brain assumes that they will happen, after which it changes reality to make that belief true. So focusing really hard on a craving for X makes it feel like X will become true, because the craving is literally rewriting an aspect of my subjective reality to make me think that X will become true.
When I focus hard on the craving, I am temporarily guiding my attention away from the parts of my mind which are pointing out the obstacles in the way of X coming true. That is, those parts have less of a chance to incorporate their constraints into the plan that my brain is trying to develop. This momentarily weakens the error signals pushing the brain away from this plan, making it seem more plausible that the desired outcome will in fact become real.
Conversely, letting go of this craving may feel like it is literally making the undesired outcome more real, rather than like I am coming more to terms with reality. This is most obvious in cases where one has a craving for an outcome that is known to be impossible, such as in the case of grieving a friend’s death. Even after it is certain that someone is dead, there may still be persistent thoughts of if only I had done X, with an implicit additional flavor of if I just want to have done X really hard, things will change, and I can’t stop focusing on this possibility because my friend needs to be alive.
In this form, craving may lead to all kinds of rationalization and biased reasoning: a part of your mind is literally making you believe that X is true, because it wants you to find a strategy where X is true. This hallucinated belief may constrain all of your plans and models about the world in the same sense as getting direct sensory evidence about X being true would constrain your brain’s models. For example, if I have a very strong urge to believe that someone is interested in me, then this may cause me to interpret any of his words and expressions in a way compatible with this belief, regardless of how implausible and far-reaching a distortion this requires.
The case of physical pain
Similar principles apply to the case of physical pain.
We should first note that pain does not necessarily need to be aversive: for example, people may enjoy the pain of exercise, hot spices or sexual masochism. Morphine may also have an effect where people report that they still experience the pain but no longer mind it.
And, relevant for our topic, people practicing meditation find that by shifting their attention towards pain, it can become less aversive. The meditation teacher Shinzen Young writes that
… pain is one thing, and resistance to the pain is something else, and when the two come together you have an experience of suffering, that is to say, ‘suffering equals pain multiplied by resistance.’ You’ll be able to see that’s true not only for physical pain, but also for emotional pain and it’s true not only for little pains but also for big pains. It’s true for every kind of pain no matter how big, how small, or what causes it. Whenever there is resistance there is suffering. As soon as you can see that, you gain an insight into the nature of “pain as a problem” and as soon as you gain that insight, you’ll begin to have some freedom. You come to realize that as long as we are alive we can’t avoid pain. It’s built into our nervous system. But we can certainly learn to experience pain without it being a problem. (Young, 1994)
What does it mean to say that resisting pain creates suffering?
In the discussion about binocular rivalry, we might have said that when the mind settled on a hypothesis of seeing Isaac Newton, this hypothesis was resisted by the sensory data coming from the house-seeing eye. The mind would have settled on the hypothesis of “I am seeing Isaac Newton”, if not for that resistance. Likewise, in the preceding discussion, the decision to admit the truth was resisted by the desire to not get yelled at.
Suppose that you have a sore muscle, which hurts whenever you put weight on it. Like sensory data coming from your eyes, this constrains the possible interpretations of what you might be experiencing: your brain might settle on the hypothesis of “I am feeling pain”.
But the experience of this hypothesis then triggers a resistance to that pain: a craving subsystem wired to detect pain and resist it by projecting a form of internally-generated sense data, effectively claiming that you are not in pain. There are now again two incompatible streams of data that need to be reconciled, one saying that you are in pain, and another which says that you are not.
In the case of binocular rivalry, both of the streams were generated by sensory information. In the discussion about psychological suffering, both of the streams were generated by craving. In this case, craving generates one of the streams and sensory information generates the other.
On the left, a persistent pain signal is strong enough to dominate consciousness. On the right, a craving for not being in pain attempts to constrain consciousness so that it doesn’t include the pain.
Now if you stop putting weight on the sore muscle, the pain goes away, fulfilling the prediction of “I am not in pain”. As soon as your brain figures this out, your motor cortex can incorporate the craving-generated constraint of “I will not be in pain” into its planning. It generates different plans of how to move your body, and whenever it predicts that one of them would violate the constraint of “I will not be in pain”, it will revise its plan. The end result is that you end up moving in ways that avoid putting weight on your sore muscle. If you miscalculate, the resulting pain will cause a rapid error signal that causes you to adjust your movement again.
What if the pain is more persistent, and bothers you no matter how much you try to avoid moving? Or if the circumstances force you to put weight on the sore muscle?
In that case, the brain will continue looking for a possible hypothesis that would fulfill the constraint of “I am not in pain”. For example, maybe you have previously taken painkillers that have helped with your pain. In that case, your mind may seize upon the hypothesis that “by taking painkillers, my pain will cease”.
As your mind predicts the likely consequences of taking painkillers, it notices that in this simulation, the constraint of “I am not in pain” gets fulfilled, driving down the error signal between the hypothesis and the “I am not in pain” constraint. However, if the brain could suppress the craving-for-pain-relief merely by imagining a scenario where the pain was gone, then it would never need to take any actions: it could just hallucinate pleasant states. Helping keep it anchored into reality is the fact that simply imagining the painkillers has not done anything to the pain signal itself: the imagined state does not match your actual sense data. There is still an error signal generated between the mismatch of the imagined “I have taken painkillers and am free of pain” scenario, and the fact that the pain is not gone yet.
Your brain imagines a possible experience: taking painkillers and being free of pain. This imagined scenario fulfills the constraint of “I have no pain”. However, it does not fulfill the constraint of actually matching your sense data: you have not yet taken painkillers and are still in pain.
Fortunately, if painkillers are actually available, your mind is not locked into a state where the two constraints of “I’m in pain” and “I’m not in pain” remain equally impossible to achieve. It can take actions—such as making you walk towards the medicine cabinet—that get you closer towards being able to fulfill both of these constraints.
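Here is a minimal sketch of why imagination alone is not enough (toy numbers, invented for illustration): the craving-injected constraint is ultimately scored against the actual pain signal, which only changes once an action changes the world.

```python
actual_pain = 1.0    # incoming sense data
imagined_pain = 0.0  # the simulated "I took painkillers" scenario

def constraint_error(pain_signal, target=0.0):
    # Mismatch between the constraint "I am not in pain" and a pain signal.
    return abs(pain_signal - target)

print("imagined scenario:", constraint_error(imagined_pain))  # 0.0: simulation satisfies the constraint
print("actual sense data:", constraint_error(actual_pain))    # 1.0: the error persists regardless

# Only acting (walking to the cabinet, taking the painkiller) changes the
# world, and thereby the sense data that the constraint is checked against:
actual_pain = 0.1
print("after acting:     ", constraint_error(actual_pain))    # error mostly gone
```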
There are studies suggesting that physical pain and psychological pain share similar neural mechanisms [citation]. And in meditation, one may notice that psychological discomfort and suffering involve avoiding unpleasant sensations in the same way that physical pain does; the same mechanism has been recruited for more abstract planning.
When the brain predicts that a particular experience would produce an unpleasant sensation, craving resists that prediction and tries to find another way. Similarly, if the brain predicts that something will not produce a pleasant sensation, craving may also resist that aspect of reality.
Now, this process as described has a structural equivalence to binocular rivalry, but as far as I know, binocular rivalry does not involve any particular discomfort. Suffering obviously does.
Being in pain is generally bad: it is usually better to try to avoid ending up in painful states, as well as try to get out of painful states once you are in them. This is also true for other states, such as hunger, that do not necessarily feel painful, but still have a negative emotional tone. Suppose that whenever craving generates a self-fulfilling prediction which resists your direct sensory experience, this generates a signal we might call “unsatisfactoriness”.
The stronger the conflict between the experience and the craving, the stronger the unsatisfactoriness—so that a mild pain that is easy to ignore only causes a little unsatisfactoriness, and an excruciating pain that generates a strong resistance causes immense suffering. The brain is then wired to use this unsatisfactoriness as a training signal, attempting to avoid situations that have previously included high levels of it, and to keep looking for ways out if it currently has a lot of it.
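Read as a training signal, Young’s “suffering equals pain multiplied by resistance” could be cartooned like this (my gloss, with made-up numbers):

```python
def unsatisfactoriness(pain, resistance):
    # Both inputs in [0, 1]; resistance is how strongly craving pushes its
    # self-fulfilling "I am not in pain" prediction against the sense data.
    return pain * resistance

for pain, resistance in [(0.2, 0.1), (0.2, 0.9), (0.9, 0.0), (0.9, 0.9)]:
    print(f"pain={pain}, resistance={resistance} -> "
          f"suffering={unsatisfactoriness(pain, resistance):.2f}")

# Note the (0.9, 0.0) case: even intense pain produces no suffering when
# nothing resists it, matching the equanimity reports quoted earlier.
```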
It is also worth noting what it means for you to be paralyzed by two strong, mutually opposing cravings. Consider again the situation where I am torn between admitting the truth to my friend, and staying quiet. We might think that this is a situation where the overall system is uncertain of the correct course of action: some subsystems are trying to force the action of confronting the situation, others are trying to force the action of avoiding it. Both courses of action are predicted to lead to some kind of loss.
In general, it is a bad thing if a system ends up in a situation where it has to choose between two different kinds of losses, and has high internal uncertainty of the right action. A system should avoid such dilemmas, either by avoiding the situations themselves or by finding a way to reconcile the conflicting priorities.
Craving-based and non-craving-based motivation
What I have written so far might be taken to suggest that craving is a requirement for all action and planning. However, the Buddhist claim is that craving is actually just one of at least two different motivational systems in the brain. Given that neuroscience suggests the existence of at least three different motivational systems, this should not seem particularly implausible.
Let’s take another look at the types of processes related to binocular rivalry versus craving.
Craving acts by actively introducing false beliefs into one’s reasoning. If craving could just do this completely uninhibited, rewriting all experience to match one’s desires, nobody would ever do anything: they would just sit still, enjoying a craving-driven hallucination of a world where everything was perfect.
In contrast, in the case of binocular rivalry, no system is feeding the reasoning process any false beliefs: all the constraints emerge directly from the sense data and previous life-experience. To the extent that the system can be said to have a preference over either the “I am seeing a house” or the “I am seeing Isaac Newton” hypothesis, it is just “if seeing a house is the most likely hypothesis, then I prefer to see a house; if seeing Newton is the most likely hypothesis, then I prefer to see Newton”. The computation does not have an intrinsic attachment to any particular outcome, nor will it hallucinate a particular experience if it has no good reason to.
Likewise, it seems like there are modes of doing and being which are similar in the respect that one is focused on process rather than outcome: taking whatever actions are best-suited for the situation at hand, regardless of what their outcome might be. In these situations, little unsatisfactoriness seems to be present.
In an earlier post, I discussed a proposal where an autonomously acting robot has two decision-making systems. The first system just figures out whatever actions would maximize its rewards and tries to take those actions. The second “Blocker” system tries to predict whether or not a human overseer would approve of any given action, and prevents the first system from doing anything that would be disapproved of. We then have two evaluation systems: “what would bring the maximum reward” (running on a lower priority) and “would a human overseer approve of a proposed action” (taking precedence in case of a disagreement).
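A minimal sketch of that two-system arrangement (the action names and numbers here are hypothetical):

```python
expected_reward   = {"risky plan": 10.0, "safe plan": 4.0}
overseer_approves = {"risky plan": False, "safe plan": True}

def choose_action(actions):
    # System 1: rank candidate actions purely by expected reward.
    for action in sorted(actions, key=expected_reward.get, reverse=True):
        # System 2 ("Blocker"): veto anything the overseer would reject.
        # Its judgment takes precedence over the reward ranking.
        if overseer_approves[action]:
            return action
    return None  # every candidate was vetoed

print(choose_action(list(expected_reward)))  # -> "safe plan"
```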
It seems to me that there is something similar going on with craving. There are processes which are neutrally just trying to figure out the best action; and when those processes hit upon particularly good or bad outcomes, craving is formed in an attempt to force the system into repeating or avoiding those outcomes in the future.
Suppose that you are in a situation where the best possible course of action only has a 10% chance of getting you through alive. If you are in a non-craving-driven state, you may focus on getting at least that 10% chance together, since that’s the best that you can do.
In contrast, the kind of behavior that is typical for craving is realizing that you have a significant chance of dying, deciding that this thought is completely unacceptable, and refusing to go on before you have an approach where the thought of death isn’t so stark.
Both systems have their upsides and downsides. If it is true that a 10% chance of survival really is the best that you can do, then you should clearly just focus on getting the probability even that high. The craving which causes trouble by thrashing around is only going to make things worse. On the other hand, maybe this estimate is flawed and you could achieve a higher probability of survival by doing something else. In that case, the craving absolutely refusing to go on until you have figured out something better might be the right action.
There is also another major difference, in that craving does not really care about outcomes. Rather, it cares about attaining positive feelings and avoiding negative ones. In the case of avoiding death, craving-oriented systems are primarily reacting to the thought of death… which may make them reject even plans which would reduce the risk of death, if those plans involved needing to think about death too much.
This becomes particularly obvious in the case of things like going to the dentist in order to have an operation you know will be unpleasant. You may find yourself highly averse to going, as you crave the comfort of not needing to suffer from the unpleasantness. At the same time, you also know that the operation will benefit you in the long term: any unpleasantness will just be a passing state of mind, rather than permanent damage. But avoiding unpleasantness—including the very thought of experiencing something unpleasant—is just what craving is about.
In contrast, if you are in a state of equanimity with little craving, you still recognize the thoughts of going to the dentist as having negative valence, but this negative valence does not bother you, because you do not have a craving to avoid it. You can choose whatever option seems best, regardless of what kind of content this ends up producing in your consciousness.
Of course, choosing correctly requires you to actually know what is best. Expert meditators have been known to sometimes ignore extreme physical pain that should have caused them to seek medical aid. And they probably would have sought help, if not for their ability to drop their resistance to pain and experience it with extreme equanimity.
Negative-valence states tend to correlate with states which are bad for the achievement of our goals. That is the reason why we are wired to avoid them. But the correlation is only partial, so if you focus too much on avoiding unpleasantness, you are falling victim to Goodhart’s Law: optimizing a measure so much that you sacrifice the goals that the measure was supposed to track. Equanimity gives you the ability to ignore your consciously experienced suffering, so you don’t need to pay additional mental costs for taking actions which further your goals. This can be useful, if you are strategic about actually achieving your goals.
But while Goodharting on a measure is a failure mode, so is ignoring the measure entirely. Unpleasantness does still correlate with things that make it harder to realize your values, and the need to avoid displeasure normally operates as an automatic feedback mechanism. It is possible to have high equanimity that weakens this mechanism without being smart about it, while doing nothing to develop alternative mechanisms. In that case you are just trading Goodhart’s Law for the opposite failure mode.
Some other disadvantages of craving
At the beginning of this post, I mentioned a few other disadvantages of craving which I have not yet discussed explicitly. Let’s take a quick look at those.
Craving narrows your perception, making you only pay attention to things that seem immediately relevant for your craving.
In predictive processing, attention is conceptualized as giving increased weighting to those features of the sensory data that seem most useful for making successful predictions about the task at hand. If you have strong craving to achieve a particular outcome, your mind will focus on those aspects of the sensory data that seem useful for realizing your craving.
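In PP terms, this could be cartooned as precision-weighting: craving inflates the weights on goal-relevant prediction errors and deflates everything else. A toy illustration with invented numbers:

```python
# Raw prediction errors for different features of the party scene.
errors = {"possible partner": 0.5, "interesting conversation": 0.5,
          "friend in distress": 0.5}

neutral_weights = {feature: 1.0 for feature in errors}
craving_weights = {"possible partner": 3.0, "interesting conversation": 0.2,
                   "friend in distress": 0.2}

def weighted_errors(weights):
    # Attention as precision-weighting: only scaled-up errors drive processing.
    return {feature: errors[feature] * weights[feature] for feature in errors}

print(weighted_errors(neutral_weights))  # everything gets processed evenly
print(weighted_errors(craving_weights))  # only the goal-relevant feature
                                         # remains loud enough to notice
```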
Strong craving may cause premature exploitation. If you have a strong craving to achieve a particular goal, you may not want to do anything that looks like moving away from it, even if that would actually help you achieve it better.
Suppose that you have a strong craving to experience a feeling of accomplishment: this means that the craving is strongly projecting a constraint of “I will feel accomplished” into your planning, causing an error signal if you consider any plan which does not fulfill the constraint. If you are thinking about a multistep plan which will take time before you feel accomplished, it will start out with you not feeling accomplished. This contradicts the constraint of “I will feel accomplished”, causing that plan to be rejected in favor of ones that bring you even some accomplishment right away.
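A sketch of how such a per-step constraint can flip a plan ranking (toy numbers, my own construction, not a claim about the brain’s actual scoring):

```python
plans = {
    "play video games":   [0.6, 0.6, 0.6],  # feels accomplished right away
    "learn a real skill": [0.0, 0.0, 2.5],  # larger payoff, but only later
}

def craving_score(steps, threshold=0.5, penalty=1.0):
    # Every step that fails the "I will feel accomplished" constraint
    # generates an error signal that counts against the plan.
    violations = sum(1 for step in steps if step < threshold)
    return sum(steps) - penalty * violations

for name, steps in plans.items():
    print(f"{name}: raw total={sum(steps):.1f}, "
          f"craving score={craving_score(steps):.1f}")

# The skill plan wins on raw total (2.5 vs 1.8), but the per-step constraint
# flips the ranking toward the immediate-gratification plan (1.8 vs 0.5).
```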
Craving and suffering
We might summarize the unsatisfactoriness-related parts of the above as follows:
Craving tries to get us into pleasant states of consciousness.
But pleasant states of consciousness are those without craving.
Thus, there are subsystems which are trying to get us into pleasant states of consciousness by creating constant craving, which is the exact opposite of a pleasant state.
We can somewhat rephrase this as:
The default state of human psychology involves a degree of almost constant dissatisfaction with one’s state of consciousness.
This dissatisfaction is created by the craving.
The dissatisfaction can be ended by eliminating craving.
… which, if correct, might be interpreted to roughly equal the first three of Buddhism’s Four Noble Truths: the fourth is “Buddhism’s Noble Eightfold Path is a way to end craving”.
A more rationalist framing might be that the craving is essentially acting in a way that looks similar to wireheading: pursuing pleasure and happiness even if that sacrifices your ability to impact the world. Reducing the influence of the craving makes your motivations less driven by wireheading-like impulses, and more able to see the world clearly even if it is painful. Thus, reducing craving may be valuable even if one does not care about suffering less.
This gives rise to the question—how exactly does one reduce craving? And what does all of this have to do with the self, again?
We’ll get back to those questions in the next post.
This is the third post of the “a non-mystical explanation of insight meditation and the three characteristics of existence” series. The next post in the series is “From self to craving”.
One frame that’s been useful for me is explicitly noticing how different parts have different time horizons they are sampling over, and that that creates a sort of implicit tension since they are paid in different rewards but are competing for the same motivational system.
I like this version of Predictive Processing much better than the usual, in that you explicitly posit that warping beliefs toward success is only ONE of several motivation systems. I find this much more plausible than using it as the grand unifying theory.
That said, isn’t the observation that binocular rivalry doesn’t create suffering a pretty big point against the theory as you’ve described it?
Side note, I don’t experience the alternating images you described. I see both things superimposed, something like if you averaged the bitmaps together. Although that’s not /quite/ an accurate description. I attribute this to playing with crossing my eyes a lot at a young age, although the causality could be the other way. There’s a lot of variance in how people experience their visual field, you’ll find, if you ask people enough detailed questions about it. (Same with all sorts of aspects of cognition. Practically all cognitive studies of this kind focus on the typical response more than the variation, giving a false impression of unity if you only read summaries. I suspect a lot of the cognitive variation correlates with personality type (i.e. OCEAN).)
It does. I think that I’ve figured out a better explanation since writing this essay, but I’ve yet to write it up in a satisfying form...
Huh, that’s an interesting datapoint!
It seems like if you have to choose between bad options, the healthy thing is to declare that all your options are bad, and take the least bad one. This sometimes feels like “becoming resigned to your fate” maybe? The unhealthy thing is to fight against this, and not accept reality.
Why is the latter so tempting? I think it comes from the Temporal Difference Learning algorithm used by the brain’s reward system. I think the TD learning algorithm attaches a very strong negative reward to the moment where you start believing that your predicted reward is a lot lower than what you had thought it would be before. So that would create an exceptionally strong motivation to not accept that, even if it’s true.
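A toy version of that point (my sketch, not a claim about the brain’s exact algorithm): the standard TD error is r + γ·V(s′) − V(s), and a sharp downward revision of V makes it spike negative even though nothing bad has actually happened yet.

```python
gamma = 0.9
V_old_belief = 10.0  # what you thought your future prospects were worth
V_new_belief = 2.0   # what accepting reality says they are worth
reward_now = 0.0     # nothing bad has actually happened yet

# Standard TD error: delta = r + gamma * V(s') - V(s)
td_error = reward_now + gamma * V_new_belief - V_old_belief
print(td_error)  # -8.2: a large punishment signal for the belief update itself
```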
This ties into my other comment that maybe craving is fundamentally the same as other motivations, but stronger, and in particular, so strong that it screws up our ability to think straight.
After reading this and lukeprog’s post you referenced, I’m still not convinced that there is fundamentally more than one motivational system—although I don’t have high confidence and still want to chase down the references.
(Well, I see a distinction between actions not initiated by the neocortex, like flinching away from a projectile, versus everything else—see here—but that’s not what we’re talking about here.)
It seems to me that what you call “craving” is what I would call “an unhealthily strong motivation”. The background picture in my head is this, where wishful thinking is a failure mode built into the deepest foundations of our thoughts. Wishful thinking stays under control mainly because the force of “What we can imagine and expect is constrained by past experience and world-knowledge” can usually defeat the force of Wishful thinking. But if we want something hard enough, it can break through those shackles, so that, for example, it doesn’t get immediately suppressed even if our better judgment declares that it cannot work.
Like, take your dentist example:
The thought of being at the dentist is aversive.
The thought of having clean healthy teeth is attractive.
We make the decision by weighing these against each other, I think. You are categorizing the former as a craving and the latter as motivation-that-is-not-craving (right?), but they seem like fundamentally the same type of thing to me. (After all, we can weigh them against each other.) It seems like the difference is that the former is exceptionally strong—so strong that it prevents us from thinking straight about it. The latter is a normal mild attraction, which is healthy and unproblematic. I see a continuum between the two.
(If this is right, it doesn’t undermine the idea that cravings exist and that we should avoid them. I still believe that. I’m just suggesting that maybe craving vs motivations-that-are-not-craving is a difference of degree not kind.)
I dunno, I’m just spitballing here :-D
I mostly used examples of aversion in the post, but to be clear, both desire and aversion can be forms of craving. As I noted in another comment, basically any goal can be either craving-based, non-craving-based, or (typically) a mixture of both.
Possible; subjectively they feel like differences in kind, but of course subjective experience is not strong evidence for how something is implemented neurally. Large enough quantitative differences can produce effects that feel like qualitative differences.
I wonder about the connection to the referenced motivational systems; based on a superficial description (e.g. the below excerpt from Surfing Uncertainty), it kinda sounds like the model-free motivational system in neuroscience could be craving, and the model-based system non-craving. (Or maybe not, since it’s suggested that model-free would involve more bottom-up influences, which sounds contrary to craving; I’m confused by that.) That discussion of how the brain learns which system to use in which situation, would be compatible with the model where one can gradually unlearn craving using various methods (I’ll get to that in a later post). But I would need to look into this more.
Yeah, I haven’t read any of these references, but I’ll elaborate on why I’m currently very skeptical that “model-free” vs “model-based” is a fundamental difference.
I’ll start with an example unrelated to motivation, to take it one step at a time.
Imagine that, every few hours, your whole field of vision turns bright blue for a couple seconds, then turns yellow, then goes back to normal. You have no idea why. But pretty soon, every time your field of vision turns blue, you’ll start expecting it to then turn yellow within a couple seconds. This expectation is completely divorced from everything else you know, since you have no idea why it’s happening, and indeed all your understanding of the world says that this shouldn’t be happening.
Now maybe there’s a temptation here to say that the expectation of yellow is model-free pattern recognition, and to contrast it with model-based pattern recognition, which would be something like expecting a chess master to beat a beginner, which is a pattern that you can only grasp using your rich contextual knowledge of the world.
But I would not draw that contrast. I would say that the kind of pattern recognition that makes us expect to see yellow after blue just from direct experience without understanding why, is exactly the same kind of pattern recognition that originally built up our entire world-model from scratch, and which continues to modify it throughout our lives.
For example, to a 1-year-old, the fact that the words “1 2 3 4...” are usually followed by “5” is just an arbitrary pattern, a memorized sequence of sounds. But over time we learn other patterns, like seeing two things while someone says “two”, and we build connections between all these different patterns, and wind up with a rich web of memorized patterns that comprises our entire world-model.
Different bits of knowledge can be more or less integrated into this web. “I see yellow after blue, and I have no idea why” would be an extreme example—an island of knowledge isolated from everything else we know. But it’s a spectrum. For example, take everyone on Earth who knows the phrase “E=mc²”. There’s a continuum, from people who treat it as a memorized sequence of meaningless sounds in the same category as “yabba dabba doo”, to people who know that the E stands for energy but nothing else, to physics students who kinda get it, all the way to professional physicists who find E=mc² to be perfectly obvious and inevitable and then try to explain it on Quora because I guess I had nothing better to do on New Years Day 2014… :-)
So, I think model-based and model-free is not a fundamental distinction. But I do think that with different ways of acquiring knowledge, there are systematic trends in prediction strength, with first-hand experience leading to much stronger predictions than less-direct inferences. If I have repeated direct experience of my whole field of vision filling with yellow after blue, that will develop into a very very strong (confident) prediction. After enough times seeing blue-then-yellow, if I see blue-then-green I might literally jump out of my seat and scream!! By contrast, the kind of expectation that we arrive at indirectly via our world model tends to be a weaker prediction. If I see a chess master lose to a beginner, I’ll be surprised, but I won’t jump out of my seat and scream. Of course that’s appropriate: I only predicted the chess master would win via a long chain of uncertain probabilistic inferences, like “the master was trying to win”, “nobody cheated”, “the master was sober”, “chess is not the kind of game where you can win just by getting lucky”, etc. So it’s appropriate for me to be predicting the win with less confidence. As yet a third example, let’s say a professional chess commentator is watching the same match, in the context of a proper tournament. The commentator actually might jump out of her chair and scream when the master loses! For her, the sight of masters crushing beginners is something that she has repeatedly and directly experienced. Thus her prediction is much stronger than mine. (I’m not really into chess.)
All this is about perception, not motivation. Now, back to motivation. I think we are motivated to do things proportionally to our prediction of the associated reward.
I think we learn to predict reward in a similar way that we learn to predict anything else. So it’s the same idea. Some reward predictions will be from direct experience, and not necessarily well-integrated with the rest of our world-model: “Don’t know why, but it feels good when I do X”. It’s tempting to call these “model-free”. Other reward predictions will be more indirect, mediated by our understanding of how some plan will unfold. The latter will tend to be weaker reward predictions in general (as is appropriate since they rely on a longer chain of uncertain inferences), and hence they tend to be less motivating. It’s tempting to call these “model-based”. But I don’t think it’s a fundamental or sharp distinction. Even if you say “it feels good when I do X”, we have to use our world-model to construct the category X and classify things as X or not-X. Conversely, if you make a plan expecting good results, you implicitly have some abstract category of “plans of this type” and you do have previous direct experience of rewards coming from the objects in this abstract category.
Again, this is just my current take without having read the literature :-D
(Update 6 months later: I have read more of the relevant literature since writing this, but basically stand by what I said here.)
This reminds me of my discussion with johnswentworth, where I was the one arguing that model-free vs. model-based is a sliding scale. :)
So yes, it seems reasonable to me that these might be best understood as extreme ends of a spectrum… which was part of the reason why I copied that excerpt, as it included the concluding sentence of “‘Model-based’ and ‘model-free’ modes of valuation and response, if this is correct, simply name extremes along a single continuum and may appear in many mixtures and combinations determined by the task at hand” at the end. :)
I am not the party that used the terms, but to me “yellow then blue” reads as a very simple model, and model-based thinking.
The part about “we have to use our world-model to construct the category X and classify things as X or not-X” reads to me as saying that you do not think that model-free thinking is possible.
You can be in a situation where something elicits you to respond in a way Y without you being aware of what condition makes that experience fall within a triggering reference class. Now, if you know you have such a reaction, you can experimentally attempt an inductive investigation by carefully varying the environment and checking whether you react or not. Then you might reverse-engineer the reflex and end up with a model of how the reflex works.
The question of the ineffability of neural networks might be relevant. If a neural network makes a mistake and tries to avoid that mistake in the future, a lot of weights are adjusted, none of which is easily expressible as doing a different action in some discrete situation. Even a simple model like “blue” seems to point to a set of criteria by which you could rule whether a novel experience falls within the purview of the model or not. But if you have an ill-defined or fuzzily-defined “this kind of situation”, that is a completely different thing.
Really? My model has been that you can want something without really enjoying it or being happy to have it. (That comes mostly from reading Scott’s old post on wanting/liking/approving.) Or maybe you’re using “feelings (valence)” in a broader sense that encompasses “dopamine rush”? (I may be misunderstanding the exact meaning of “valence”; I haven’t dived deep into it, although I’ve been meaning to.)
Isn’t this kind of craving about avoiding negative valence? Having an addiction (a wanting) that’s not fulfilled is very unpleasant. My model of this is that the addiction starts from a model-based place of choosing a behavior, then the Pavlovian part takes over as the behavior leads to a positive outcome or avoids a negative one, then the model-free system starts to get a handle on what’s happening with the Pavlovian part.
Ah yeah, this definitely describes my experiences with a lot of addiction-like behaviors. The behavior itself isn’t necessarily enjoyable, but not doing it feels aversive, and then there’s a craving to get rid of that aversive feeling.
Good question! My answer would be that craving is trying to get things that it expects will bring positive valence, but this prediction may or may not be accurate (though it may once have been). [EDIT: also, see mr-hire’s comment.]
I am a bit confused by the lines:
″...pursuing pleasure and happiness even if that sacrifices your ability to impact the world. Reducing the influence of the craving makes your motivations less driven by wireheading-like impulses, and more able to see the world clearly even if it is painful.”
Once we have deemed that wanting to pursue pleasure and happiness are wireheading-like impulses, why stop ourselves from saying that wanting to impact the world is a wireheading-like impulse?
You also talk about meditators ignoring pain, and how the desire to avoid pain is craving. Why isn’t a desire to avoid death craving? You clearly speak as if going to a dentist when you have a tooth ache is the right thing to do, but why? Once you distance your ‘self’ from pain, why not distance yourself from your rotting teeth?
All my intuitions about how to act are based on this flawed sense of self. And from what you are outlining, I don’t see how any intuition about the right way to act can possibly remain once we lose this flawed sense of self.
There’s a general discomfort I have with this series of posts that I’m not able to fully articulate, but the above questions seem related.
Fair question. One answer is: wanting to save the world can be a wireheading-like impulse, if it is generated by craving as opposed to some other form of motivation. Likewise, pursuing pleasure and happiness can also be non-wireheading-like, if you pursue them for reasons other than craving. Wanting to avoid death, too, is something that you can pursue either out of craving or for other reasons.
For example, you may pursue pleasure:
Because you value it for its own sake
Because experiencing pleasure makes your mind and body work better than if you were only experiencing unhappiness
Because it is useful for releasing craving
Or for some other reason.
The difference (or at least a difference) is more in how you react to the possibility of there being obstacles to that goal. Take the dentist example.
You might value pleasure and healthy teeth in a non-craving-based way; this leads you to conclude that even though the dentist visit might be unpleasant, overall there is going to be more pleasure if you just go to the dentist right away and get the source of discomfort fixed as soon as possible. You can think about how unpleasant the dentist visit is and weigh it appropriately, without instinctively flinching away from the very thought of that unpleasantness.
Or you might have a craving to pursue pleasure and avoid discomfort, in which case even thinking about the dentist visit is aversive. In third-person terms, you have a constraint “do not think about doing unpleasant things”, so as soon as you mentally simulate the dentist visit and the simulation includes discomfort, your mind is pushed to think about something else. I call this “wireheading-like” in the sense that you are taking actions which are superficially furthering the goal in the short term (by avoiding the thought of the dentist, you are avoiding some discomfort), but are actually hurting it in the long term (if you just went to the dentist right away, you’d end up with much less discomfort overall).
Because even when you let go of craving, you still have all of your other values.
I find it helpful to think of craving and non-craving as two layers of motivation: at the bottom there is one system of motivations which is doing things, and then on top there is craving, which sets a variety of its own goals. Decision-making tends to involve a mixture of motivations, some of them coming from craving and some of them coming from non-craving. But craving tends to be so “loud”, and frequently be the dominant form of motivation, that the other motivations can become hard to notice.
As an example, maybe you have had an experience where you are just trying out something for the first time, and don’t have any major expectations one way or the other; you have a relaxed time. Because you are so relaxed and non-stressed, things go well and it ends up being really enjoyable. Afterwards, you develop a desire to repeat the experience and ensure that it goes that well again; as a result, it doesn’t, because you are so focused on how to repeat it rather than on doing things in the relaxed way that actually got you the positive result the first time.
The first time you were acting without craving, which led to good results; then craving seized upon those good results and tried to repeat them, which did not go as well.
(For me, a particularly clear example of this is in the context of romantic relationships. If I’m meeting someone for the first time, I might be relaxed and not particularly focused on whether it will lead to an actual relationship or not. But then if it looks like we might actually end up in a relationship, I can get a major craving towards wanting things to go that way, and then make a mess of it.)
For Western, non-mystical contexts where people have picked up on the craving thing, the examples from this newsletter feel related:
In those terms, “Self 1” is associated with the construct of the self, as well as craving. “Self 2” are the subsystems that just do stuff regardless, and may indeed often do better if the craving doesn’t get in the way.
Thank you for your reply, and it does clarify some things for me. If I may summarise in short, I think you are saying:
Craving is a bad sort of motivation because it makes you react badly to obstacles, but other sorts of motivation can be fine.
Self-conscious/ craving-filled states of mind can be unproductive when trying to act on these other sorts of motivations.
I still have some questions though.
You say you may pursue pleasure because you value it for its own sake. But what is the self (or subsystem?) that is doing this valuing? It feels like the valuer is a lot like a “Self 1”, the kind of self which meditation should expose to be some kind of delusion.
Here’s an attempt to put the question another way. Someone suggested in one of the previous comment threads on the topic that non-self was a bit like not identifying with your short-term desires, and also your long-term desires (and then eventually not identifying with anything). So why is identifying yourself with your values compatible with non-self?
EDIT: I reproduce here part of my response to Isusr, which I think is relevant, and is perhaps yet another way to ask the same question.
Typically, when we reason about what actions we should or should not perform, at the base of that reasoning is something of the form “X is intrinsically bad.” Now, I’d always associated “X is intrinsically bad” with some sort of statement like “X induces a mental state that feels wrong.” Do I have access to this line of reasoning as a perfect meditator?
Concretely, if someone asked me why I would go to a dentist if my teeth were rotting, I would have to reply that I do so because I value my health, or maybe because unhealthiness is intrinsically bad. And if they asked me why I value my health, I cannot answer except to point to the fact that it does not feel good to me, in my head. But from what I understand, the enlightened cannot say this, because they feel that everything is good to them, in their heads.
I kind of feel that the enlightened cannot provide any reasons for their actions at all.
Roughly, yes, though I would be a bit cautious about framing craving as outright bad, more like “the tradeoffs involved may make it better to let go of it in the end”; but of course, that depends on what exactly one is trying to achieve. As I noted in the post, it is also possible for one to weaken their craving with bad results, at least if we evaluate “results” from the point of view of achieving things.
Different subsystems make valuations all the time; that’s not an illusion. What’s illusory is the notion that all of the different valuations are coming from a single self, and that positive/negative valence are things that the system intrinsically has to pursue/avoid.
For instance, one part of the mechanism is that at any given moment, you may have conscious intentions about what to do next. If you have two conflicting intentions, then those conflicting intentions are generated by different subsystems. However, frequently the mind-system attributes all intentions to a single source: “the self”. Operating based on that assumption, the mind-system models itself as having a single decision-maker that generates all intentions and observes all experiences.
In The Apologist and the Revolutionary, Scott Alexander writes:
One way of explaining the construct of the self is that there’s a reasoning module which constructs a story of there being a single decision-maker, “the self”, that’s deciding everything. In the case of the split-brain patient, a subsystem has decided to point at a shovel because it’s related to the sight of the snowed-in house that it saw; but the subsystem that is constructing the narrative of the self being in charge of everything has only seen a chicken claw. So in order to fit the things that it knows into a coherent story, it creates a spurious narrative where the self saw the chicken claw, and shovels are needed for cleaning chicken sheds, so that’s the reason why the self picked the shovel.
But what actually made the decision was an independent subsystem that was cut off from the self-narrative subsystem, which happened to infer that a shovel is useful for digging your way out of a snowed-in house. The subsystem creating the construct of the self was responsible for neither the decision nor the implicit valuations involved in it; it merely happened to create a story that took the credit for what another subsystem had already done.
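To make the information flow of that story concrete, here is a minimal toy sketch in code. Everything in it (the function names, the percepts, the canned “reason”) is a hypothetical illustration of the structure being described, not a claim about actual neural machinery:

```python
# Toy sketch: the action is chosen by a subsystem the narrator cannot see,
# and the narrator confabulates a reason from the facts it does see.
# All names and the canned "reason" are illustrative assumptions.

def hidden_subsystem(right_hemisphere_percept: str) -> str:
    """Decides the action based on information the narrator never receives."""
    if right_hemisphere_percept == "snowed-in house":
        return "point at the shovel"
    return "do nothing"

def self_narrative(left_hemisphere_percept: str, observed_action: str) -> str:
    """Sees only its own percept plus the completed action, and invents a
    story in which 'the self' chose the action for a coherent reason."""
    return (f"I saw a {left_hemisphere_percept}, so I chose to "
            f"{observed_action}, since shovels are needed for cleaning "
            f"chicken sheds.")

action = hidden_subsystem("snowed-in house")
print(self_narrative("chicken claw", action))
# The printed explanation is fluent, but the real cause (the snowed-in
# house) never appears in it.
```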
Seeing the nature of the self doesn’t stop you from making valuations, it just makes you see that they are not coming from the self. But many of the valuations themselves remain unchanged by that. (As the Zen proverb goes: “Before enlightenment, chop wood, carry water. After enlightenment, chop wood, carry water.”)
Thank you for your reply, which is helpful. I understand it takes time and energy to compose these responses, so please don’t feel too pressured to keep responding.
1. You say that positive/negative valence are not things that the system intrinsically has to pursue/avoid. Then when the system says it values something, why does it say this? A direct question: there exists at least a single case in which the why is not answered by positive/negative valence (or perhaps it is not answered at all). What is this case, and what is the answer to the why?
2. Often in real life, we feel conflicted within ourselves. Maybe different valuations made by different parts of us contradict each other in some particular situation. And then we feel confused. Now one way we resolve this contradiction is to reason about our values. Maybe you sit and write down a series of assumptions, logical deductions, etc. The output of this process is not just another thing some subsystem is shouting about. Reasons are the kind of things that motivate action, in anyone. So it seems the reasoning module is somehow special, and I think there’s a long tradition in Western philosophy of equating this reasoner with the self. This self takes into account all the things parts of it feel and value, and makes a decision. This self computes the tradeoffs involved in keeping/ letting go of craving. What do you think about this?
I think you are saying that the reasoning module is also somehow always under suspicion of producing mere rationalisations (like in the chicken claw story), and that even when we think it is the reasoning module making a decision, we’re often deluded. But if the reasoning module, and every other module, is to be treated as somehow not-final, how do (should) you make a decision when you’re confused? I think you would reject this kind of first-person decision making, and give a sort of third-person explanation of how the brain just does make decisions, somehow accumulating the things various subsystems say. But this provides no practical knowledge about what processes the brains of people who end up making good (or bad) decisions deploy.
3. This is unrelated to my main point, but the brain showing some behaviour in an ‘abnormal’ situation does not mean the same behaviour exists in the ‘normal’ situation. In particular, the theory that there are multiple subsystems doing their own thing might make sense in the case of the person with anosognosia or the person experiencing a binocular rivalry illusion, but it does not follow that the normal person in a normal situation also has multiple subsystems in the same way. Perhaps it might follow if you have a mechanistic, reductionist account of how the brain works. I’m not being merely pedantic; Merleau-Ponty takes this quite seriously in his analysis of Schneider.
Appreciated. :) Answering these in detail is also useful, in that it helps me figure out which things I should mention in my future posts—I might copy-paste some parts of my answers here, right into some of my next posts…
It might be helpful to notice that positive/negative valence is usually already one step removed from some underlying set of values. For example:
Appraisal theories of emotion hold that emotional responses (with their underlying positive or negative valence) are the result of subconscious evaluations about the significance of a situation, relative to the person’s goals. An evaluation saying that you have lost something important to you, for example, may trigger the emotion of sadness with its associated negative valence.
In the case of Richard, a subsystem within his brain had formed the prediction that if he were to express confidence, this would cause other people to dislike him. It then generated negative self-talk to prevent him from being confident. Presumably the self-talk had some degree of negative valence; in this case that served as a tool that the subsystem could use to block a particular action it deemed bad.
Consider a situation where you are successfully carrying out some physical activity; playing a fast-paced sport or video game, for example. This is likely to be associated with positive valence, which emerges from the fact that you are having success at the task. On the other hand, if you were failing to keep up and couldn’t get into a good flow, you would likely experience negative valence.
What I’m trying to point at here is that valence looks like a signal about whether or not some set of goals/values are being successfully attained. A subsystem may have a goal X which it pursues independently, and depending on how well it goes, valence is produced as a result; and subsystem A may also produce different levels of valence in order to affect the behavior of subsystem B, to cause subsystem B to act in the way that subsystem A values.
In this model, because valence tends to signal states that are good/bad for the achievement of an organism’s goals, craving acts as an additional mechanism that “grabs onto” states that seem to be particularly good/bad, and tries to direct the organism more strongly towards those. But the underlying machinery that is producing the valence was always optimizing for some deeper set of values, which only produced valence as a byproduct.
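To make the shape of this model explicit, here is a minimal toy sketch. The “hunger” subsystem, its goal, and all the numbers are hypothetical illustrations; the point is only the information structure, in which the craving layer sees the valence stream but never the goal that generates it:

```python
# Toy sketch: valence as a byproduct signal of goal progress, plus a
# "craving" layer that sees only the valence, never the underlying goal.
# The subsystem, its goal, and all numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    goal: float   # target value of some tracked quantity
    state: float  # current value of that quantity

    def step(self, delta: float) -> float:
        """Update state; return valence as the change in distance to the
        goal (moving toward the goal yields positive valence)."""
        old_gap = abs(self.goal - self.state)
        self.state += delta
        return old_gap - abs(self.goal - self.state)

def craving_urge(recent_valence: list) -> float:
    """Craving 'grabs onto' whatever has recently felt good, without any
    access to the goal that produced the good feeling."""
    window = recent_valence[-3:]
    return sum(window) / len(window)

hunger = Subsystem("hunger", goal=0.0, state=5.0)  # wants the gap at zero
valences = []
for bite in (2.0, 2.0, 1.0):  # eating moves the state toward the goal
    valences.append(hunger.step(-bite))
    print(f"valence={valences[-1]:+.1f}  craving urge={craving_urge(valences):+.2f}")
```

Note how `craving_urge` could direct behavior toward whatever recently produced positive valence without ever representing hunger reduction itself; that is the sense in which valence is one step removed from the underlying values.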
Unfortunately, a comprehensive answer to the question of “what are the decision criteria, if not valence” would require a complete theory of human motivation and values, and I don’t have one. :)
I am not making the claim that reasoning would always only be rationalization. Rather, the chicken claw story was intended to suggest that one particular reasoning module tends to generate a story of a self that acts as the decision-maker. I don’t even think that the module is rationalizing in the sense of being completely resistant to new evidence: if it was, all of this meditation aimed at exploring no-self would be pretty pointless.
Rather, I think that the situation is more like Scott described in his post: the self-narrative subsystem starts out with a strong prior for one particular hypothesis (with that hypothesis also being culturally reinforced and learned), and creates an explanation which fits things into that hypothesis, treating deviations from it as noise to be discarded. But if it gets the right kind of evidence about the nature of the self (which certain kinds of meditation provide it), then it will update its theories and eventually settle on a different narrative.
To answer your actual question, we certainly do all kinds of reasoning, and this reasoning may certainly resolve internal conflicts or cause us to choose certain kinds of behavior. But I think that reasoning in general is distinct from the experience of a self. For example, in an earlier post, I talked about the mechanisms by which one may learn to carry out arithmetical reasoning by internalizing a set of rules about how to manipulate numbers; and then later, about how Kahneman’s “System 2” represents a type of reasoning where different subsystems chain together their outputs through working memory. So we certainly reason, and that reasoning does provide us with reasons for our behavior, but I see no need to assume that the reasoning would require a self.
I agree that abnormal situations by themselves are not conclusive evidence, yes.
This makes sense.
Talking about how lifting X-wings is impossible is social work and not weightlifting; can you please start yeeting and stop blabbering?
Is there a way you can rephrase this question without using the word “wirehead”? When discussing meditation, the word “wirehead” can have two very different meanings. Usually, “wirehead” refers to the gross failure mode of heavy meditation where a practitioner anesthetizes him/herself into a potato. Kaj_Sotala has used the word “wirehead” to refer to a subtle, specific consequence of taṇhā.
A desire to avoid death is craving. (In fact, death is one of the Four Sights.) The actions of postponing death are not craving. Only the desire to avoid death is.
Because you have a toothache and your teeth will rot if you don’t go to a dentist.
Penetrating taṇhā is the opposite of distancing. It’s about accepting the world right now as it is. If your teeth are rotting right this instant then you should accept that your teeth are rotting right this instant. Such is the Litany of Tarski.
The thing you distance yourself from isn’t the pain; it’s your self. Kaj_Sotala’s post is about taṇhā and its relation to dukkha (unsatisfactoriness), one of the Three Characteristics of Existence. Another Characteristic of Existence is anattā, or non-self. I hope this becomes clearer once Kaj_Sotala gets to anattā in this series.
It is possible to do something without craving it. For example, consider relaxing on a tropical beach and reaching over to drink a mango smoothie. Now, consider the instant you are mid-sip, sucking through the straw while the flavor washes over your mouth. In that instant, you act without craving.
The same goes for when you are engrossed in fun conversation with close friends and family.
Good!
Typically, when we reason about what actions we should or should not perform, at the base of that reasoning is something of the form “X is intrinsically bad.” Now, I’d always associated “X is intrinsically bad” with some sort of statement like “X induces a mental state that feels wrong.” Do I have access to this line of reasoning as a meditator?
Concretely, if someone asked me why I would go to a dentist if my teeth were rotting, I would have to reply that I do so because I care about my health, or maybe because unhealthiness is intrinsically bad. And if they asked me why I care about my health, I cannot answer except to point to the fact that it does not feel good to me, in my head. But from what I understand, the enlightened cannot say this, because they feel that everything is good to them, in their heads.
In fact, the later part of your response makes me feel that the enlightened cannot provide any reasons for their actions at all.
Even to the enlightened, experiences with positive valence still feel like they have positive valence; experiences with negative valence still feel like they have negative valence. (Well, there are accounts which disagree with this and claim that perpetual positive experience is possible, but I am skeptical of those.) One can still prefer states with positive valence, and say that “they just feel good to me”—one is just okay with the possibility of not always getting them.
I realize that this is hard to imagine if you haven’t actually experienced it. An analogy that’s kind of close might be if you were offered a choice between two foods that you were almost indifferent over, but just slightly preferred option B. Given the choice, you ask to have B, but if you were given A instead, you wouldn’t feel any less happy for it. At least, you could let go of your disappointment very quickly.
Interesting. Can you cite any examples of this?
In her book “Get Off Your Cushion”, Li-Anne Tang cites one anecdote with Culadasa a year before his death where he did something like this. But that’s the only example I know of.
Interested in all the meditation information available, I conducted an activity of looking at a red antenna light against a black night sky. I ended up being interested in whether I could distinguish how different eyes see it. Rather than trying to construct or deduce a 3D scene, one is just content with what one sees.
Analogously, it seems this could be applied to more abstract things. Rather than thinking about what THE future will be like, being aware of the different future projections that the mind can come up with is more useful than trying to make them fight over which is the “True future”.
I am wondering whether the flickering in the Newton/house situation could be learned to be avoided. As presented here, it is easy to read as a biologically immutable fact about how vision works.
So, do you not need painkillers now, thanks to meditation? How did it impact your motivation? Do you get more things done?
Outside view: data suggests that conscientiousness is the least impacted of the Big 5 by meditation (neither up nor down). Inside view: I think that the motivation to whip yourself decreases, but at the same time the intrinsic suffering of just doing things goes down, so it nets out to zero. The idea being that normally we create internal suffering to ‘get us to do’ something that involves external suffering.
Interesting! Can you recommend me any good reading about how meditation interacts with the Big 5 personality traits?
I got this from Jeffrey Martin’s dataset iirc. His book is a bit all over the place though.
Which book is that? I tried to search and found several authors with that name and variants of it.
The finders. The original paper is available online though.
Thanks! Apparently his first name is spelled Jeffery.
https://www.goodreads.com/book/show/44019415-the-finders
He spells his first name “Jeffery”, that’s likely why. The Finders is the book title.
I’ve read that book, and a fair amount dovetails well with my current existence, but quite a bit of it doesn’t. Strange that I cannot find a community of fellow Finders anywhere on da interwebz happily discussing how their lives are with each other, comparing notes, etc.; most Googling of the (correct) name and book title simply brings up a bunch of people going off about the author’s course.
Anyway, I get frustrated with a lot of Buddhist thought & discussion on the ’net, and this one is no exception (the companion entries occasionally get rezzed here as well, note). Nobody ever discusses what happens if you reverse the polarities, so to speak, and, instead of egoic cravings, you allow the will of the universe to flow through you. Wu wei. [we need some Taoist entries actually in point of fact]
Where does authentic creativity and selfless manifestation lie in this wasteland of viciously craving beings? It’s always ascension, all of the time, forsake forsake forsake. I feel that this singleminded focus on suffering and craving, and on transcending such, just leaves a LOT out of the picture. For me (having mostly mastered my emotional world) it’s like reading a 3rd grade primer, when I hunger for graduate-level work.
For example, if these craving beings were to TRULY experience real, unmediated, 100% pure A-grade ecstasy, they wouldn’t embrace it; they would flee in terror from it. I am well nigh convinced that the problem doesn’t lie with attachment, but with fear of actual transcendence, and the cravings and such are simply side-effects of the core issue there.
It may be that the number/% of authentic Finders is much less than the number of self-proclaimed ones (and don’t take my word here either, nuke the Buddha with an RPG).
Thanks.
As for a community, have you tried r/StreamEntry on Reddit? There might be some. I don’t know. I am no Finder.
Depends on the pain and the circumstances; also, I have done enough meditation to feel like I roughly understand what’s going on with this stuff, but I don’t claim to be super-advanced. There are lots of complexities with regard to letting go of craving that I intend to cover in a later post.
That said, if I am in a situation where I have a reasonable certainty of the pain being fine to ignore, then my pain tolerance has gotten better. Pre-COVID, I had a monthly back massage where I learned to use meditative techniques to deal with the pain the first few times around and basically stopped being bothered by it afterwards. And last summer I was stung by a wasp, but after concluding that I probably wasn’t going to get an allergic reaction and that there was nothing to do but wait for the pain to go away, it was relatively easy to just be with the pain and not mind it.
There was also one particular experience where I got, for a few hours, into a meditation-induced state with no craving. I tested it out by turning the water in the shower to be as cold as possible, and stepped under it. The experience was… interesting.
On previous occasions when I’d experimented with cold showers, my reaction to a sudden cold shock had been roughly “AAAAAAAAAAA I’M DYING I HAVE TO GET OUT OF HERE”. And if you had been watching me from the outside, you might have reasonably concluded that I was feeling the same now. Very soon after the water hit me, I could feel myself gasping for breath, the water feeling like a torrent on my back that forced me down on my knees, my body desperately trying to avoid the water. The shock turned my heartbeat into a frenzied gallop, and I would still have a noticeably elevated pulse for minutes after I’d gotten out of the shower.
But I’m not sure if there was any point where I actually felt uncomfortable this time around. I wasn’t sure how long this was going to be healthy, so I didn’t stay under the shower for long, but aside from that I could probably have remained there.
Still, there’s lots of pain that’s just as unpleasant as always, especially if it comes by surprise or if I’m uncertain about whether it’s actually a good idea to ignore.
As for motivation, that’s impacted by lots of factors, including things which are distinct from meditation but overlap with it (e.g. IFS), so it’s hard to disentangle their impact. If I wanted to give a conservative guess, Romeo’s summary of “no net change either way, but feels more pleasant” sounds roughly right; e.g. currently it’s easier to live with the uncertainty of doing research-type work that might potentially have a big impact but also has a large chance of being less valuable than some more direct work. (Less craving for certainty.) That said, the reduction of discomfort also contributes to the fact that I’m able to continue working at all while experiencing it as largely fine, as opposed to just being totally burned out.
You say that “I wasn’t sure of how long this was going to be healthy...”. Was this experienced as a negative valence? If so, why did you do what this valence suggested? I thought you were saying we shouldn’t necessarily make decisions based on negative valences. (From what you’ve been saying, I guess you did not experience the “thought of a cold shower being unhealthy” as a negative valence.)
If it wasn’t experienced as a negative valence, why did you leave the shower? Doesn’t leaving the shower indicate that you have a preference to leave the shower? Is it a self that has this preference? What computes this preference? Why is the result of this computation something worth following? Does the notion of an action being worthy make sense?
So this was pretty much an altered state of consciousness, making it hard for me to recall specifics about the phenomenology afterwards; much of my recollection is based on notes that I made during/immediately after the experience. So I will do my best to answer, but I need to caution that there is a serious risk of me completely misremembering something. That said...
During the event, there was no experience of a separate “doer” nor an “observer”; if I looked at my hand, then there was just a sight of the hand, without a sense of somebody who was watching the hand. The sensations that had previously been associated with a sense of self were still present, but it was as if the mind-system was not interpreting them as indicating any separate entity; rather they were just experienced as “raw sensations”, if that makes any sense.
There was also no sense of being in control of my thoughts or actions. Intentions and experiences would just arise on their own. In the shower, there was a strong negative valence arising from the cold; but the subjective experience was that the part of my mind that was experiencing the negative valence was distinct from the one that made the decision of leaving the shower or remaining in it. The negative valence did not compel “the deciding subsystem” into any action; it was just available as information.
I do not recall the exact phenomenology associated with stepping out of the shower, but my best guess would be that it was something like: the thought arose that staying in the shower for too long might be unhealthy. This was followed by an intention arising to step out of the shower. That intention led to the action of stepping out of the shower.
From a third-person perspective, my guess of what happened was: different subsystems were submitting motor system bids of what to do. For whatever reason, the subsystem which generated the judgment that staying in the shower might be a bad idea had the kinds of weights associated with its command pathway that caused its bids to be given the highest priority in this particular situation. (E.g. maybe it had made successful predictions in situations-like-this before, so the system judged it to have the highest probability of making a correct prediction; see the quoted excerpt’s discussion of selecting decision strategies according to context in this comment.)
This selection process is not consciously accessible, so one only gets to experience the end results of the process: intentions, actions, experiences and thoughts arising seemingly on their own, with the exact criteria for choosing between them remaining unknown.
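For concreteness, here is a minimal toy sketch of what one such arbitration step might look like. The subsystem names, the reliability weights, and the scoring rule are all hypothetical illustrations, not a claim about the actual selection mechanism:

```python
# Toy sketch: subsystems submit action bids; a selection process weighs
# each bid by that subsystem's context-dependent past reliability and
# picks the highest-scoring action. Only the winner reaches consciousness.
# Names, weights, and the scoring rule are illustrative assumptions.

bids = [
    # (subsystem, proposed action, urgency of the bid)
    ("cold-shock",   "leave the shower", 0.9),
    ("experimenter", "stay in the shower", 0.6),
    ("health-judge", "leave the shower", 0.5),
]

# E.g. learned from how often each subsystem's predictions panned out
# in situations like this one:
reliability = {"cold-shock": 0.3, "experimenter": 0.5, "health-judge": 0.9}

def select_action(bids, reliability):
    scores = {}
    for subsystem, action, urgency in bids:
        scores[action] = scores.get(action, 0.0) + urgency * reliability[subsystem]
    return max(scores, key=scores.get)

print(select_action(bids, reliability))  # -> "leave the shower"
# The scores themselves stay hidden; only the winning intention arises.
```

In the sketch, the bids and weights play the role of the selection process that remains consciously inaccessible; experience would only contain the output of `select_action`.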
Now if the selection process is not consciously accessible, then that implies that under more normal states of mind, we do not know the exact reasons for our behavior either. And there’s plenty of research suggesting that we are in fact systematically wrong about the causes of our actions. Rather, the subsystem which creates the experience of the self normally has access to the information that does emerge in consciousness—the end results of the actual selection process, i.e. the intentions, actions, experiences and thoughts—and it then generates a narrative of how “the self” has chosen different actions. If that narrative gets temporarily suspended—as it did during my experience—then it becomes apparent that the exact causes of behavior were never known in the first place, only inferred.
One person who achieved stream entry (the traditional “first major step of enlightenment”) reported that his first thought after it was “I don’t know how I’ll ever decide what to do next again.” Then he sat still until he got tired, and went to bed. We may not know the exact reasons for why we do things, but that does not prevent our mind from doing things anyway.
Thank you for this comment. Even if you don’t remember exactly what happened, at the very least, your story of what happened is likely to be based on the theoretical positions you subscribe to, and it’s helpful to explain these theoretical positions in a concrete example.
I guess what I don’t like about what you’re saying is that it’s entirely amoral. You don’t say how actions can be good. Even if a sense of good were to exist, it would be somehow abstract, entirely third-personal, and have no necessary connection to actual action. All intentions just arise on their own, the brain does something with them, some action is performed, and that’s it. We can only be good people by accident, not by evaluating reasons and making conscious choices.
I also disagree that you can generally draw conclusions about what happens in normal states of consciousness from examining an abnormal state of consciousness.
The person who experienced stream entry whose thoughts you link to says (in the very next line after your quote) that he decided to sit still until he experienced a physiological drive. That seems to be a conscious decision.
EDIT: You can find another example of someone being completely amoral (in a very different way) here: https://www.youtube.com/watch?v=B9XGUpQZY38
(I am not at all endorsing anything said in the video.)
To put the point starkly, as far as I can tell, whatever you’re saying (and what that video says) works just as well for a murderer as it does for you. Meditating, and obtaining enlightenment, allows a murderer to suffer less, while continuing to murder.
Well, we can certainly still evaluate reasons: in my example, “being under a cold shower for too long might be unhealthy” was a reason for stepping out of it. And it was evaluated consciously, in that the thought was broadcast into consciousness, allowing other subsystems to react to it—such as by objecting if they happened to disagree, or if they felt that continuing the experiment outweighed the risks. If other subsystems had raised objections, possibly I would have stayed in the shower longer.
This seems correct to me. My understanding is that the samurai actually practiced meditation in order to do well at battle and fear death less, that is, to be better at killing.
A draft for a later post in this series actually contains the following paragraphs:
It is somewhat unclear to me why exactly this bothers you, though. To me, meditation practice—together with the insights that it brings—is just a skill that brings you benefits in some particular areas, just like any other. Getting better at, say, physical exercise, also doesn’t tell you anything about how actions can be good. (Why would it?) Physical exercise also works the same for a murderer, possibly allowing them to murder better and easier. (Why wouldn’t it?)
I do think that there’s definitely some reason to expect that meditation could make you a better person—e.g. many of the reasons why people are motivated to hurt other people involve psychological issues and trauma that meditation may be helpful with. But if a sociopath who completely lacked an empathy subsystem (I don’t know enough about sociopathy to say whether this is an accurate description of it, but for the sake of argument, let’s assume that it is) happened to meditate and became enlightened, then of course there’s no reason to assume that meditation alone would create an empathy subsystem for them.
Your values are what they are, and meditation can help you understand and realize them better… but if your values are fundamentally “evil”, then why would meditation change them more than any other skill would?
Of course. You are, at the very least, technically right.
However, I think that obtaining enlightenment only makes it harder for you to change your values, because you’re much more likely to be fine with who you are. For example, the man you linked to who went through stream entry seems to have spent several years doing nothing, and didn’t feel particularly bad about it. Is that not scary? Is that likely to be a result of pursuing physical exercise?
On the other hand, if you spent time thinking clearly about your values, the likelihood of them changing for the better is higher, because you still have a desire (craving?) to be a better person.
He did, and then eventually his mind figured out a new set of motivations, and currently he is very actively doing things again and keeping himself busy.
Even apart from enlightenment, it is my own experience that one’s motivations may change in ways that are long-term good, but leave you adrift in the short term. At one point in my life I was basically driven by anxiety and the need to escape that constant anxiety. When I finally eliminated the source of anxiety, I had a period when I didn’t know what to do with my time anymore, because the vast majority of my habits (both physical and mental) had been oriented towards avoiding it, and that was just not necessary anymore.
Likewise, if people have learned to motivate themselves with guilt, then eliminating the guilt and trading it for a healthier form of motivation may be long-term beneficial, but leave them without any source of motivation until their mind readjusts.
Whether enlightenment makes it easier or harder to change your values—I don’t know. Reducing craving means that you are less likely to cling to values that need revising, but may also eliminate cravings that had previously driven changes to your values. Certainly you can still spend time thinking about your values even if you are enlightened. (Though I am unclear to what extent anyone ever really changes their values in the first place, as opposed to just developing better strategies for achieving what, deep down, are their actual values.)
Personally I am not enlightened, but I certainly feel like developing deeper meditative insights has made it easier rather than harder for me to change my values. But human motivation is complicated, and which way it goes probably depends on a lot of individual factors.
EDIT: Thanks again for the discussion. It has been very helpful, because I think I can now articulate clearly a fundamental fear I have about meditation: it might lead to a loss of the desire to become better.
Cool. :) And yes, it might; it also comes with several other risks. If you feel like these risks are too large, then avoiding meditation may indeed be the right move for you. (As I said in the introductory post, I am trying to explain what I think is going on with meditation, but I am not trying to convince anyone to meditate if they think that it doesn’t seem worth it.)
All the gurus say that physical pain is just something from the body, and that you can only have suffering (from it) if you are not enlightened. Would they still maintain that after being tortured for decades? I seriously doubt it.
This has led me to believe that enlightenment is not about discovering truth, but quite the opposite. It’s about deluding yourself into happiness by believing that this world is actually something good.
That’s why I quit meditating. The only real hope is in eradicating suffering, a la David Pearce. Not ignoring it. Sure you can use meditation as pain management, but it isn’t the truth.
For what it’s worth, none of the people who I’d consider my meditation teachers have suggested that it’d be feasible to avoid suffering during extended torture, nor that it’d be practically possible to become so enlightened as to have no suffering at all.
That’s why I consider this world not a good world: because that (and less) is possible. Whereas all of them (Osho, Sadhguru, Ramana Maharishi) say that enlightenment is about realizing that you’re living in a good world. Hence it’s a lie, imo.
If any of the teachers I’m most influenced by (Tucker Peck, Culadasa, Loch Kelly, Michael Taft, Daniel Ingram, Rob Burbea, Leigh Brasington) make that claim, I at least don’t remember encountering it. Pretty sure that at least some of them would disagree with it.
Maybe not in these exact terms, but maybe, I don’t know, “realizing the benevolent tendency of existence”, “realizing the source as a benevolent force”, “realizing that all is love, that existence loves you”, etc. I’ve been hearing these kinds of claims from all gurus (although I’m not familiar with any of the ones you mention; maybe you think the mainstream gurus from Osho to Eckhart Tolle are all bs? I don’t know).
Anyway, isn’t enlightenment also about losing fear, about being at ease? I once bought into it by understanding that OK, maybe all cravings are indeed futile, maybe death is indeed an illusion, maybe a backache isn’t the end of the world and can be greatly alleviated through meditation… But how can you at least lose fear and be at ease in a world where extreme physical pain is possible? Impossible.
I haven’t read any of their stuff so I don’t know. :)
“Enlightenment” is a pretty general term, with different traditions and teachers meaning different things by it; not all of the people I mentioned even use the term. Some people do say that it’s something like what you describe, others disagree (e.g. Ingram is quite vocal about his dislike for any models of enlightenment that suggest you can eliminate negative emotions), others yet might agree in part and disagree in part (e.g. they might agree that you can eliminate specific kinds of fear that are rooted in delusions, without being able to eliminate all categories of fear, or without it even necessarily being practically possible to get to all the delusions).
(There might also be some confusion going on in that “being at ease” in the sense used by meditation teachers does not necessarily mean “being without pain or negative emotions”; it might also mean that pain and negative emotions still appear, but the craving to be without them does not, so their appearance does not cause suffering. I think most of the people I mentioned wouldn’t claim you can get rid of all craving, but they would hold that you can substantially reduce it.)
However, there is something else I would like to ask you: do you think meditation can provide you with insights about the nature of consciousness? Those hard questions like “is the brain running algorithms”, “is consciousness possible to emulate or transfer into some other medium”, etc.? I’d give a lot to know the answers to those questions, but I don’t think that science will arrive there anytime soon. (And as for psychedelics, I think that they just tell you what you want to hear, like dreams.)
Ever had any such insights yourself? Or even about the nature of existence, too.
Well, basically my whole multi-agent models of mind sequence (which talks quite a bit about the mechanisms and nature of consciousness) was motivated by my starting to notice similar claims being made about the mind in neuroscience, psychotherapy, and meditation, and wanting to put together a common framework for talking about them. So basically everything in all those posts is at least somewhat motivated by my experiences with meditation (as well as by my experiences with psychotherapy and my understanding of neuroscience).
That Wei Dai post explains little in these specific regards. Every Eastern religion, in my opinion, from Buddhism to Hinduism to Yoga to Zen, teaches enlightenment as a way to reach some kind of extreme well-being through discovering the true nature of existence. Such would be rational in an acceptable world, not in this one; in this one it is the opposite, achieving well-being through self-delusion about the nature of existence. If you’re gonna keep dodging this fact or invoking fringe views (regardless of their value) as the dominant ones, then we might just agree to disagree. No offense!
Oh, I’m not disputing that bit. The things I was saying there’s disagreement on were:
Whether it’s practically possible for someone to always experience such extreme well-being, as opposed to just most of the time (since you brought up the example of extreme torture, and it’s true that probably nobody is enlightened enough that they wouldn’t break down eventually if tortured)
Whether that extreme well-being necessarily takes the form of having only positive emotions, as opposed to being more at peace with also having negative emotions.
I think the teachers mean something slightly different by “the nature of existence”. The way I interpret it, “existence” is not so much a claim about the objective external world, but rather about the way your mind constructs your subjective experience. Things are confused by some of the teachers having a worldview that doesn’t really distinguish these, so they might talk of the two being one and the same.
Still, you can steelman the underlying claim to be about subjective rather than objective reality, and to say something like: “The nature of your subjective experience is that it’s entirely constructed by your mind, and that your wellbeing does not intrinsically need to depend on external factors; your mind is just hardwired to delude itself into thinking that it needs specific external conditions in order to have wellbeing. But because wellbeing is an internally computed property, based on interpretations of the world that are themselves internally computed, the mind can switch into experiencing wellbeing regardless of the external conditions.”
Note that this does not require any delusion about what external reality is actually like: it would require delusion if internal wellbeing required the external universe to be good, but that’s exactly the kind of dependence on external conditions that the insight refutes. You can acknowledge that the external universe is quite bad, and then have lasting happiness anyway. In fact, this can make it easier to acknowledge that the external universe is bad, since that acknowledgment is no longer a threat to your happiness.
Though there’s also another nuance, which is that the same insight also involves noticing that your judgments of the world’s goodness or badness are also internally generated, and that considering the world intrinsically good or bad is an instance of the mind projection fallacy. And further, that the question of the world’s goodness or badness is just an internally-computed label, in a very similar kind of sense in which a thing’s bleggness is an internally computed label rather than there being an objective fact about whether something is really a blegg. Seeing that can lead to the experience that the world is actually neither good nor bad in an intrinsic sense, as its goodness or badness depends entirely on the criteria we choose for considering it good or bad; and this may be seen on a sufficiently deep level to relieve any suffering one was having due to an experience of the world being bad.
But that kind of nuance may be difficult to communicate, especially if one comes from a tradition which didn’t have terminology like “mind projection fallacy” or knowledge of neural networks, so it then gets rounded into “the world is intrinsically good”. This is because the emotional experience it creates may be similar to that which you’d have if you thought the world was intrinsically good in objective terms… even though there’s actually again no claim about what the external world is really like. You can have an internal emotional experience of feeling good about the world while simultaneously also acknowledging everything that is horrible about the world and still wanting to change it, since the key insight is again that your happiness or motivation does not need to depend on external factors (such as any specific properties of the world).
Also, to clarify: reducing craving means that one’s mind isn’t as compelled to make decisions on the basis of pushing away negative valence or being compulsively drawn towards positive valence; but at the same time, a reduction of craving may also mean that the mind is more capable of making decisions based on negative valences.
Suppose that a thing that I am doing is likely to have a negative consequence. This means that thinking about the consequences of my actions may bring to mind negative valence; if I have a craving to avoid negative valence, I might then flinch away from thinking about those consequences.
In contrast, if I don’t have a craving to avoid negative valence, I might think about the consequences, notice that they have negative valence, and then take that valence into account by deciding to act differently.
Yes, I understand this.