I am a bit confused by the lines:
“...pursuing pleasure and happiness even if that sacrifices your ability to impact the world. Reducing the influence of the craving makes your motivations less driven by wireheading-like impulses, and more able to see the world clearly even if it is painful.”
Once we have deemed that wanting to pursue pleasure and happiness is a wireheading-like impulse, why stop ourselves from saying that wanting to impact the world is a wireheading-like impulse?
You also talk about meditators ignoring pain, and how the desire to avoid pain is craving. Why isn’t a desire to avoid death craving? You clearly speak as if going to a dentist when you have a toothache is the right thing to do, but why? Once you distance your ‘self’ from pain, why not distance yourself from your rotting teeth?
All my intuitions about how to act are based on this flawed sense of self. And from what you are outlining, I don’t see how any intuition about the right way to act can possibly remain once we lose this flawed sense of self.
There’s a general discomfort I have with this series of posts that I’m not able to fully articulate, but the above questions seem related.
Once we have deemed that wanting to pursue pleasure and happiness is a wireheading-like impulse, why stop ourselves from saying that wanting to impact the world is a wireheading-like impulse? [...] Why isn’t a desire to avoid death craving?
Fair question. One answer is: wanting to save the world can be a wireheading-like impulse, if it is generated by craving as opposed to some other form of motivation. Likewise, pursuing pleasure and happiness can also be non-wireheading-like, if you pursue them for reasons other than craving. Wanting to avoid death, too, is something that you can pursue either out of craving or for other reasons.
For example, you may pursue pleasure:
Because you value it for its own sake
Because experiencing pleasure makes your mind and body work better than if you were only experiencing unhappiness
Because it is useful for releasing craving
Or for some other reason.
The difference (or at least a difference) is more in how you react to the possibility of there being obstacles to that goal. Take the dentist example.
You might value pleasure and healthy teeth in a non-craving-based way; this leads you to conclude that even though the dentist visit might be unpleasant, overall there is going to be more pleasure if you just go to the dentist right away and get the source of discomfort fixed as soon as possible. You can think about how unpleasant the dentist visit is and weigh it appropriately, without instinctively flinching away from the very thought of that unpleasantness.
Or you might have a craving to pursue pleasure and avoid discomfort, in which case even thinking about the dentist visit is aversive. In third-person terms, you have a constraint “do not think about doing unpleasant things”, so as soon as you mentally simulate the dentist visit and the simulation includes discomfort, your mind is pushed to think about something else. I call this “wireheading-like” in the sense that you are taking actions which are superficially furthering the goal in the short term (by avoiding the thought of the dentist, you are avoiding some discomfort), but are actually hurting it in the long term (if you just went to the dentist right away, you’d end up with much less discomfort overall).
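(If a toy model helps: here is a minimal sketch of that dynamic in Python. Every number, and the “flinch threshold” itself, is invented purely for illustration; this is a cartoon of the constraint described above, not a claim about how the mind actually computes.)

```python
# Toy model of the dentist example; every number here is invented.
# We compare a planner that weighs simulated discomfort explicitly
# against one that flinches away from even simulating it.

def total_discomfort(days_waited: int) -> int:
    """Total discomfort if you visit the dentist after `days_waited` days:
    an accumulating toothache plus a treatment that worsens with delay."""
    daily_ache = 2 * days_waited        # ache accumulates while you wait
    treatment = 6 + 3 * days_waited     # the procedure grows more unpleasant
    return daily_ache + treatment

def non_craving_planner(horizon: int) -> int:
    """Simulates every option, unpleasant ones included, and picks the best."""
    return min(range(horizon), key=total_discomfort)

def craving_planner(horizon: int, flinch_threshold: int = 5) -> int:
    """Implements the constraint 'do not think about doing unpleasant
    things': any plan whose immediate simulated discomfort exceeds the
    threshold never gets fully evaluated, so the visit keeps being deferred."""
    for day in range(horizon):
        immediate_pain = 6 + 3 * day    # what simulating the visit feels like
        if immediate_pain <= flinch_threshold:
            return day                  # only a tolerable-feeling plan is adopted
    return horizon - 1                  # never faced it; goes only when forced

horizon = 30
for name, planner in [("non-craving", non_craving_planner),
                      ("craving", craving_planner)]:
    day = planner(horizon)
    print(f"{name}: dentist on day {day}, total discomfort {total_discomfort(day)}")
```

The point is purely structural: a planner that never finishes simulating unpleasant options ends up choosing as if those options did not exist, and so accumulates far more discomfort overall.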
You clearly speak as if going to a dentist when you have a toothache is the right thing to do, but why?
Because even when you let go of craving, you still have all of your other values.
I find it helpful to think of craving and non-craving as two layers of motivation: at the bottom there is one system of motivations which is doing things, and then on top there is craving, which sets a variety of its own goals. Decision-making tends to involve a mixture of motivations, some of them coming from craving and some of them coming from non-craving. But craving tends to be so “loud”, and so frequently the dominant form of motivation, that the other motivations can become hard to notice.
As an example, maybe you have had an experience where you are just trying out something for the first time, and don’t have any major expectations one way or the other; you have a relaxed time. Because you are so relaxed and non-stressed, things go well and it ends up being really enjoyable. Afterwards, you develop a desire to repeat the experience and ensure that it goes that well again; as a result, it doesn’t, because you are so focused on how to repeat it rather than on doing things in the relaxed way that actually got you the positive result the first time.
The first time you were acting without craving, which led to good results; then craving seized upon those good results and tried to repeat them, which did not go as well.
(For me, a particularly clear example of this is in the context of romantic relationships. If I’m meeting someone for the first time, I might be relaxed and not particularly focused on whether it will lead to an actual relationship or not. But then if it looks like we might actually end up in a relationship, I can get a major craving towards wanting things to go that way, and then make a mess of it.)
As for Western, non-mystical contexts where people have picked up on the craving thing, the examples from this newsletter feel related:
First up is The Inner Game Of Work by W. Timothy Gallwey. This is a successor to The Inner Game Of Tennis, though this one speaks more clearly to me as a non-sporty person. The key thesis of the book is that we have two ‘Selves’: Self 1 and Self 2.
Self 1 is the voice in your head that gives instructions, e.g. to hit the tennis ball, and then criticises performance as good or bad. You know this voice well, I’m sure. Self 2 is the one that actually hits the ball.
When I was playing at my best, I wasn’t trying to control my shots with self-instruction and evaluation. It was a much simpler process than that. I saw the ball clearly, chose where I wanted to hit it, and I let it happen. Surprisingly, the shots were more controlled when I didn’t try to control them.
This is a book where I was nodding along and highlighting every other sentence with notes like “yes!! this is AT!!”. In this context, Alexander Technique is a method to shut Self 1 up and allow Self 2 to express itself.
In those terms, “Self 1” is associated with the construct of the self, as well as craving. “Self 2” refers to the subsystems that just do stuff regardless, and may indeed often do better if the craving doesn’t get in the way.
Thank you for your reply, and it does clarify some things for me. If I may summarise in short, I think you are saying:
Craving is a bad sort of motivation because it makes you react badly to obstacles, but other sorts of motivation can be fine.
Self-conscious/craving-filled states of mind can be unproductive when trying to act on these other sorts of motivations.
I still have some questions though.
You say you may pursue pleasure because you value it for its own sake. But what is the self (or subsystem?) that is doing this valuing? It feels like the valuer is a lot like a “Self 1”, the kind of self which meditation should expose to be some kind of delusion.
Here’s an attempt to put the question another way. Someone suggested in one of the previous comment threads about the topic that non-self was a bit like not identifying with your short-term desires, and also your long-term desires (and then eventually not identifying with anything). So why is identifying yourself with your values compatible with non-self?
EDIT: I reproduce here part of my response to lsusr, which I think is relevant, and is perhaps yet another way to ask the same question.
Typically, when we reason about what actions we should or should not perform, at the base of that reasoning is something of the form “X is intrinsically bad.” Now, I’d always associated “X is intrinsically bad” with some sort of statement like “X induces a mental state that feels wrong.” Do I have access to this line of reasoning as a perfect meditator?
Concretely, if someone asked me why I would go to a dentist if my teeth were rotting, I would have to reply that I do so because I value my health, or maybe because unhealthiness is intrinsically bad. And if they asked me why I value my health, I cannot answer except to point to the fact that it does not feel good to me, in my head. But from what I understand, the enlightened cannot say this, because they feel that everything is good to them, in their heads.
I kind of feel that the enlightened cannot provide any reasons for their actions at all.
Craving is a bad sort of motivation because it makes you react badly to obstacles, but other sorts of motivation can be fine.
Self-conscious/craving-filled states of mind can be unproductive when trying to act on these other sorts of motivations.
Roughly, yes, though I would be a bit cautious about framing craving as outright bad, more like “the tradeoffs involved may make it better to let go of it in the end”; but of course, that depends on what exactly one is trying to achieve. As I noted in the post, it is also possible for one to weaken their craving with bad results, at least if we evaluate “results” from the point of view of achieving things.
You say you may pursue pleasure because you value it for its own sake. But what is the self (or subsystem?) that is doing this valuing? It feels like the valuer is a lot like a “Self 1”, the kind of self which meditation should expose to be some kind of delusion.
Different subsystems make valuations all the time; that’s not an illusion. What’s illusory is the notion that all of the different valuations are coming from a single self, and that positive/negative valence are things that the system intrinsically has to pursue/avoid.
For instance, one part of the mechanism is that at any given moment, you may have conscious intentions about what to do next. If you have two conflicting intentions, then those conflicting intentions are generated by different subsystems. However, frequently the mind-system attributes all intentions to a single source: “the self”. Operating based on that assumption, the mind-system models itself as having a single decision-maker that generates all intentions and observes all experiences.
In The Apologist and the Revolutionary, Scott Alexander writes:
Anosognosia is the condition of not being aware of your own disabilities. [...] Take the example of the woman discussed in Lishman’s Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn’t actually her arm, it was her daughter’s. Why was her daughter’s arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter’s hand? The patient said her daughter had borrowed it. Where was the patient’s arm? The patient “turned her head and searched in a bemused way over her left shoulder”. [...]
Dr. Ramachandran [...] posits two different reasoning modules located in the two different hemispheres. The left brain tries to fit the data to the theory to preserve a coherent internal narrative and prevent a person from jumping back and forth between conclusions upon each new data point. It is primarily an apologist, there to explain why any experience is exactly what its own theory would have predicted. The right brain is the seat of the second virtue. When it’s had enough of the left-brain’s confabulating, it initiates a Kuhnian paradigm shift to a completely new narrative. Ramachandran describes it as “a left-wing revolutionary”.
Normally these two systems work in balance. But if a stroke takes the revolutionary offline, the brain loses its ability to change its mind about anything significant. If your left arm was working before your stroke, the little voice that ought to tell you it might be time to reject the “left arm works fine” theory goes silent. The only one left is the poor apologist, who must tirelessly invent stranger and stranger excuses for why all the facts really fit the “left arm works fine” theory perfectly well. [...]
This divorce between the apologist and the revolutionary might also explain some of the odd behavior of split-brain patients. Consider the following experiment: a split-brain patient was shown two images, one in each visual field. The left hemisphere received the image of a chicken claw, and the right hemisphere received the image of a snowed-in house. The patient was asked verbally to describe what he saw, activating the left (more verbal) hemisphere. The patient said he saw a chicken claw, as expected. Then the patient was asked to point with his left hand (controlled by the right hemisphere) to a picture related to the scene. Among the pictures available were a shovel and a chicken. He pointed to the shovel. So far, no crazier than what we’ve come to expect from neuroscience.
Now the doctor verbally asked the patient to describe why he just pointed to the shovel. The patient verbally (left hemisphere!) answered that he saw a chicken claw, and of course shovels are necessary to clean out chicken sheds, so he pointed to the shovel to indicate chickens. The apologist in the left-brain is helpless to do anything besides explain why the data fits its own theory, and its own theory is that whatever happened had something to do with chickens, dammit!
One way of explaining the construct of the self is that there’s a reasoning module which constructs a story of there being a single decision-maker, “the self”, that’s deciding everything. In the case of the split-brain patient, a subsystem has decided to point at a shovel because it’s related to the sight of the snowed-in house that it saw; but the subsystem that is constructing the narrative of the self being in charge of everything has only seen a chicken claw. So in order to fit the things that it knows into a coherent story, it creates a spurious narrative where the self saw the chicken claw, and shovels are needed for cleaning chicken sheds, so that’s the reason why the self picked the shovel.
But what actually made the decision was an independent subsystem that was cut off from the self-narrative subsystem, which happened to infer that a shovel is useful for digging your way out of a snowed-in house. The subsystem creating the construct of the self wasn’t responsible for the decision nor the implicit valuations involved in it, it merely happened to create a story that took the credit for what another subsystem had already done.
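(As a deliberately cartoonish sketch of that architecture, with all names and associations invented and no claim intended about actual hemispheres:)

```python
# Cartoon of the split-brain story: one subsystem decides, a separate
# narrator subsystem explains, and the narrator's story is built only
# from what *it* saw. All names and data here are invented.

RELATED_ITEM = {"snowed-in house": "shovel", "chicken claw": "chicken"}

def decision_subsystem(percept: str) -> str:
    """Picks the item associated with its own percept (here: the house)."""
    return RELATED_ITEM[percept]

def narrator_subsystem(own_percept: str, observed_action: str) -> str:
    """Never made the decision, but constructs a story in which 'the self'
    did, using only the percept it has access to."""
    return (f"I saw a {own_percept}, and a {observed_action} is needed to "
            f"deal with {own_percept}s, so that is why I picked the "
            f"{observed_action}.")

action = decision_subsystem("snowed-in house")  # the actual cause
print(narrator_subsystem("chicken claw", action))
# The printed explanation is fluent and coherent; the real cause (the
# snowed-in house) appears nowhere in it.
```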
Seeing the nature of the self doesn’t stop you from making valuations, it just makes you see that they are not coming from the self. But many of the valuations themselves remain unchanged by that. (As the Zen proverb goes: “Before enlightenment, chop wood, carry water. After enlightenment, chop wood, carry water.”)
Thank you for your reply, which is helpful. I understand it takes time and energy to compose these responses, so please don’t feel too pressured to keep responding.
1. You say that positive/negative valence are not things that the system intrinsically has to pursue/avoid. Then when the system says it values something, why does it say this? A direct question: there exists at least a single case in which the why is not answered by positive/negative valence (or perhaps it is not answered at all). What is this case, and what is the answer to the why?
2. Often in real life, we feel conflicted within ourselves. Maybe different valuations made by different parts of us contradict each other in some particular situation. And then we feel confused. Now one way we resolve this contradiction is to reason about our values. Maybe you sit and write down a series of assumptions, logical deductions, etc. The output of this process is not just another thing some subsystem is shouting about. Reasons are the kind of things that motivate action, in anyone. So it seems the reasoning module is somehow special, and I think there’s a long tradition in Western philosophy of equating this reasoner with the self. This self takes into account all the things parts of it feel and value, and makes a decision. This self computes the tradeoffs involved in keeping/letting go of craving. What do you think about this?
I think you are saying that the reasoning module is also somehow always under suspicion of producing mere rationalisations (like in the chicken claw story), and that even when we think it is the reasoning module making a decision, we’re often deluded. But if the reasoning module, and every other module, is to be treated as somehow not-final, how do (should) you make a decision when you’re confused? I think you would reject this kind of first-person decision making, and give a sort of third-person explanation of how the brain just does make decisions, somehow accumulating the things various subsystems say. But this provides no practical knowledge about what processes the brains of people who end up making good (or bad) decisions deploy.
3. This is unrelated to my main point, but the brain showing some behaviour in an ‘abnormal’ situation does not mean the same behaviour exists in the ‘normal’ situation. In particular, the theory that there are multiple subsystems doing their own thing might make sense in the case of the person with anosognosia or the person experiencing a binocular rivalry illusion, but it does not follow that the normal person in a normal situation also has multiple subsystems in the same way. Perhaps it might follow if you have a mechanistic, reductionist account of how the brain works. I’m not being merely pedantic; Merleau-Ponty takes this quite seriously in his analysis of Schneider.
I understand it takes time and energy to compose these responses, so please don’t feel too pressured to keep responding.
Appreciated. :) Answering these in detail is also useful, in that it helps me figure out which things I should mention in my future posts—I might copy-paste some parts of my answers here, right into some of my next posts…
1. You say that positive/negative valence are not things that the system intrinsically has to pursue/avoid. Then when the system says it values something, why does it say this? A direct question: there exists at least a single case in which the why is not answered by positive/negative valence (or perhaps it is not answered at all). What is this case, and what is the answer to the why?
It might be helpful to notice that positive/negative valence is usually already one step removed from some underlying set of values. For example:
Appraisal theories of emotion hold that emotional responses (with their underlying positive or negative valence) are the result of subconscious evaluations about the significance of a situation, relative to the person’s goals. An evaluation saying that you have lost something important to you, for example, may trigger the emotion of sadness with its associated negative valence.
In the case of Richard, a subsystem within his brain had formed the prediction that if he were to express confidence, this would cause other people to dislike him. It then generated negative self-talk to prevent him from being confident. Presumably the self-talk had some degree of negative valence; in this case that served as a tool that the subsystem could use to block a particular action it deemed bad.
Consider a situation where you are successfully carrying out some physical activity; playing a fast-paced sport or video game, for example. This is likely to be associated with positive valence, which emerges from the fact that you are having success at the task. On the other hand, if you were failing to keep up and couldn’t get into a good flow, you would likely experience negative valence.
What I’m trying to point at here is that valence looks like a signal about whether or not some set of goals/values are being successfully attained. A subsystem may have a goal X which it pursues independently, and depending on how well it goes, valence is produced as a result; and subsystem A may also produce different levels of valence in order to affect the behavior of subsystem B, to cause subsystem B to act in the way that subsystem A values.
In this model, because valence tends to signal states that are good/bad for the achievement of an organism’s goals, craving acts as an additional mechanism that “grabs onto” states that seem to be particularly good/bad, and tries to direct the organism more strongly towards those. But the underlying machinery that is producing the valence was always optimizing for some deeper set of values, which only produced valence as a byproduct.
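(Sketching that model in toy code, with the functional forms and the threshold chosen arbitrarily for illustration:)

```python
# Rough sketch of "valence as a progress report, craving as an amplifier".
# The functional forms and the 0.5 threshold are arbitrary illustrations.

def valence(goal_progress: float) -> float:
    """A signal *derived from* how well some subsystem's goal is going;
    a byproduct of the underlying values, not a goal in itself."""
    return goal_progress  # any monotone mapping would make the same point

def craving_boost(v: float, threshold: float = 0.5) -> float:
    """Craving grabs onto states whose valence stands out as particularly
    good or bad, and pushes the organism extra hard toward/away from them."""
    return 3.0 * v if abs(v) > threshold else 0.0

for progress in (-0.9, -0.2, 0.3, 0.8):
    v = valence(progress)
    print(f"goal progress {progress:+.1f} -> valence {v:+.1f}, "
          f"craving adds {craving_boost(v):+.1f}")
```

The underlying valuation (goal progress) is there whether or not the craving layer amplifies it; the amplifier is an extra mechanism, not the source of the valuation.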
Unfortunately a comprehensive answer to the question of “what are the decision criteria, if not valence” would require a complete theory of human motivation and values, and I don’t have one. :)
2. Often in real life, we feel conflicted within ourselves. Maybe different valuations made by different parts of us contradict each other in some particular situation. And then we feel confused. Now one way we resolve this contradiction is to reason about our values. Maybe you sit and write down a series of assumptions, logical deductions, etc. The output of this process is not just another thing some subsystem is shouting about. Reasons are the kind of things that motivate action, in anyone. So it seems the reasoning module is somehow special, and I think there’s a long tradition in Western philosophy of equating this reasoner with the self. This self takes into account all the things parts of it feel and value, and makes a decision. This self computes the tradeoffs involved in keeping/letting go of craving. What do you think about this?
I think you are saying that the reasoning module is also somehow always under suspicion of producing mere rationalisations (like in the chicken claw story), and that even when we think it is the reasoning module making a decision, we’re often deluded. But if the reasoning module, and every other module, is to be treated as somehow not-final, how do (should) you make a decision when you’re confused?
I am not making the claim that reasoning would always only be rationalization. Rather, the chicken claw story was intended to suggest that one particular reasoning module tends to generate a story of a self that acts as the decision-maker. I don’t even think that the module is rationalizing in the sense of being completely resistant to new evidence: if it was, all of this meditation aimed at exploring no-self would be pretty pointless.
Rather, I think that the situation is more like Scott described in his post: the self-narrative subsystem starts out with a strong prior for one particular hypothesis (with that hypothesis also being culturally reinforced and learned), and creates an explanation which fits things into that hypothesis, treating deviations from it as noise to be discarded. But if it gets the right kind of evidence about the nature of the self (which certain kinds of meditation provide it), then it will update its theories and eventually settle on a different narrative.
To answer your actual question, we certainly do all kinds of reasoning, and this reasoning may certainly resolve internal conflicts or cause us to choose certain kinds of behavior. But I think that reasoning in general is distinct from the experience of a self. For example, in an earlier post, I talked about the mechanisms by which one may learn to carry out arithmetical reasoning by internalizing a set of rules about how to manipulate numbers; and then later, about how Kahneman’s “System 2” represents a type of reasoning where different subsystems are chaining together their outputs through working memory. So we certainly reason, and that reasoning does provide us with reasons for our behavior, but I see no need to assume that the reasoning would require a self.
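(For a toy version of what rule-based reasoning without a central reasoner can look like: the decomposition below is mine and purely illustrative.)

```python
# A toy version of reasoning without a central reasoner: multi-digit
# addition done by internalized rules that pass partial results through
# a shared "working memory". A cartoon, not a cognitive model.

def add_via_rules(a: str, b: str) -> str:
    working_memory = {"carry": 0, "digits": []}
    # Pad to equal length and work right to left, like the school algorithm.
    a, b = a.zfill(len(b)), b.zfill(len(a))
    for da, db in zip(reversed(a), reversed(b)):
        # Rule 1: recall the memorized single-digit sum, plus any held carry.
        s = int(da) + int(db) + working_memory["carry"]
        # Rule 2: write the last digit, hold the carry in working memory.
        working_memory["digits"].append(str(s % 10))
        working_memory["carry"] = s // 10
    if working_memory["carry"]:
        working_memory["digits"].append(str(working_memory["carry"]))
    return "".join(reversed(working_memory["digits"]))

print(add_via_rules("478", "964"))  # -> 1442
```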
This is unrelated to my main point, but the brain showing some behaviour in an ‘abnormal’ situation does not mean the same behaviour exists in the ‘normal’ situation. In particular, the theory that there are multiple subsystems doing their own thing might make sense in the case of the person with anosognosia or the person experiencing a binocular rivalry illusion, but it does not follow that the normal person in a normal situation also has multiple subsystems in the same way.
I agree that abnormal situations by themselves are not conclusive evidence, yes.
This makes sense.
Talking about how lifting X-wings is impossible is social work and not weightlifting. Can you please start yeeting and stop blabbering?
Once we have deemed that wanting to pursue pleasure and happiness is a wireheading-like impulse, why stop ourselves from saying that wanting to impact the world is a wireheading-like impulse?
Is there a way you can rephrase this question without using the word “wirehead”? When discussing meditation, the word “wirehead” can have two very different meanings. Usually, “wirehead” refers to the gross failure mode of heavy meditation where a practitioner anesthetizes him/herself into a potato. Kaj_Sotala has used the word “wirehead” to refer to a specific, subtle consequence of taṇhā.
Why isn’t a desire to avoid death craving?
A desire to avoid death is craving. (In fact, death is one of the Four Sights.) The actions of postponing death are not craving. Only the desire to avoid death is.
You clearly speak as if going to a dentist when you have a tooth ache is the right thing to do, but why?
Because you have a toothache and your teeth will rot if you don’t go to a dentist.
Once you distance your ‘self’ from pain, why not distance yourself from your rotting teeth?
Penetrating taṇhā is the opposite of distancing. It’s about accepting the world right now as it is. If your teeth are rotting right this instant then you should accept that your teeth are rotting right this instant. Such is the Litany of Tarski.
If my teeth are rotting,
then I desire to believe my teeth are rotting;
If my teeth are not rotting,
then I desire to believe my teeth are not rotting;
Let me not become attached to beliefs I may not want.
The thing you distance yourself from isn’t the pain, it’s your self. Kaj_Sotala’s post is about taṇhā, the craving that drives dukkha, one of the Three Characteristics of Existence. Another Characteristic of Existence is anattā or non-self. I hope this becomes clearer once Kaj_Sotala gets to anattā in this series.
All my intuitions about how to act are based on this flawed sense of self. And from what you are outlining, I don’t see how any intuition about the right way to act can possibly remain once we lose this flawed sense of self.
It is possible to do something without craving it. For example, consider relaxing on a tropical beach and reaching over to drink a mango smoothie. Now, consider the instant you are mid-sip, sucking through the straw while the flavor washes over your mouth. In that instant, you act without craving.
The same goes for when you are engrossed in fun conversation with close friends and family.
There’s a general discomfort I have with this series of posts that I’m not able to fully articulate, but the above questions seem related.
Good!
Typically, when we reason about what actions we should or should not perform, at the base of that reasoning is something of the form “X is intrinsically bad.” Now, I’d always associated “X is intrinsically bad” with some sort of statement like “X induces a mental state that feels wrong.” Do I have access to this line of reasoning as a meditator?
Concretely, if someone asked me why I would go to a dentist if my teeth were rotting, I would have to reply that I do so because I care about my health or maybe because unhealthiness is intrinsically bad. And if they asked me why I care about my health, I cannot answer except to point to the fact that it does not feel good to me, in my head. But from what I understand, the enlightened cannot say this, because they feel that everything is good to them, in their heads.
In fact, the later part of your response makes me feel that the enlightened cannot provide any reasons for their actions at all.
Concretely, if someone asked me why I would go to a dentist if my teeth were rotting, I would have to reply that I do so because I care about my health or maybe because unhealthiness is intrinsically bad. And if they asked me why I care about my health, I cannot answer except to point to the fact that it does not feel good to me, in my head. But from what I understand, the enlightened cannot say this, because they feel that everything is good to them, in their heads.
Even to the enlightened, experiences with positive valence still feel like they have positive valence; experiences with negative valence still feel like they have negative valence. (Well, there are accounts which disagree with this and claim that perpetual positive experience is possible, but I am skeptical of those.) One can still prefer states with positive valence, and say that “they just feel good to me”—one is just okay with the possibility of not always getting them.
I realize that this is hard to imagine if you haven’t actually experienced it. An analogy that’s kind of close might be if you were offered a choice between two foods that you were almost indifferent over, but just slightly preferred option B. Given the choice, you ask to have B, but if you were given A instead, you wouldn’t feel any less happy for it. At least, you could let go of your disappointment very quickly.