Thank you for your reply, and it does clarify some things for me. If I may summarise in short, I think you are saying:
Craving is a bad sort of motivation because it makes you react badly to obstacles, but other sorts of motivation can be fine.
Self-conscious/craving-filled states of mind can be unproductive when trying to act on these other sorts of motivations.
I still have some questions though.
You say you may pursue pleasure because you value it for its own sake. But what is the self (or subsystem?) that is doing this valuing? It feels like the valuer is a lot like a “Self 1”, the kind of self which meditation should expose to be some kind of delusion.
Here’s an attempt to put the question another way. Someone suggested in one of the previous comment threads about the topic that non-self was a bit like not identifying with your short-term desires, and also your long-term desires (and then eventually not identifying with anything). So why is identifying yourself with your values compatible with non-self?
EDIT: I reproduce here part of my response to Isusr, which I think is relevant, and is perhaps yet another way to ask the same question.
Typically, when we reason about what actions we should or should not perform, at the base of that reasoning is something of the form “X is intrinsically bad.” Now, I’d always associated “X is intrinsically bad” with some sort of statement like “X induces a mental state that feels wrong.” Do I have access to this line of reasoning as a perfect meditator?
Concretely, if someone asked me why I would go to a dentist if my teeth were rotting, I would have to reply that I do so because I value my health, or maybe because unhealthiness is intrinsically bad. And if they asked me why I value my health, I cannot answer except to point to the fact that it does not feel good to me, in my head. But from what I understand, the enlightened cannot say this, because they feel that everything is good to them, in their heads.
I kind of feel that the enlightened cannot provide any reasons for their actions at all.
Craving is a bad sort of motivation because it makes you react badly to obstacles, but other sorts of motivation can be fine.
Self-conscious/craving-filled states of mind can be unproductive when trying to act on these other sorts of motivations.
Roughly, yes, though I would be a bit cautious about framing craving as outright bad, more like “the tradeoffs involved may make it better to let go of it in the end”; but of course, that depends on what exactly one is trying to achieve. As I noted in the post, it is also possible for one to weaken their craving with bad results, at least if we evaluate “results” from the point of view of achieving things.
You say you may pursue pleasure because you value it for its own sake. But what is the self (or subsystem?) that is doing this valuing? It feels like the valuer is a lot like a “Self 1”, the kind of self which meditation should expose to be some kind of delusion.
Different subsystems make valuations all the time; that’s not an illusion. What’s illusory is the notion that all of the different valuations are coming from a single self, and that positive/negative valence are things that the system intrinsically has to pursue/avoid.
For instance, one part of the mechanism is that at any given moment, you may have conscious intentions about what to do next. If you have two conflicting intentions, then those conflicting intentions are generated by different subsystems. However, frequently the mind-system attributes all intentions to a single source: “the self”. Operating based on that assumption, the mind-system models itself as having a single decision-maker that generates all intentions and observes all experiences.
In The Apologist and the Revolutionary, Scott Alexander writes:
Anosognosia is the condition of not being aware of your own disabilities. [...] Take the example of the woman discussed in Lishman’s Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn’t actually her arm, it was her daughter’s. Why was her daughter’s arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter’s hand? The patient said her daughter had borrowed it. Where was the patient’s arm? The patient “turned her head and searched in a bemused way over her left shoulder”. [...]
Dr. Ramachandran [...] posits two different reasoning modules located in the two different hemispheres. The left brain tries to fit the data to the theory to preserve a coherent internal narrative and prevent a person from jumping back and forth between conclusions upon each new data point. It is primarily an apologist, there to explain why any experience is exactly what its own theory would have predicted. The right brain is the seat of the second virtue. When it’s had enough of the left-brain’s confabulating, it initiates a Kuhnian paradigm shift to a completely new narrative. Ramachandran describes it as “a left-wing revolutionary”.
Normally these two systems work in balance. But if a stroke takes the revolutionary offline, the brain loses its ability to change its mind about anything significant. If your left arm was working before your stroke, the little voice that ought to tell you it might be time to reject the “left arm works fine” theory goes silent. The only one left is the poor apologist, who must tirelessly invent stranger and stranger excuses for why all the facts really fit the “left arm works fine” theory perfectly well. [...]
This divorce between the apologist and the revolutionary might also explain some of the odd behavior of split-brain patients. Consider the following experiment: a split-brain patient was shown two images, one in each visual field. The left hemisphere received the image of a chicken claw, and the right hemisphere received the image of a snowed-in house. The patient was asked verbally to describe what he saw, activating the left (more verbal) hemisphere. The patient said he saw a chicken claw, as expected. Then the patient was asked to point with his left hand (controlled by the right hemisphere) to a picture related to the scene. Among the pictures available were a shovel and a chicken. He pointed to the shovel. So far, no crazier than what we’ve come to expect from neuroscience.
Now the doctor verbally asked the patient to describe why he just pointed to the shovel. The patient verbally (left hemisphere!) answered that he saw a chicken claw, and of course shovels are necessary to clean out chicken sheds, so he pointed to the shovel to indicate chickens. The apologist in the left-brain is helpless to do anything besides explain why the data fits its own theory, and its own theory is that whatever happened had something to do with chickens, dammit!
One way of explaining the construct of the self is that there’s a reasoning module which constructs a story of there being a single decision-maker, “the self”, that’s deciding everything. In the case of the split-brain patient, a subsystem has decided to point at a shovel because it’s related to the sight of the snowed-in house that it saw; but the subsystem that is constructing the narrative of the self being in charge of everything has only seen a chicken claw. So in order to fit the things that it knows into a coherent story, it creates a spurious narrative where the self saw the chicken claw, and shovels are needed for cleaning chicken sheds, so that’s the reason why the self picked the shovel.
But what actually made the decision was an independent subsystem that was cut off from the self-narrative subsystem, and which happened to infer that a shovel is useful for digging your way out of a snowed-in house. The subsystem creating the construct of the self wasn’t responsible for the decision or the implicit valuations involved in it; it merely happened to create a story that took the credit for what another subsystem had already done.
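To make the shape of that story more concrete, here is a minimal toy sketch in Python. Everything in it (the function names, the lookup table, the specific phrasing) is invented purely for illustration; the point is just that one subsystem can make the decision based on information that another, narrating subsystem never sees, and the narrator still produces a confident explanation from what it does see.

```python
# Toy illustration only, not a model of real neuroscience: an "action"
# subsystem decides based on its own input, while a separate "narrator"
# subsystem explains the action using only the input *it* received.

def action_subsystem(own_input, options):
    """Picks the option most related to this subsystem's own input."""
    related = {"snowed-in house": "shovel", "chicken claw": "chicken"}
    choice = related.get(own_input)
    return choice if choice in options else options[0]

def narrator_subsystem(own_input, observed_action):
    """Never sees the other subsystem's input, but still constructs a
    story in which 'the self' chose the action for reasons based on
    what this subsystem saw."""
    return (f"I saw a {own_input}, and a {observed_action} is obviously "
            f"useful when dealing with {own_input}s, so that's why I "
            f"picked the {observed_action}.")

options = ["shovel", "chicken"]
action = action_subsystem("snowed-in house", options)  # the actual decision
story = narrator_subsystem("chicken claw", action)     # the confabulated credit
print(action)  # shovel
print(story)   # a chicken-based explanation for a snow-based decision
```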
Seeing the nature of the self doesn’t stop you from making valuations; it just makes you see that they are not coming from the self. But many of the valuations themselves remain unchanged by that. (As the Zen proverb goes: “Before enlightenment, chop wood, carry water. After enlightenment, chop wood, carry water.”)
Thank you for your reply, which is helpful. I understand it takes time and energy to compose these responses, so please don’t feel too pressured to keep responding.
1. You say that positive/negative valence are not things that the system intrinsically has to pursue/avoid. Then when the system says it values something, why does it say this? A direct question: there exists at least one case in which the why is not answered by positive/negative valence (or perhaps it is not answered at all). What is this case, and what is the answer to the why?
2. Often in real life, we feel conflicted within ourselves. Maybe different valuations made by different parts of us contradict each other in some particular situation. And then we feel confused. Now one way we resolve this contradiction is to reason about our values. Maybe you sit and write down a series of assumptions, logical deductions, etc. The output of this process is not just another thing some subsystem is shouting about. Reasons are the kind of things that motivate action, in anyone. So it seems the reasoning module is somehow special, and I think there’s a long tradition in Western philosophy of equating this reasoner with the self. This self takes into account all the things parts of it feel and value, and makes a decision. This self computes the tradeoffs involved in keeping/letting go of craving. What do you think about this?
I think you are saying that the reasoning module is also somehow always under suspicion of producing mere rationalisations (like in the chicken claw story), and that even when we think it is the reasoning module making a decision, we’re often deluded. But if the reasoning module, and every other module, is to be treated as somehow not-final, how do (should) you make a decision when you’re confused? I think you would reject this kind of first-person decision making, and give a sort of third-person explanation of how the brain just does make decisions, somehow accumulating the things various subsystems say. But this provides no practical knowledge about what processes the brains of people who end up making good (or bad) decisions deploy.
3. This is unrelated to my main point, but the brain showing some behaviour in an ‘abnormal’ situation does not mean the same behaviour exists in the ‘normal’ situation. In particular, the theory that there are multiple subsystems doing their own thing might make sense in the case of the person with anosognosia or the person experiencing a binocular rivalry illusion, but it does not follow that the normal person in a normal situation also has multiple subsystems in the same way. Perhaps it might follow if you have a mechanistic, reductionist account of how the brain works. I’m not being merely pedantic; Merleau-Ponty takes this quite seriously in his analysis of Schneider.
I understand it takes time and energy to compose these responses, so please don’t feel too pressured to keep responding.
Appreciated. :) Answering these in detail is also useful, in that it helps me figure out which things I should mention in my future posts—I might copy-paste some parts of my answers here, right into some of my next posts…
1. You say that positive/negative valence are not things that the system intrinsically has to pursue/avoid. Then when the system says it values something, why does it say this? A direct question: there exists at least one case in which the why is not answered by positive/negative valence (or perhaps it is not answered at all). What is this case, and what is the answer to the why?
It might be helpful to notice that positive/negative valence is usually already one step removed from some underlying set of values. For example:
Appraisal theories of emotion hold that emotional responses (with their underlying positive or negative valence) are the result of subconscious evaluations about the significance of a situation, relative to the person’s goals. An evaluation saying that you have lost something important to you, for example, may trigger the emotion of sadness with its associated negative valence.
In the case of Richard, a subsystem within his brain had formed the prediction that if he were to express confidence, this would cause other people to dislike him. It then generated negative self-talk to prevent him from being confident. Presumably the self-talk had some degree of negative valence; in this case that served as a tool that the subsystem could use to block a particular action it deemed bad.
Consider a situation where you are successfully carrying out some physical activity: playing a fast-paced sport or video game, for example. This is likely to be associated with positive valence, which emerges from the fact that you are having success at the task. On the other hand, if you were failing to keep up and couldn’t get into a good flow, you would likely experience negative valence.
What I’m trying to point at here is that valence looks like a signal about whether or not some set of goals/values is being successfully attained. A subsystem may have a goal X which it pursues independently, and depending on how well it goes, valence is produced as a result; and subsystem A may also produce different levels of valence in order to affect the behavior of subsystem B, to cause subsystem B to act in the way that subsystem A values.
In this model, because valence tends to signal states that are good/bad for the achievement of an organism’s goals, craving acts as an additional mechanism that “grabs onto” states that seem to be particularly good/bad, and tries to direct the organism more strongly towards those. But the underlying machinery that is producing the valence was always optimizing for some deeper set of values, which only produced valence as a byproduct.
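As a rough sketch of the shape of this model (all numbers, thresholds, and function names here are made up for illustration; this is not a claim about how the machinery is actually implemented): valence falls out of how well some goal is going, and craving is a separate process that latches onto strongly valenced states and adds an extra push of its own.

```python
# Toy sketch of "valence as a progress signal, craving as an amplifier".
# All quantities are arbitrary illustrations, not empirical claims.

def valence(goal_progress):
    """Valence as a byproduct of goal pursuit: positive when things are
    going well, negative when they are going badly (progress in [-1, 1])."""
    return goal_progress

def craving(v, threshold=0.6):
    """A separate mechanism that 'grabs onto' strongly valenced states
    and pushes the organism harder toward or away from them."""
    if abs(v) < threshold:
        return 0.0        # mildly valenced states: no craving/aversion kicks in
    return 0.5 * v        # strongly valenced states: extra pull, same direction

def motivation(goal_progress):
    v = valence(goal_progress)
    return v + craving(v)  # the underlying goal drives v; craving only amplifies it

for progress in (-0.9, -0.3, 0.2, 0.8):
    print(progress, valence(progress), motivation(progress))
```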
Unfortunately a comprehensive answer to the question of “what is the decision criterion, if not valence” would require a complete theory of human motivation and values, and I don’t have one. :)
2. Often in real life, we feel conflicted within ourselves. Maybe different valuations made by different parts of us contradict each other in some particular situation. And then we feel confused. Now one way we resolve this contradiction is to reason about our values. Maybe you sit and write down a series of assumptions, logical deductions, etc. The output of this process is not just another thing some subsystem is shouting about. Reasons are the kind of things that motivate action, in anyone. So it seems the reasoning module is somehow special, and I think there’s a long tradition in Western philosophy of equating this reasoner with the self. This self takes into account all the things parts of it feel and value, and makes a decision. This self computes the tradeoffs involved in keeping/letting go of craving. What do you think about this?
I think you are saying that the reasoning module is also somehow always under suspicion of producing mere rationalisations (like in the chicken claw story), and that even when we think it is the reasoning module making a decision, we’re often deluded. But if the reasoning module, and every other module, is to be treated as somehow not-final, how do (should) you make a decision when you’re confused?
I am not making the claim that reasoning would always only be rationalization. Rather, the chicken claw story was intended to suggest that one particular reasoning module tends to generate a story of a self that acts as the decision-maker. I don’t even think that the module is rationalizing in the sense of being completely resistant to new evidence: if it was, all of this meditation aimed at exploring no-self would be pretty pointless.
Rather, I think that the situation is more like Scott described in his post: the self-narrative subsystem starts out with a strong prior for one particular hypothesis (with that hypothesis also being culturally reinforced and learned), and creates an explanation which fits things into that hypothesis, treating deviations from it as noise to be discarded. But if it gets the right kind of evidence about the nature of the self (which certain kinds of meditation provide it), then it will update its theories and eventually settle on a different narrative.
To answer your actual question, we certainly do all kinds of reasoning, and this reasoning may certainly resolve internal conflicts or cause us to choose certain kinds of behavior. But I think that reasoning in general is distinct from the experience of a self. For example, in an earlier post, I talked about the mechanisms by which one may learn to carry out arithmetical reasoning by internalizing a set of rules about how to manipulate numbers; and then later, about how Kahneman’s “System 2” represents a type of reasoning where different subsystems are chaining together their outputs through working memory. So we certainly reason, and that reasoning does provide us with reasons for our behavior, but I see no need to assume that the reasoning would require a self.
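To illustrate the kind of thing I mean by reasoning that chains subsystem outputs through working memory, here is a purely schematic Python sketch. The rule set and the working-memory representation are invented for this example; the point is only that several simple rule-following steps can read from and write to a shared buffer and jointly arrive at an answer, with no step requiring a central decision-making self.

```python
# Schematic sketch: multi-column addition as rules chained through a
# shared working-memory buffer, with no central "self" doing the work.

def load_task(memory):
    memory["todo"] = (23, 45)            # e.g. "what is 23 + 45?"

def split_into_columns(memory):
    a, b = memory["todo"]
    memory["columns"] = [(a % 10, b % 10), (a // 10, b // 10)]

def add_columns(memory):
    ones = sum(memory["columns"][0])
    tens = sum(memory["columns"][1]) + ones // 10  # carry the ten if needed
    memory["answer"] = tens * 10 + ones % 10

memory = {}                              # the shared working memory
for step in (load_task, split_into_columns, add_columns):
    step(memory)                         # each rule only reads/writes the buffer
print(memory["answer"])                  # 68
```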
This is unrelated to my main point, but the brain showing some behaviour in an ‘abnormal’ situation does not mean the same behaviour exists in the ‘normal’ situation. In particular, the theory that there are multiple subsystems doing their own thing might make sense in the case of the person with anosognosia or the person experiencing a binocular rivalry illusion, but it does not follow that the normal person in a normal situation also has multiple subsystems in the same way.
I agree that abnormal situations by themselves are not conclusive evidence, yes.
This makes sense.