Thank you for your reply, which is helpful. I understand it takes time and energy to compose these responses, so please don’t feel too pressured to keep responding.
1. You say that positive/negative valence are not things that the system intrinsically has to pursue/avoid. Then when the system says it values something, why does it say this? A direct question: there must be at least one case in which the why is not answered by positive/negative valence (or perhaps it is not answered at all). What is that case, and what is the answer to the why?
2. Often in real life, we feel conflicted within ourselves. Maybe different valuations made by different parts of us contradict each other in some particular situation. And then we feel confused. Now one way we resolve this contradiction is to reason about our values. Maybe you sit and write down a series of assumptions, logical deductions, etc. The output of this process is not just another thing some subsystem is shouting about. Reasons are the kind of things that motivate action, in anyone. So it seems the reasoning module is somehow special, and I think there’s a long tradition in Western philosophy of equating this reasoner with the self. This self takes into account all the things parts of it feel and value, and makes a decision. This self computes the tradeoffs involved in keeping/letting go of craving. What do you think about this?
I think you are saying that the reasoning module is also somehow always under suspicion of producing mere rationalisations (like in the chicken claw story), and that even when we think it is the reasoning module making a decision, we’re often deluded. But if the reasoning module, and every other module, is to be treated as somehow not-final, how do (should) you make a decision when you’re confused? I think you would reject this kind of first-person decision making, and give a sort of third-person explanation of how the brain just does make decisions, somehow accumulating the things various subsystems say. But this provides no practical knowledge about what processes the brains of people who end up making good (or bad) decisions deploy.
3. This is unrelated to my main point, but the brain showing some behaviour in an ‘abnormal’ situation does not mean the same behaviour exists in the ‘normal’ situation. In particular, the theory that there are multiple subsystems doing their own thing might make sense in the case of the person with anosognosia or the person experiencing a binocular rivalry illusion, but it does not follow that the normal person in a normal situation also has multiple subsystems in the same way. Perhaps it might follow if you have a mechanistic, reductionist account of how the brain works. I’m not being merely pedantic; Merleau-Ponty takes this quite seriously in his analysis of Schneider.
I understand it takes time and energy to compose these responses, so please don’t feel too pressured to keep responding.
Appreciated. :) Answering these in detail is also useful, in that it helps me figure out which things I should mention in my future posts—I might copy-paste some parts of my answers here, right into some of my next posts…
1. You say that positive/negative valence are not things that the system intrinsically has to pursue/avoid. Then when the system says it values something, why does it say this? A direct question: there must be at least one case in which the why is not answered by positive/negative valence (or perhaps it is not answered at all). What is that case, and what is the answer to the why?
It might be helpful to notice that positive/negative valence is usually already one step removed from some underlying set of values. For example:
Appraisal theories of emotion hold that emotional responses (with their underlying positive or negative valence) are the result of subconscious evaluations about the significance of a situation, relative to the person’s goals. An evaluation saying that you have lost something important to you, for example, may trigger the emotion of sadness with its associated negative valence.
In the case of Richard, a subsystem within his brain had formed the prediction that if he were to express confidence, this would cause other people to dislike him. It then generated negative self-talk to prevent him from being confident. Presumably the self-talk had some degree of negative valence; in this case that served as a tool that the subsystem could use to block a particular action it deemed bad.
Consider a situation where you are successfully carrying out some physical activity; playing a fast-paced sport or video game, for example. This is likely to be associated with positive valence, which emerges from the fact that you are having success at the task. On the other hand, if you were failing to keep up and couldn’t get into a good flow, you would likely experience negative valence.
What I’m trying to point at here is that valence looks like a signal about whether or not some set of goals/values is being successfully attained. A subsystem may have a goal X which it pursues independently, with valence produced as a result depending on how well that pursuit goes; and subsystem A may also produce different levels of valence in order to affect the behavior of subsystem B, causing subsystem B to act in the way that subsystem A values.
In this model, because valence tends to signal states that are good/bad for the achievement of an organism’s goals, craving acts as an additional mechanism that “grabs onto” states that seem to be particularly good/bad, and tries to direct the organism more strongly towards or away from them. But the underlying machinery that is producing the valence was always optimizing for some deeper set of values, and only produced valence as a byproduct.
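To make the shape of that picture a bit more concrete, here is a deliberately simplified toy sketch. It is purely my own illustration—every name, number, and threshold in it is made up—and it is not meant as a claim about actual brain mechanisms, only as a way of showing how valence can fall out of goal pursuit, with craving as a separate amplifying layer:

```python
# Toy sketch only: valence as a progress signal that falls out of each
# subsystem pursuing its own goal, plus "craving" as a separate layer that
# amplifies whatever already looks especially good or bad. All names and
# numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    goal: float  # target value of some variable this subsystem cares about

    def valence(self, state: float) -> float:
        # Positive when the current state is close to the goal, negative when far.
        return 1.0 - 2.0 * abs(self.goal - state)

def with_craving(valence: float, threshold: float = 0.5, gain: float = 2.0) -> float:
    # Craving only "grabs onto" states that already seem particularly good or bad,
    # pushing more strongly toward/away from them; it adds no goals of its own.
    return gain * valence if abs(valence) > threshold else valence

state = 0.45
for s in (Subsystem("social approval", goal=0.9), Subsystem("skill practice", goal=0.5)):
    v = s.valence(state)
    print(f"{s.name}: valence={v:+.2f}, after craving={with_craving(v):+.2f}")
```

The point of the sketch is just that the `valence` function is downstream of each subsystem’s goal, and `with_craving` never introduces any value of its own—it only exaggerates signals that were already there.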
Unfortunately a comprehensive answer to the question of “what are the decision criteria, if not valence” would require a complete theory of human motivation and values, and I don’t have one. :)
2. Often in real life, we feel conflicted within ourselves. Maybe different valuations made by different parts of us contradict each other in some particular situation. And then we feel confused. Now one way we resolve this contradiction is to reason about our values. Maybe you sit and write down a series of assumptions, logical deductions, etc. The output of this process is not just another thing some subsystem is shouting about. Reasons are the kind of things that motivate action, in anyone. So it seems the reasoning module is somehow special, and I think there’s a long tradition in Western philosophy of equating this reasoner with the self. This self takes into account all the things parts of it feel and value, and makes a decision. This self computes the tradeoffs involved in keeping/letting go of craving. What do you think about this?
I think you are saying that the reasoning module is also somehow always under suspicion of producing mere rationalisations (like in the chicken claw story), and that even when we think it is the reasoning module making a decision, we’re often deluded. But if the reasoning module, and every other module, is to be treated as somehow not-final, how do (should) you make a decision when you’re confused?
I am not making the claim that reasoning would always only be rationalization. Rather, the chicken claw story was intended to suggest that one particular reasoning module tends to generate a story of a self that acts as the decision-maker. I don’t even think that the module is rationalizing in the sense of being completely resistant to new evidence: if it was, all of this meditation aimed at exploring no-self would be pretty pointless.
Rather, I think that the situation is more like Scott described in his post: the self-narrative subsystem starts out with a strong prior for one particular hypothesis (with that hypothesis also being culturally reinforced and learned), and creates an explanation which fits things into that hypothesis, treating deviations from it as noise to be discarded. But if it gets the right kind of evidence about the nature of the self (which certain kinds of meditation provide it), then it will update its theories and eventually settle on a different narrative.
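To illustrate the flavor of that updating process, here is a toy Bayesian calculation. The numbers are entirely made up—the prior, the likelihood ratio, and the count of observations are all assumptions for illustration—and it is only meant to show how a strong prior can coexist with eventual updating:

```python
# Made-up numbers, purely to show the shape of the updating process: a strong
# prior for the "unified self" narrative explains away occasional contrary
# observations, but consistent contrary evidence eventually flips it.

prior_self = 0.99        # strong, culturally reinforced prior for the self-narrative
likelihood_ratio = 3.0   # assume each contrary observation favors "no-self" 3:1

odds_self = prior_self / (1.0 - prior_self)
for n in range(11):
    posterior = odds_self / (odds_self + 1.0)
    print(f"after {n:2d} contrary observations: P(self-narrative) = {posterior:.3f}")
    odds_self /= likelihood_ratio
```

With these invented numbers, the first few contrary observations barely move the posterior—which is what “treating deviations as noise” looks like—while a sustained stream of them eventually does.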
To answer your actual question, we certainly do all kinds of reasoning, and this reasoning may certainly resolve internal conflicts or cause us to choose certain kinds of behavior. But I think that reasoning in general is distinct from the experience of a self. For example, in an earlier post, I talked about the mechanisms by which one may learn to carry out arithmetical reasoning by internalizing a set of rules about how to manipulate numbers; and then later, about how Kahneman’s “System 2” represents a type of reasoning where different subsystems are chaining together their outputs through working memory. So we certainly reason, and that reasoning does provide us with reasons for our behavior, but I see no need to assume that the reasoning would require a self.
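As a rough illustration of what “chaining together outputs through working memory” might look like, here is a toy sketch. It is my own simplification—the step names and the shared-dictionary workspace are inventions for this example, not anyone’s actual model of System 2:

```python
# Toy sketch of subsystems chaining their outputs through working memory:
# each step knows only one small operation and communicates solely by reading
# from and writing to a shared workspace.

def parse_problem(wm):
    # A "parsing" step turns the verbal question into operands.
    a, b = wm["question"].split("+")
    wm["operands"] = (int(a), int(b))

def apply_addition_rule(wm):
    # An "arithmetic rule" step acts only on what it finds in working memory.
    a, b = wm["operands"]
    wm["sum"] = a + b

def report_answer(wm):
    # A "verbal report" step reads the result back out.
    wm["answer"] = f"{wm['question']} = {wm['sum']}"

working_memory = {"question": "17+25"}
for step in (parse_problem, apply_addition_rule, report_answer):
    step(working_memory)  # no central decision-maker, just a shared workspace

print(working_memory["answer"])  # 17+25 = 42
```

Nothing in this sketch corresponds to a self; the “reasoning” is just specialized steps posting intermediate results where the next step can pick them up.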
This is unrelated to my main point, but the brain showing some behaviour in an ‘abnormal’ situation does not mean the same behaviour exists in the ‘normal’ situation. In particular, the theory that there are multiple subsystems doing their own thing might make sense in the case of the person with anosognosia or the person experiencing a binocular rivalry illusion, but it does not follow that the normal person in a normal situation also has multiple subsystems in the same way.
I agree that abnormal situations by themselves are not conclusive evidence, yes.
This makes sense.