Generating these pleasant feelings is, on a preconscious level, the desire motivating reasoning.
I haven’t even talked about actual motivated reasoning in this post… barely touched on it. What I’m talking about here is something you might think of as “pre-biased reasoning”—that is, before you even consciously perform any reasoning, your brain has to generate hypotheses… and these are based on manipulation of existing memories… which are retrieved in emotion-biased sequences.
This description is a hell of a lot more low-level than an idea like “unconsciously trying to generate pleasant emotions”. Also, that phrasing attributes motivation and a thinking process to the unconscious… which is pure projection. The unconscious is not a “mind”, in the sense of having intentions of the sort we attribute to ourselves and to other humans.
When I get to the Savant/Speculator distinction, that part will hopefully be a lot clearer.
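To make the “pre-biased reasoning” idea concrete, here’s a toy sketch of my own (not anything from the post, and the names and numbers are invented): memories carry an emotional valence, retrieval is weighted toward memories matching the current mood, and the “hypotheses” handed to conscious reasoning are built only from whatever happened to surface.

```python
import random

# Toy illustration (my construction, purely for intuition): hypothesis
# generation as emotion-biased memory retrieval. The conscious reasoner
# only ever sees candidates built from memories that mood-congruent
# retrieval happened to surface, so the reasoning is "pre-biased"
# before any deliberate thinking occurs.

# Each memory is (content, emotional valence in [-1, +1]).
memories = [
    ("praise from my last talk", +0.8),
    ("the bug I shipped in March", -0.7),
    ("a colleague's compliment", +0.5),
    ("a harsh review", -0.9),
]

def retrieve(current_mood, k=2):
    """Sample k memories, weighted toward valences near the current mood."""
    weights = [max(0.05, 1 - abs(current_mood - valence))
               for _, valence in memories]
    return random.choices([m for m, _ in memories], weights=weights, k=k)

def generate_hypotheses(current_mood):
    """'Hypotheses' are recombinations of whatever memories surfaced."""
    return [f"explanation drawing on: {m}" for m in retrieve(current_mood)]
```

In this sketch, a cheerful mood makes the harsh review much less likely to surface at all, so no amount of careful downstream reasoning ever weighs it; that is the sense in which the bias precedes the reasoning.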
Also, are you taking as a premise something like the James-Lange theory of emotions? What about something like Reich’s theory of muscular armor?
Not as a premise, no, although there may be similarities in our conclusions.
However, I’m not in full agreement with the idea that you can generate emotions through muscular action, partly because I see physical action as being caused by emotion (rather than being emotion as such) and partly because an existing emotion can easily dominate the relatively weak influence of working from the outside in.
I also know that, Reich to the contrary, muscular armor can be dropped through mental work alone: body awareness is required, at least to be able to tell if you’re doing things right, but you don’t necessarily need to do anything particularly physical.
The “efficiency” objection to somatic markers and James-Lange is nonsense, however. If the purpose of an emotion is to prepare the body for action, then it’s not “inefficient” to send the information out to the periphery—it’s the purpose!
It’s the part where we infer emotions from that information coming back that’s the kludge, because we only needed that information once we became social creatures… and even then, we already had the communication taking place via the external action.
Hell, I’m not sure evolution had any reason for us to know what our own emotions are in the first place; that would certainly explain why we have to learn to interpret them, and why people vary widely in their ability to do so.
Whew. I think my next post is going to need to work on demolishing the self-applied mind projection fallacy. A massive amount of popular psychology (and not a small amount of actual psychology) is based on a flawed model of how our minds work, and you have to dismantle that model before you can see how the mind really works. It’s about as big a leap as quantum mechanics, really, in the sense that it makes no sense at all to our intuitions about the classical world.
Basically, consciousness and the perception of free will are actually side-effects of the same brain functions that allow us to believe in disembodied minds. We believe that we decide things for the simple reason that our brain also believes that other people decide things: it’s part of our in-built theory of mind.
Hell, I’m not sure evolution had any reason for us to know what our own emotions are in the first place; that would certainly explain why we have to learn to interpret them, and why people vary widely in their ability to do so.
This point is incisive, has important consequences for rationality, and deserves a post (by somebody).
Whew. I think my next post is going to need to work on demolishing the self-applied mind projection fallacy. A massive amount of popular psychology (and not a small amount of actual psychology) is based on a flawed model of how our minds work, and you have to dismantle that model before you can see how the mind really works. It’s about as big a leap as quantum mechanics, really, in the sense that it makes no sense at all to our intuitions about the classical world.
Do you plan to replace the flawed model with whatever model the ‘Savant/Speculator distinction’ comes from? If so, perhaps consider another post explaining and validating that system first? Google seems unwilling to tell me what on earth you are talking about. Book? Paper? Link of some kind?