I don’t think preference directly influences motivation, so much as preference just represents the desires of our idealized self-image. That is, it’s what we want to be motivated by.
What I mean by preference is a valuation of how I want the world to be. It’s not about cognitive ritual, although cognitive ritual, as a part of the world, may also be mentioned there. Preference is not the sort of thing that does anything, it is a statement of what I think should be done. Through the activity of the mind, the way preference is may influence other things, and conversely, other things may influence preference, as in the case of wireheading, for example. (This is an invitation to synchronize the definitions.)
Some motivations we’re fine with identifying as part of our self-image, so they are also our preferences. But this is not a case of preference influencing motivation; in fact, it’s the other way around!
This is a statement opposite in connotation to the one that triggered my comment in the first place; see here. How do you recognize which motivations you choose to identify with, and which you don’t? I guess in this model, the criterion may be said to derive from that very preference stuff.
In general, I say that akrasia is really anosognosia of the will: i.e. believing that your preference should motivate you, and creating elaborate explanations for why it doesn’t, when in fact it’s barely related.
How is this a fact? From my perspective, we are groping in the dark at this point, so any statement should either be intuitive (raw material to build upon, generated from being primed by a representative sample of data, to give any chance of showing true regularities), or study the few true regularities that can be supported.
I don’t understand the relation between preference, motivation, shouldness, influence, and facts that you are making in the above quoted sentence.
Motivations exist in a different schema, and have to be created, modified, or deleted independently of our preferences.
What’s a ‘schema’, what kind of object is this motivation thing that can be created or deleted? Are there many of them if one can be created and deleted? What role do they play in the cognitive algorithm? If there is no relation between preferences and these motivation instances, what is the role played respectively by preference and emotion in the overall algorithm?
What I mean by preference is a valuation of how I want the world to be. It’s not about cognitive ritual, although cognitive ritual, as a part of the world, may also be mentioned there. Preference is not the sort of thing that does anything, it is a statement of what I think should be done. Through the activity of the mind, the way preference is may influence other things, and conversely, other things may influence preference, as in the case of wireheading, for example.
I don’t understand what you mean by “cognitive ritual”.
This is a statement opposite in connotation to the one that triggered my comment in the first place; see here. How do you recognize which motivations you choose to identify with, and which you don’t? I guess in this model, the criterion may be said to derive from that very preference stuff.
I couldn’t make heads or tails of that comment, sorry. I’m not entirely sure I understand what you wrote here, either, except that it sounds like you think we “choose” to identify with things. My observation is that choice is not the default—we have the ability to choose, but mostly, we don’t use it, and when we think we are, we are mostly lying to ourselves.
This doesn’t much connect to standard theories or intuition, for the same reason that relativity doesn’t: it’s correct over a wider range of conditions than our default intuitions. If you view minds through a mechanical lens, their behaviors don’t require such complex explanations.
How is this a fact? From my perspective, we are groping in the dark at this point, so any statement should either be intuitive (raw material to build upon, generated from being primed by a representative sample of data, to give any chance of showing true regularities), or study the few true regularities that can be supported.
I say that it’s a fact our preferences are barely related to our motivations because it’s trivial to show that they function independently—you’ve pointed this out yourself. That most people fail to change their motivation by modifying their preferences is more than sufficient to demonstrate the lack of connection in practice between these two brain functions. (See also the near/far distinction.)
I don’t understand the relation between preference, motivation, shouldness, influence, and facts that you are making in the above quoted sentence.
By “should” I mean expecting that merely having a preference will automatically mean you have a corresponding motivation, or that the inability to enforce your preference over your motivation equals a personal failure; in fact, it merely reflects the “design” parameters of the systems involved. There is no evolutionary reason for us to have control over our motivations, since they exist to control us: to shape us to the world we find ourselves in.
What’s a ‘schema’, what kind of object is this motivation thing that can be created or deleted?
By schema here, I’m referring to “near” vs. “far” thinking. Action versus abstraction.
A motivation is simply an emotional response attached to an outcome or behavior, through conditioning or simple association.
Are there many of them if one can be created and deleted?
Yep.
What role do they play in the cognitive algorithm?
They drive the planning process, which we experience as “motivation”. See, for example, the video I and others have linked here before, which demonstrates how to induce a (temporary) motivation state to clean your desk. There is a lot of deep theory behind that video, virtually none of which is present in the video.
If there is no relation between preferences and these motivation instances, what is the role played respectively by preference and emotion in the overall algorithm?
If you really care about deep understanding of the “cognitive algorithm”, you would be well advised to read “NLP Volume I”, which explains the model I use quite well. As its subtitle calls it, “the study of the structure of subjective experience”—i.e., what algorithms feel like from the inside.
The motivation video I made demonstrates one simple algorithm (“strategy” in NLP lingo) that is conveyed in terms of sensory representation (“near” thinking) steps. This is because most of our actual cognitive processing consists of manipulating sensory data, both in and out of consciousness. Verbal processing gives us flexibility and suggestibility, but a huge part of our outward verbalization is devoted to making up plausible explanations and things that make us sound good. And it is driven by our motivations (including hard-wired and trained status motivations), rather than being a source of motivations.
The distinction can be easily seen in my video, as I demonstrate using verbal thinking merely to suggest and “lead” the near system to evoke certain sensory representations in visual and kinesthetic form, rather than by trying to “talk” one’s self into doing something through logic or slogan.
Btw, a lot of the disconnect you’re experiencing with my writing is simply this: if you care more about theory than practice, you need to read a hell of a lot more than what I write in order to understand what I’m writing about.
I’ve been studying NLP in my spare time for around 20 years now, and there is absolutely no way I can teach that entire field of study in off-hand comments. Since most people are more interested in practice than theory, I focus my writing to have the least amount of theory that’s needed to DO something, or at least come to an understanding of why what you’re already doing doesn’t work.
If you insist on implementation-quality theory, and you don’t “get” representational systems and strategies as the primary model of all behavior (internal as well as external), you’re not going to “get” what I write about, because I presuppose that that model is the closest thing we have to a functional theory of mind, from a practical-results perspective. There is nothing in mainstream cognitive psychology that remotely approaches the usefulness of NLP as a model of subjective experience and behavior, which likely means there’s nothing approaching its accuracy as an operational model.
(Disclaimer: Popular depictions of NLP are ridiculously shallow, so anyone who hasn’t read “NLP, Volume I” or “The Structure Of Magic I”, stands a very strong chance of not even remotely knowing what NLP actually is. Even some supposedly-certified “practitioners” have no clue, treating the theory as something they just had to learn to get their certificate, alas. Having a bit more epistemic hygiene probably would be helpful to the discipline as a whole… but then, you can say that about most fields.)
I have copies of The Structure of Magic, Volumes I and II (Hardcover, 1975) to give away. If you want them, please contact me privately. Preference given to those who will either travel to my home in San Rafael, CA, to pick them up or who will attend the next OB/LW meetup in the Bay Area (because then I do not have to pay shipping costs).
The fact that I own the volumes should not be taken as an endorsement of them. In fact, I tend to suspect that Eliezer and those about as smart, knowledgeable, and committed to understanding intelligence are better off not wasting their time on NLP, and that they should stick to ev psych and hard cognitive science and neuroscience instead.
What I mean by preference is a valuation of how I want the world to be. It’s not about cognitive ritual, although cognitive ritual, as a part of the world, may also be mentioned there. Preference is not the sort of thing that does anything, it is a statement of what I think should be done. Through the activity of the mind, the way preference is may influence other things, and conversely, other things may influence preference, as in the case of wireheading, for example.
I don’t understand what you mean by “cognitive ritual”.
A particular algorithm, or a property thereof, that your mind currently runs. For example, a cognitive ritual of following causal decision theory to determine your actions may result in two-boxing in Newcomb’s problem.
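As a toy illustration of that example (the payoff numbers are the standard Newcomb values; this is just a sketch of the dominance reasoning, not anyone’s actual decision procedure):

```python
# Toy illustration of the "cognitive ritual" example above: causal
# decision theory (CDT) treats the opaque box's contents as causally
# independent of the current choice, so two-boxing dominates.

def cdt_choice(p_box_full):
    """CDT expected values; p_box_full is held fixed across both acts."""
    # Standard Newcomb payoffs: opaque box holds $1,000,000 or $0,
    # transparent box always holds $1,000.
    ev_one_box = p_box_full * 1_000_000
    ev_two_box = p_box_full * 1_000_000 + 1_000  # dominates for any p
    return "two-box" if ev_two_box > ev_one_box else "one-box"

# Whatever probability CDT assigns to the box being full, two-boxing
# comes out ahead by exactly $1,000, so this ritual outputs "two-box".
for p in (0.0, 0.5, 0.99):
    assert cdt_choice(p) == "two-box"
```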
(I’m going to break the response down, to make shorter and more focused comments.)
Eby: Some motivations we’re fine with identifying as part of our self-image, so they are also our preferences. But this is not a case of preference influencing motivation; in fact, it’s the other way around!
Nesov: How do you recognize which motivations you choose to identify with, and which you don’t? I guess in this model, the criterion may be said to derive from that very preference stuff.
Eby: I’m not entirely sure I understand what you wrote here, either, except that it sounds like you think we “choose” to identify with things. My observation is that choice is not the default—we have the ability to choose, but mostly, we don’t use it, and when we think we are, we are mostly lying to ourselves.
I’m not talking about deliberative choice, I’m talking about the determination of semantics: how are the motivations, to which you refer as the ones you identify with, different from the rest? What property makes some motivations belong to one class, and others to another?
how are the motivations, to which you refer as the ones you identify with, different from the rest? What property makes some motivations belong to one class, and others to another?
In short: whether they’re congruent with your professed values or ideals. More specifically: whether they reflect (or at least don’t conflict with) the image that you wish to present to others.
Of course, “from the inside”, it doesn’t feel like the image you wish to present to others, it just feels like something that is “good”, or at least not “bad”.
That is, if you learn that “good people are honest” (honesty = social value) then you are motivated to appear honest, and you will identify with any motivations you have towards actual honesty. But you may also have motivations that are dishonest… and you will reject those and attribute them to some failure of will, the flesh being weak, etc. etc. in the (evolutionary) hope of persuading others that your error was a temporary failure rather than an accurate portrayal of your behavior.
IOW, motivation is primary, while the identification or disidentification is a secondary function that had to evolve later, after the primary “motivation” machinery already existed.
OK. So, your “preference” is actually Hanson’s “image”. I see your words in the comment as mostly confirming this. Do you hold additional distinctions?
My “preference” is about your own self-interest (which, however, need not be about you), and I suspect that you bundle both it and what I call “motivation” into your usage of “motivation”. Does that sound correct to you?
OK. So, your “preference” is actually Hanson’s “image”. I see your words in the comment as mostly confirming this. Do you hold additional distinctions?
None that are directly relevant to the present context, no. (I do hold more near/far distinctions in general.)
My “preference” is about your own self-interest (which, however, need not be about you), and I suspect that you bundle both it and what I call “motivation” into your usage of “motivation”. Does that sound correct to you?
Not particularly. I’m using motivation largely to refer to the somatic markers (physio-emotional responses) keyed to actions or goal subjects, irrespective of any verbal explanations associated with those markers.
To put it another way, I understand a motivation to be a feeling about a concrete behavioral goal, regardless of how you came to have that feeling. A preference is a feeling about an abstract goal, as opposed to a concrete one.
So, “I prefer excitement over boredom” is distinct from “I am motivated to go rock-climbing today”. The former is an abstraction, and can be developed either top-down (e.g. through learning that excitement is a socially-valued quality) or bottom-up (summarization of prior experience or motivations).
However, even if it is derived by summarizing motivations, the preference is merely descriptive, not prescriptive. It can lead me to consciously try a new “exciting” behavior, but if I turn out not to like it, I will not still be motivated to carry out that behavior.
So, our preferences can lead us to situations that cause us to develop motivations, and we can even have the motivation to try things, because of a preference. We can even develop a motivation based on the feelings a preference may give us—e.g. a person who believes it is “good” to do a certain thing may be able to develop an inherent motivation for the thing, by feeling that “goodness” in association with the thing. Some people do this naturally for some things, others do not.
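A minimal sketch of this descriptive-not-prescriptive relationship (all names and numbers here are my own illustration, not an implementation of any real model):

```python
# Toy sketch of the distinction drawn above: a preference is a valence
# attached to an abstract category; a motivation is a valence attached
# to a concrete behavior. A preference can prompt a trial of a new
# behavior, but the resulting motivation is set by the experienced
# outcome, not by the preference itself.

preferences = {"excitement": +1.0, "boredom": -1.0}  # abstract valences
motivations = {}                                     # concrete valences

def try_behavior(behavior, category, experienced_valence):
    """A preference can prompt a trial; experience sets the motivation."""
    prompted = preferences.get(category, 0.0) > 0    # worth trying?
    if prompted:
        motivations[behavior] = experienced_valence  # descriptive update
    return prompted

# "Excitement" prompts trying rock-climbing, but a bad experience
# leaves no motivation to repeat it, preference notwithstanding.
try_behavior("rock-climbing", "excitement", experienced_valence=-0.8)
assert motivations["rock-climbing"] < 0
```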
(Motivations can also form for reasons opaque to us: I’m still trying to track down what’s led me to such marathons of posting on LW this weekend, or at least figure out how to redirect it into finishing writing my book. I’ve probably written a book in my comments by now!)
I’m using motivation largely to refer to the somatic markers (physio-emotional responses) keyed to actions or goal subjects, irrespective of any verbal explanations associated with those markers.
Empirical regularities, one step toward the subjective from the behavioral: stimulus-response pairs. Not “should”, but “is”. Is this correct? (I’m going to stop asking this question, but you should assume that I meticulously append it to every sentence in which I declare something about your statements, as a way of moving toward a mutual understanding of terms.)
To put it another way, I understand a motivation to be a feeling about a concrete behavioral goal, regardless of how you came to have that feeling.
This confuses me, since you start using a word “feeling”, that has too many connotations, many of them deeper than the no-strings-attached regularity you seem to have just defined “motivations” to be.
A preference is a feeling about an abstract goal, as opposed to a concrete one.
So, there are two kinds of feelings: motivations (empirical stimulus-response pairs), and preferences, whatever that is.
So, “I prefer excitement over boredom” is distinct from “I am motivated to go rock-climbing today”. The former is an abstraction, and can be developed either top-down (e.g. through learning that excitement is a socially-valued quality) or bottom-up (summarization of prior experience or motivations).
So, preferences and motivations are not fundamentally different in your model, but merely the north and south of abstraction in “feelings”, in what the “stimulus-response” pairs are about.
(I drifted off after this point, I need more detailed understanding of the questions above in order to go further.)
Empirical regularities, one step toward the subjective from the behavioral: stimulus-response pairs. Not “should”, but “is”. Is this correct?
Yes! Precisely. In NLP this would be referred to as one “step” in a “strategy”.
This confuses me, since you start using a word “feeling”, that has too many connotations, many of them deeper than the no-strings-attached regularity you seem to have just defined “motivations” to be.
See “Emotions and Feelings: A Neurobiological Perspective” for what I mean by “feelings”. This is also the NLP meaning of the term; i.e., Damasio’s paper supports the NLP model of subjective experience to this extent.
So, preferences and motivations are not fundamentally different in your model, but merely the north and south of abstraction in “feelings”, in what the “stimulus-response” pairs are about.
Yes, and as such, they lead to different practical effects, causing us to applaud and speak in favor of our abstract preferences, while only acting on concrete motivations.
Preferences only influence our behavior when they become concrete: for example, all those experiments Robin Hanson keeps mentioning about people’s donation behavior depending on whether they’ve been primed about in-group or out-group behaviors. That’s basically a situation where “preference” becomes “motivation” by becoming linked to a specific “near” behavior.
In general, preference becomes motivation by being made concrete, grounded in some sensory-specific context. If we have conflicting motivation, but try to proceed anyway, we experience “mixed feelings”—i.e., dueling somatic responses.
Now, the “stimulus-response” pairs are really predictions. The brain generates these responses that create feelings as a preparation to take action, and/or a marker of “good” or “bad”. So if you change your expectation of a situation, your feeling response changes as well.
For example, if I at first enjoy an activity, and then it becomes tedious, I may at some point “go over threshold” (in NLP parlance) and thus conclude that my tedious experiences constitute a better prediction of what will happen the next time I do it. At that point, my mental model is now updated, so I will no longer be motivated to do that activity.
That’s how it works, assuming that I “stack” (i.e. mentally combine) each tedious experience together as part of a trend, versus continuing to consider them as isolated experiences.
(It’s not the only way to change a motivation, but it’s a common one that people naturally use in practice. One NLP intervention for dealing with an abusive relationship, btw, is to teach a person how to mentally stack their representations of the relationship to “change their mind” about staying in it, by stacking up enough representations of past abuse and expected future abuse to create a strong enough feeling response to induce a model change. In general, NLP is not about anything spooky or magical so much as being able to deliberately replicate cognitive processes that other people use or that the same person uses, but in a different context.)
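The stacking-versus-isolated distinction can be sketched as a toy threshold model (the parameter values are made up for illustration; nothing here comes from an NLP source):

```python
# Toy sketch of the "stacking"/threshold idea above: experiences either
# stack into a running trend or are treated as isolated episodes, and
# the predicted valence of an activity only flips once the stacked
# evidence crosses a threshold.

def predicted_valence(experiences, stack=True, threshold=-2.0):
    """Return the current feeling-prediction for an activity."""
    if stack:
        total = sum(experiences)            # combined as one trend
        return -1.0 if total <= threshold else +1.0
    # Isolated: only the latest experience is consulted, so a string
    # of mildly tedious episodes never accumulates into a model change.
    return -1.0 if experiences and experiences[-1] <= threshold else +1.0

tedious = [-0.8, -0.9, -0.7]                # individually sub-threshold
assert predicted_valence(tedious, stack=True) == -1.0   # model updates
assert predicted_valence(tedious, stack=False) == +1.0  # no update
```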
One place where akrasia comes into the picture, however, is when we don’t identify with the motivation to be changed, i.e., it’s not a preference. If you don’t know what you’re getting out of something, you can’t readily decide that you don’t want that thing any more!
This is why most of my interventions for procrastination involve finding out what prediction underlies the feeling response to a specific behavior being sought or avoided. Our feelings almost always occur in response to predictions made by our brains about the likely “near” outcome of a behavior.
These predictions are almost exclusively nonverbal, and brief. They normally “flash” by at subliminal speeds, faster than you can think or speed-read a single word. (Which makes sense, given that the same machinery is likely used to turn words into “meaning”!) You can learn to observe them, but only if you can quiet your verbal mind from “talking over them”. But once you see or hear them, you can hold them in consciousness for examination and modification.
It is these unconscious predictions that produce feeling responses, and thereby direct our actions. (They are probably also the basis of “priming”.)
When I or my students successfully change these unconscious representations, the corresponding behavioral motivation also changes. If a technique (whether it be one of mine or one of someone else’s) does NOT successfully change the representation, the motivation does not change, either. If one technique doesn’t work, we try another, until the desired result is achieved.
This is why I don’t care much about theory—if my hypothesis is that technique #1 will change representation X, and I’m mistaken, it only takes another few minutes to try technique #2 or #3. It’s catching the representations in the first place that’s much harder to do on your own, not the actual application of the techniques. I’ve gotten pretty good at guessing what techniques work better for what when I do them on other people, but oddly not as much on myself… which suggests that the certainty I appear to have may have more impact than the specific choice of technique.
Is that a placebo effect? Don’t know. Don’t care. As long as my student can also produce successes through self-application, and get whatever other results they’re after, what difference does it make?
I should note, however, that I personally never got any good results from any self-help techniques until I learned to 1) act “as if” a technique would work, regardless of my intellectual opinions about its probability of working, and 2) observe these sorts of unconscious thoughts. So even if it doesn’t matter how you change them, the ability to observe them appears to be a prerequisite.
(Also, these sorts of thoughts sometimes come out randomly in talk therapies, journalling, etc. Some particularly good therapists ask questions that tend to bring them out, and the NLP “Structure Of Magic” books were an attempt to explain how those therapists knew what questions to ask, given that the therapists each belonged to completely different schools of therapy. I use an extremely restricted subset of similar questions in my work, since I focus mainly on certain classes of chronic procrastination, and its patterns are very regular, at least in my experience.)
I say that it’s a fact our preferences are barely related to our motivations because it’s trivial to show that they function independently—you’ve pointed this out yourself. That most people fail to change their motivation by modifying their preferences is more than sufficient to demonstrate the lack of connection in practice between these two brain functions.
Being separate is far from the same thing as being independent, or having no connection with each other. It is only grounds for introducing a concept, for making a distinction.
Also, at this point we are entering a territory where our definitions are still at odds, for example I expect that the sense in which I talk about preferences being modified is significantly different from what you mean by that. The place of that discussion is in this linked thread.
Now, line-by-line:
What I mean by preference is a valuation of how I want the world to be. It’s not about cognitive ritual, although cognitive ritual, as a part of the world, may also be mentioned there. Preference is not the sort of thing that does anything, it is a statement of what I think should be done. Through the activity of the mind, the way preference is may influence other things, and conversely, other things may influence preference, as in the case of wireheading, for example. (This is an invitation to synchronize the definitions.)
This is a statement in connotation opposite to one that triggered my comment in the first place, see here. How do you recognize which motivations you choose to identify with, and which you don’t? I guess in this model, the criterion may be said to derive from that very preference stuff.
How is this a fact? From my perspective, we are groping in the dark at this point, so any statement should either be intuitive, as raw material to build upon, generated from being primed by a representative sample of data, to give any chance of showing true regularities, or study the few true regularities that can be supported.
I don’t understand the relation between preference, motivation, shouldness, influence, and facts that you are making in the above quoted sentence.
What’s a ‘schema’, what kind of object is this motivation thing that can be created or deleted? Are there many of them if one can be created and deleted? What role do they play in the cognitive algorithm? If there is no relation between preferences and these motivation instances, what is the role played respectively by preference and emotion in the overall algorithm?
I don’t understand what you mean by “cognitive ritual”.
I couldn’t make heads or tails of that comment, sorry. I’m not entirely sure I understand what you wrote here, either, except that it sounds like you think we “choose” to identify with things. My observation is that choice is not the default—we have the ability to choose, but mostly, we don’t use it, and when we think we are, we are mostly lying to ourselves.
This doesn’t much connect to standard theories or intuition, for the same reason that relativity doesn’t: it’s correct over a wider range of conditions than our default intuitions. If you view minds through a mechanical lens, their behaviors don’t require such complex explanations.
I say that it’s a fact our preferences are barely related to our motivations because it’s trivial to show that they function independently—you’ve pointed this out yourself. That most people fail to change their motivation by modifying their preferences is more than sufficient to demonstrate the lack of connection in practice between these two brain functions. (See also the near/far distinction.)
By “should” I mean, expecting that merely having a preference will automatically mean you have corresponding motivation, or that the lack of ability to enforce your preference over your motivation equals a personal failure—it merely reflects the “design” parameters of the systems involved. There is no evolutionary reason for us to control over our motivations, since they exist to control us—to shape us to the world we find ourselves in.
By schema here, I’m referring to “near” vs. “far” thinking. Action versus abstraction.
A motivation is simply an emotional response attached to an outcome or behavior, through conditioning or simple association.
Yep.
They drive the planning process, which we experience as “motivation”. See, for example, the video I and others have linked here before, which demonstrates how to induce a (temporary) motivation state to clean your desk. There is a lot of deep theory behind that video, virtually none of which is present in the video.
If you really care about deep understanding of the “cognitive algorithm”, you would be well advised to read “NLP Volume I”, which explains the model I use quite well. As its subtitle calls it, “the study of the structure of subjective experience”—i.e., what algorithms feel like from the inside.
The motivation video I made demonstrates one simple algorithm (“strategy” in NLP lingo) that is conveyed in terms of sensory representation (“near” thinking) steps. This is because most of our actual cognitive processing consists of manipulating sensory data, both in and out of consciousness. Verbal processing gives us flexibility and suggestibility, but a huge part of our outward verbalization is devoted to making up plausible explanations and things that make us sound good. And it is driven by our motivations (including hard-wired and trained status motivations), rather than being a source of motivations.
The distinction can be easily seen in my video, as I demonstrate using verbal thinking merely to suggest and “lead” the near system to evoke certain sensory representations in visual and kinesthetic form, rather than by trying to “talk” one’s self into doing something through logic or slogan.
Btw, a lot of the disconnect that you’re experiencing from my writing is simply that if you care more about theory than practice, you need to read a hell of a lot more than what I write, to understand what I’m writing about.
I’ve been studying NLP in my spare time for around 20 years now, and there is absolutely no way I can teach that entire field of study in off-hand comments. Since most people are more interested in practice than theory, I focus my writing to have the least amount of theory that’s needed to DO something, or at least come to an understanding of why what you’re already doing doesn’t work.
If you insist on implementation-quality theory, and you don’t “get” representational systems and strategies as the primary model of all behavior (internal as well as external), you’re not going to “get” what I write about, because I presuppose that that model is the closest thing we have to a functional theory of mind, from a practical-results perspective. There is nothing in mainstream cognitive psychology that remotely approaches the usefulness of NLP as a model of subjective experience and behavior, which likely means there’s nothing approaching its accuracy as an operational model.
(Disclaimer: Popular depictions of NLP are ridiculously shallow, so anyone who hasn’t read “NLP, Volume I” or “The Structure Of Magic I”, stands a very strong chance of not even remotely knowing what NLP actually is. Even some supposedly-certified “practitioners” have no clue, treating the theory as something they just had to learn to get their certificate, alas. Having a bit more epistemic hygiene probably would be helpful to the discipline as a whole… but then, you can say that about most fields.)
I have copies of The Structure of Magic, Volumes I and II (Hardcover, 1975) to give away. If you want them, please contact me privately. Preference given to those who will either travel to my home in San Rafael, CA, to pick them up or who will attend the next OB/LW meetup in the Bay Area (because then I do not have to pay shipping costs).
The fact that I own the volumes should not be taken as endorsement of them. In fact, I tend to suspect that Eliezer and those about as smart, knowledgable and committed to understanding intelligence are better off not wasting their time on NLP and that they should stick to ev psy and hard cognitive science and neuroscience instead.
A particular algorithm, or a property thereof, that your mind currently runs. For example, a cognitive ritual of following causal decision theory to determine your actions may result in two-boxing in Newcomb’s problem.
(I’m going to break the response down, to make shorter and more focused comments.)
I’m not talking about deliberative choice, I’m talking about the determination of semantics: how are the motivations, to which you refer as the ones you identify with, different from the rest? What property makes some motivations belong to one class, and others to another?
In short: whether they’re congruent with your professed values or ideals. More specifically: whether they reflect (or at least don’t conflict with) the image you wish to present to others.
Of course, “from the inside”, it doesn’t feel like the image you wish to present to others, it just feels like something that is “good”, or at least not “bad”.
That is, if you learn that “good people are honest” (honesty = social value) then you are motivated to appear honest, and you will identify with any motivations you have towards actual honesty. But you may also have motivations that are dishonest… and you will reject those and attribute them to some failure of will, the flesh being weak, etc. etc. in the (evolutionary) hope of persuading others that your error was a temporary failure rather than an accurate portrayal of your behavior.
IOW, motivation is primary, while the identification or disidentification is a secondary function that had to evolve later, after the primary “motivation” machinery already existed.
OK. So, your “preference” is actually Hanson’s “image”. I see your words in the comment as mostly confirming this. Do you hold additional distinctions?
My preference is about your own self-interest (which however need not be about you), and I suspect that, together with my usage of the term “motivation”, you bundle them both into your usage of “motivation”. Does that sound correct to you?
None that are directly relevant to the present context, no. (I do hold more near/far distinctions in general.)
Not particularly. I’m using motivation largely to refer to the somatic markers (physio-emotional responses) keyed to actions or goal subjects, irrespective of any verbal explanations associated with those markers.
To put it another way, I understand a motivation to be a feeling about a concrete behavioral goal, regardless of how you came to have that feeling. A preference is a feeling about an abstract goal, as opposed to a concrete one.
So, “I prefer excitement over boredom” is distinct from “I am motivated to go rock-climbing today”. The former is an abstraction, and can be developed either top-down (e.g. through learning that excitement is a socially-valued quality) or bottom-up (summarization of prior experience or motivations).
However, even if it is derived by summarizing motivations, the preference is merely descriptive, not prescriptive. It can lead me to consciously try a new “exciting” behavior, but if I turn out not to like it, I will no longer be motivated to carry it out.
So, our preferences can lead us to situations that cause us to develop motivations, and we can even have the motivation to try things, because of a preference. We can even develop a motivation based on the feelings a preference may give us—e.g. a person who believes it is “good” to do a certain thing may be able to develop an inherent motivation for the thing, by feeling that “goodness” in association with the thing. Some people do this naturally for some things, others do not.
(Motivations can also form for reasons opaque to us: I’m still trying to track down what’s led me to such marathons of posting on LW this weekend, or at least figure out how to redirect it into finishing writing my book. I’ve probably written a book in my comments by now!)
Empirical, one-step-towards-subjective-from-behavioral regularities: stimulus-response pairs. Not “should”, “is”. Is this correct? (I’m going to stop asking this question, but you should assume that I meticulously add it to every sentence in which I declare something about your statements, as a way of moving towards mutual understanding of terms.)
This confuses me, since you start using the word “feeling”, which has too many connotations, many of them deeper than the no-strings-attached regularity you seem to have just defined “motivations” to be.
So, there are two kinds of feelings: motivations (empirical stimulus-response pairs), and preferences, whatever that is.
So, preferences and motivations are not fundamentally different in your model, but merely the north and south of abstraction in “feelings”, in what the “stimulus-response” pairs are about.
(I drifted off after this point, I need more detailed understanding of the questions above in order to go further.)
Yes! Precisely. In NLP this would be referred to as one “step” in a “strategy”.
See “Emotions and Feelings: A Neurobiological Perspective” for what I mean by “feelings”. This is also the NLP meaning of the term; i.e., Damasio’s paper supports the NLP model of subjective experience to this extent.
Yes, and as such, they lead to different practical effects, causing us to applaud and speak in favor of our abstract preferences, while only acting on concrete motivations.
Preferences only influence our behavior when they become concrete: for example, all those experiments Robin Hanson keeps mentioning about people’s donation behavior depending on whether they’ve been primed about in-group or out-group behaviors. That’s basically a situation where “preference” becomes “motivation” by becoming linked to a specific “near” behavior.
In general, preference becomes motivation by being made concrete, grounded in some sensory-specific context. If we have conflicting motivations but try to proceed anyway, we experience “mixed feelings”—i.e., dueling somatic responses.
Now, the “stimulus-response” pairs are really predictions. The brain generates these responses that create feelings as a preparation to take action, and/or a marker of “good” or “bad”. So if you change your expectation of a situation, your feeling response changes as well.
For example, if I at first enjoy an activity, and then it becomes tedious, I may at some point “go over threshold” (in NLP parlance) and thus conclude that my tedious experiences constitute a better prediction of what will happen the next time I do it. At that point, my mental model is now updated, so I will no longer be motivated to do that activity.
That’s how it works, assuming that I “stack” (i.e. mentally combine) each tedious experience together as part of a trend, versus continuing to consider them as isolated experiences.
(It’s not the only way to change a motivation, but it’s a common one that people naturally use in practice. One NLP intervention for dealing with an abusive relationship, btw, is to teach a person how to mentally stack their representations of the relationship to “change their mind” about staying in it, by stacking up enough representations of past abuse and expected future abuse to create a strong enough feeling response to induce a model change. In general, NLP is not about anything spooky or magical so much as being able to deliberately replicate cognitive processes that other people use or that the same person uses, but in a different context.)
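The threshold/stacking mechanism above can be sketched as a toy model (my own illustration, not anything from the NLP literature; the numbers and function names are invented): each experience is either kept isolated or stacked into a running trend, the model’s prediction is the felt quality of the stacked experiences, and motivation simply tracks the sign of that prediction.

```python
# Toy model of "stacking" experiences until the prediction flips.
# Positive values = enjoyable experiences, negative = tedious ones.

def predict_feeling(stacked):
    """The model's prediction: average felt quality of whatever
    experiences have been stacked together as one trend."""
    if not stacked:
        return 0.0
    return sum(stacked) / len(stacked)

def update_model(stacked, experience, stacking=True):
    """Stack a new experience into the trend; if experiences are
    kept isolated (stacking=False), the model never updates."""
    if stacking:
        stacked.append(experience)
    return stacked

# An activity that starts out enjoyable and then turns tedious:
experiences = [1.0, 1.0, -1.0, -1.0, -1.0]

stacked = []
for e in experiences:
    update_model(stacked, e, stacking=True)

# Motivation tracks the sign of the current prediction.
motivated = predict_feeling(stacked) > 0
print(motivated)  # → False: stacked tedium now outweighs the fun
```

Note that with `stacking=False` the model never crosses threshold, which corresponds to continuing to treat each tedious experience as an isolated incident: the motivation persists unchanged.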
One place where akrasia comes into the picture, however, is when we don’t identify with the motivation to be changed, i.e., it’s not a preference. If you don’t know what you’re getting out of something, you can’t readily decide that you don’t want that thing any more!
This is why most of my interventions for procrastination involve finding out what prediction underlies the feeling response to a specific behavior being sought or avoided. Our feelings almost always occur in response to predictions made by our brains about the likely “near” outcome of a behavior.
These predictions are almost exclusively nonverbal, and brief. They normally “flash” by at subliminal speeds, faster than you can think or speed-read a single word. (Which makes sense, given that the same machinery is likely used to turn words into “meaning”!) You can learn to observe them, but only if you can quiet your verbal mind from “talking over them”. But once you see or hear them, you can hold them in consciousness for examination and modification.
It is these unconscious predictions that produce feeling responses, and thereby direct our actions. (They are probably also the basis of “priming”.)
When I or my students successfully change these unconscious representations, the corresponding behavioral motivation also changes. If a technique (whether it be one of mine or one of someone else’s) does NOT successfully change the representation, the motivation does not change, either. If one technique doesn’t work, we try another, until the desired result is achieved.
This is why I don’t care much about theory—if my hypothesis is that technique #1 will change representation X, and I’m mistaken, it only takes another few minutes to try technique #2 or #3. It’s catching the representations in the first place that’s much harder to do on your own, not the actual application of the techniques. I’ve gotten pretty good at guessing what techniques work better for what when I do them on other people, but oddly not as much on myself… which suggests that the certainty I appear to have may have more impact than the specific choice of technique.
Is that a placebo effect? Don’t know. Don’t care. As long as my student can also produce successes through self-application, and get whatever other results they’re after, what difference does it make?
I should note, however, that I personally never got any good results from any self-help techniques until I learned to 1) act “as if” a technique would work, regardless of my intellectual opinions about its probability of working, and 2) observe these sorts of unconscious thoughts. So even if it doesn’t matter how you change them, the ability to observe them appears to be a prerequisite.
(Also, these sorts of thoughts sometimes come out randomly in talk therapies, journalling, etc. Some particularly good therapists ask questions that tend to bring them out, and the NLP “Structure Of Magic” books were an attempt to explain how those therapists knew what questions to ask, given that the therapists each belonged to completely different schools of therapy. I use an extremely restricted subset of similar questions in my work, since I focus mainly on certain classes of chronic procrastination, and its patterns are very regular, at least in my experience.)
Being separate is far from the same thing as being independent, or having no connection with each other. It is only grounds for introducing a concept, for making a distinction.
Also, at this point we are entering territory where our definitions are still at odds; for example, I expect that the sense in which I talk about preferences being modified is significantly different from what you mean by that. The place for that discussion is in this linked thread.