You seem to be making the assumption that discovering a suppressed desire necessarily has negative utility. What’s your rationale for that? ;-)
Yup, I made my statement on that assumption, but I know that negative expected utility isn’t always the case. It’s just that sometimes it’s not clear whether discovering suppressed desires yields positive expected utility.
So, adding desires is not a problem. It’s adding attachments that’s bad.
Good point, didn’t think of that.
But I am not convinced that rationalizing/cognitive dissonance doesn’t help ease (or eliminate) feelings of dissatisfaction induced by attachments in all cases. I think realizing a desire can play a causal role in building attachment for that desire.
> I think realizing a desire can play a causal role in building attachment for that desire.
It might be necessary, but it’s not sufficient. If you have a general belief that life is bad whenever you don’t have everything you want, then yes, definitely. On the other extreme, if you believe that life is just fine as it is, then it’s equally clearly no.
(Also, don’t forget that attachment can exist without desire—I can be attached to getting something done on time, that I don’t actually want to do in the first place!)
In general, children are more likely to believe that it’s bad to not have what they want, now, than adults are. And in general, we might say that being less attached to things is correlated with maturity. So, if you’re going to extrapolate what an older, wiser you would do, it’s probably best to assume less likelihood of having attachment, rather than more. (Note too that there are things you can do to lessen your attachments, but I’m not aware of anything that can cause you to add one, in the absence of generalized must-get-what-i-want beliefs.)
> I am not convinced that rationalizing doesn’t help ease (or eliminate) feelings of dissatisfaction induced by attachments in all cases
This is like saying, “I’m not convinced that painkillers don’t help ease or eliminate the symptoms of cancer”—it’s probably true, and even more probably irrelevant. ;-)
However, we have far more effective (and painless) treatments for attachment than we do for cancer, and they are even easier, more effective, and faster-acting than rationalization.
Just wondering, what treatments do you have for attachments?
The simplest one is just realizing you don’t need the object of the attachment in order to be happy—to realize you can still get your SASS (Status, Affiliation, Safety & Stimulation) needs met without it.
And, do you think some attachments are healthy?
They’re an emergency response mechanism, so using them to respond to actual emergencies is at least within design parameters. Though honestly, I’m not sure how much good they do in emergencies that don’t reflect the ancestral environment… which is probably most emergencies these days.
For any situation where you have enough time to think about the matter, an attachment is counterproductive… because attachments turn off thinking. (Or at least, induce some rather severe forms of tunnel vision.)
When I first started helping people with chronic procrastination, I focused on removing obstacles to working. After a couple of years, I realized that I was doing it backwards; I needed to remove their attachments to getting things done, instead. (Attachments appear to have priority over desire; increasing desire doesn’t seem to help while the attachments are still there.)
Invariably, the result of getting rid of the attachment(s) is that people suddenly begin thinking clearly about what they’re trying to accomplish, and either immediately see solutions of their own, or realize that the solutions their friends or colleagues have been proposing all along are actually pretty good.
So, attachments are not that useful for a modern human, living in civilization.
> The simplest one is just realizing you don’t need the object of the attachment in order to be happy—to realize you can still get your SASS (Status, Affiliation, Safety & Stimulation) needs met without it.
That sounds more like the outcome of a treatment than a treatment by itself.
> That sounds more like the outcome of a treatment than a treatment by itself.
Well, “realizing” is usually the result of sincere questioning (e.g. asking, “Do I really need this?”), such that I tend to equate the two a bit in my mind.
If the answer to your sincere question is that you DO need it, though, then you have to untangle whatever SASS-loaded belief(s) are connected to the thing.
> This is like saying, “I’m not convinced that painkillers don’t help ease or eliminate the symptoms of cancer”—it’s probably true, and even more probably irrelevant. ;-)
Those two are the same if you consider ‘not getting the subject of the desire that you rationalized away’ to be the same as ‘dying from untreated cancer’.
> Those two are the same if you consider ‘not getting the subject of the desire that you rationalized away’ to be the same as ‘dying from untreated cancer’.
If you die from chronic stress due to buried resentments, you’re still dead. It just takes longer.
Good point. But I wonder, can people ever benefit from rationalizing away a desire without loading themselves up with buried resentments? I know the two are certainly correlated, but I would be surprised to find that rationalization didn’t give a strict benefit sometimes. Even though I am ideologically averse to rationalization, I find reality is seldom as black and white as I am with these things.
> But I wonder, can people ever benefit from rationalizing away a desire without loading themselves up with buried resentments?
I think that once again we are having a problem with the definitions of words, rather than the things pointed to by the words.
Since a desire’s payoff matrix is 1,0, I’m not clear on why you would want to rationalize away a desire. I might desire to be a rock star or a famous movie director, but I have no need to rationalize the fact that I will likely be neither, ever.
However, if I felt I couldn’t be happy without being one of those things, then merely rationalizing that I didn’t really want them wouldn’t help. If you banish the thing from your awareness, you can’t actually let go of the attachment.
To be clear: by “rationalize” I assume you mean to use activity in the logical mind to deflect from awareness of the emotional mind, and by “let go of” I mean, “get the emotional mind to decide upon reflection that the attachment is not required”. I consider the latter to be beneficial, and the former not. I wonder if perhaps you are fuzzing these two together.
> Even though I am ideologically averse to rationalization, I find reality is seldom as black and white as I am with these things.
I have found, on the other hand, that viewing things in black and white is a tremendous aid to practical learning. The fool who persists in his folly will become wise, and he who follows a rule of thumb will find the exceptions soon enough.
OTOH, he who gets all the data in advance, will often be confused or lose his motivation to act. Finding counterexamples is a useful mental muscle, but it tends to keep one from actually doing things, since everything useful has some counterexample or counterindication, somewhere.
> I have found, on the other hand, that viewing things in black and white is a tremendous aid to practical learning. The fool who persists in his folly will become wise, and he who follows a rule of thumb will find the exceptions soon enough.
That’s what I like to implement in practice too.
I don’t disagree with you here particularly; I just acknowledge that there is a coherent value system for which the consequences of rationalizing differ in nature as well as degree from the consequences of what (who was it you were discussing with again?) described as ‘rationalizing’. The way I would describe (whatsisname’s) ‘rationalizing’ in your language would be to use what are basically unconscious mind-hacking techniques to actually release the desire for the particular thing by sincerely integrating the ‘rationalization’.
> The way I would describe (whatsisname’s) ‘rationalizing’ in your language would be to use what are basically unconscious mind-hacking techniques to actually release the desire for the particular thing by sincerely integrating the ‘rationalization’.
In which case, we’re indeed quibbling about terminology again.
And still quibbling, because what falls under my definition of “rationalization” is something that can’t be directly processed by the emotional side of the brain, which doesn’t process logic, only connections like “X good” and “Y bad”.
The only way you get that side to agree with the “rationalizing” side is if the rationalizing side uses its logic to construct imagined scenarios that the emotional brain can reduce to simple association.
(Which, by and large, is what all forms of mind hacking and persuasion are—using logic to paint pretty pictures for the emotional brain. Or more effectively, using logic to get the emotional brain to paint its own pictures and draw appropriate conclusions from them, since the brain usually puts up less of a fight against the conclusions it draws from unconscious inference than it does from those obtained by conscious inference or explicit statement.)
> The only way you get that side to agree with the “rationalizing” side is if the rationalizing side uses its logic to construct imagined scenarios that the emotional brain can reduce to simple association.
Some people are fortunate to have wiring that makes this process more or less automatic whenever they rationalize. All else being equal, such individuals may be expected to be more content in a given circumstance but less likely to achieve grand things (which are probably unnecessary for their own emotional wellbeing). I think it would be a bad thing if, say, Eliezer had a natural knack for satisfying his emotional brain with this sort of rationalization. (And this may well be a claim that you disagree with.)
I think that you are still using sufficiently different terms from me that a discussion isn’t really possible without further definition of terms.
Perhaps you should taboo “rationalize”, so I can see if you have a precise and consistent unpacking for that term—as far as I can see, your definition for it appears much more vague, broad, and less technical than my own.
I have a very narrow and precise meaning in mind for it, and if I substitute it into your comment, your comment appears nonsensical, in the manner of tree/forest/sound arguments with an alternate expansion of “sound”.
> I think that you are still using sufficiently different terms from me that a discussion isn’t really possible without further definition of terms.
I don’t think either of us cares enough to bother with that just now. For my part (as is rather common), I was just backing up some other guy on a specific point, and I mostly agree with you.
Where (I think) there may be potential for an interesting discussion in the future is just how often the ‘negative’ emotional adaptations apply to (even) the current environment. Less, obviously, than in the EEA, but I think we would disagree on how much the ‘negative stuff’ applies here and now. I also suggest a relevant selection effect: we pay attention to the consequences of things like anger, rationalization, denial, and even (though I’m extremely hesitant to concede this one) shame mostly when they are maladaptive. When they are actually working to benefit us, we don’t think about them (or bother to go get help from mind-hacking instructors).
As you say, it is the sort of thing where precise definition of the terms is necessary. When (and if) I choose (get around) to publishing any of the rough drafts of posts I have lying around, there are a couple that touch on this kind of area, and I have no doubt you could provide a useful critique!