Huh. Well, this sort of competitive outcome is something I prefer not to emphasize, but the general idiom here sounds like it could be important.
I confess that when I first read this paragraph:
Here I was, with a post that allowed me to stop rationalizing reasons for why spreading money was good, and instead spread them because I was honestly selfish and just buying a good feeling. Now, I didn’t need to worry about being irrational in having diversified donations. So since it was okay, I logged in to PayPal
I actually sat up and said “What on Earth?” out loud.
But I can see the causality. Removal of pressure → removal of counterpressure → collapse of irrationality. If it’s okay to be irrational, then it’s okay to acknowledge that behavior X is irrational, and then you stop doing it.
Well. I shall remember this when dealing with theists.
Removal of pressure → removal of counterpressure → collapse of irrationality.
This is precisely what I mean when I say that getting rid of people’s negative motivation to accomplish a goal (i.e., the “I have to do this or else I’m a bad person” motivation) is critical to ending chronic procrastination… and even to removing the sense of “struggling” in a non-procrastinator.
It’s counterintuitive, but true. The hypothesis in my model is that there’s a bug in our cognitive architecture… what I call the “perform-to-prevent” bug.
Our avoidance-motivation system—the “freeze or flight” system, if you will—is not designed to support sustained action, or really to take any positive action at all. It’s designed to make us avoid things, after all! So sustained activation leads to avoidance behaviors (rationalizing, procrastinating) rather than the desired positive action, even though to our logical conscious minds it seems like it should do the opposite. So we push ourselves MORE… which makes things worse!
The only point at which negative motivation works well is when the threat is imminent enough to feel like the action you’re taking is actually “running away” from the threat. Otherwise, the system seems to want to just “hide and wait for the predator to give up”.
(Of course, what differs from person to person is their internal model of the “threat”, and some people’s threats are other people’s minor annoyances not even worth thinking about. Seligman’s 3 P’s and the Dweck Fixed/Growth mindsets play a big part here as well.)
(EDIT: It occurred to me after posting this that it might not be clear that I’m not comparing Kaj Sotala’s situation to procrastination per se. I’m only using it as a springboard to illustrate how negative motivation—specifically, the kind that draws on lowered personal status/esteem connected with an action—produces counterintuitive and irrational behaviors. Kaj’s situation isn’t the same as procrastination, but the diagram Eliezer drew does precisely match the pattern I see in chronic procrastination and its treatment.)
This is VERY relevant to my argument that it’s OK to lie, because if you think it’s not OK to lie, you won’t allow yourself to see that the convenient thing to say might not be the truth… or even to look at it hard enough to check whether it’s the convenient thing to say.
I don’t disagree with the argument, but I don’t think it holds for all people—I for one have a taste for believing heresies that I find myself having to fight.
Mike’s argument applies fairly independently of one’s tastes. The premise is just that what you find yourself motivated to say differs, in some instances or others, from what the evidence best suggests is true. Your non-truth-based speech motive could be to avoid hurting someone’s feelings, or to assert that that clever heresy you were advocating is indeed a good line of thought, or … any of the other reasonable or unreasonable pulls that cause us humans to want to say some things and avoid others.
OK, so I guess I should have said “applies a lot less to some people”. Also, this seems like one of those cases where one bias might cancel out another; fighting bias with bias means I’m in murky waters, but in the context of this thread we might already be in those murky waters.
ETA: From a cached selves point of view, it seems like building emotional comfort with lying might completely obviate the effect where false statements cause later beliefs that are consistent with those statements (and therefore false), or it might not (e.g., because you don’t perfectly remember what was a lie and what was honest). If not, then that seems like a serious problem with lying. Lying while in denial of one’s capacity to lie is even worse, but the bad effect from more lying might outweigh the good effect from more comfortable lying.
I know this has been discussed before but it deserves a top-level post.
We need to think about categories of lies. Some of them will not help us believe the truth.
I’ve long felt that I can avoid lying better than most because I’m good at finding things that are technically true and make people feel good, without denying the truths that are uncomfortable.
This logic also suggests we benefit from spending time in groups where convenient things to say are different.