Well, I’ve managed to disabuse myself of most cognitive biases clustered around the self-serving bias, by deliberately training myself to second-guess anything that would make me feel comfortable or feel good about myself. This one worked pretty well; I tend to be much less susceptible to ‘delusional optimism’ than most people, and I’m generally the voice of rational dissent in any group I belong to.
I’ve tried to make myself more ‘programmable’ - developed techniques for quickly adjusting my “gut feelings” / subconscious heuristics towards whatever ends I anticipated would be useful. This worked out really well in my 20s, but after a while I wound up introducing a lot of bugs into it. As it turns out, when you start using meditation and occult techniques to hack your brain, it’s really easy to lock yourself into a stable attractor where you no longer have the capacity to perform further useful programming. Oops.
I’m not sure if this advice will be useful to you, but I think what I’d do in this situation would be to stick to standard techniques and avoid the brain hacking, at least for now. “Standard techniques” might be things like learning particular skills, or asking your friends to be on the lookout for weird behavior from you.
One other thing—though I may have misunderstood you here—you say you’ve trained yourself not to feel good about yourself in certain circumstances. To compensate, have you trained yourself to feel better about yourself in other circumstances? I’d guess there’s an optimal overall level of feeling good about yourself, and our natural overall level is probably about right (but not necessarily right about the specifics of what to feel good about).
No, in general I’ve trained myself to operate, as much as possible, with an incredibly lean dopamine mixture. To hell with feeling good; I want to be able to push on no matter how bad I feel.
(As it turns out, I have limits, but I’ve mostly trained to push through those limits through shame and willpower rather than through reward mechanisms, to the point that reward mechanisms generally don’t even really work on me anymore—at least, not to the level other people expect them to).
A lot of this was a direct decision, at a very young age, to never exploit primate dominance rituals or competitive zero-sum exchanges to get ahead. It’s been horrific, but… the best metaphor I can give is from a story called “The Ones Who Walk Away from Omelas”.
Essentially, you have a utopia that is powered by the horrific torture and suffering of a single innocent child. At a certain age, everyone in the culture has it explained to them how the utopia works, and is given two choices: commit fully to making the utopia worth the cost of that kid’s suffering, or walk away from utopia and brave the harsh world outside.
I tried to take a third path, and say “fuck it. Let the kid go and strap me in.”
So in a sense, I suppose I tried to replace normal feel-good routines with a sort of smug moral superiority, but then I trained myself to see my own behavior as smug moral superiority so I wouldn’t feel good about it. So, yeah.
Are you sure this is optimal? You seem to have goals but have thrown away three potentially useful tools: reward mechanisms, primate dominance rituals and zero-sum competitions. Obviously you’ve gained grit.
Optimal by what criteria? And what right do I have to assign criteria for ‘optimal’? I have neither power nor charisma; criteria are chosen by those with the power to enforce an agenda.
By the same right that you assign criteria according to which primate dominance rituals or competitive zero-sum exchanges are bad.
Some people might value occupying a particular mental state for its own sake, but that wasn’t what I was talking about here. I was talking purely instrumentally—your interest in existential risk suggests you have goals or long term preferences about the world (although I understand that I may have got this wrong), and I was contemplating what might help you achieve those and what might stand in your way.
Just to clarify—is it my assessment of you as an aspiring utility maximizer that I’m wrong about, or am I right about that but wrong about something at the strategic level? (Or am I fundamentally misunderstanding your preferences?)
Problem being that Omelas doesn’t just require that /somebody/ be suffering; if it did, they’d probably take turns or something. It’s some quality of that one kid.
Which is part of where the metaphor breaks down. In our world, our relative prosperity and status don’t require that some specific, dehumanized ‘Other’ be exploited to maintain our own privilege—they merely require that someone be identified as ‘Other’ and that some kind of class distinction be created; then natural human instincts take over and ensure that marginal power differentials are amplified into a horrific and hypocritical class structure. (Sure, it’s a lot better than it was before, but that doesn’t make it “good” by any stretch of the imagination.)
I have no interest in earning money by exploiting the emotional tendencies of those less intelligent than me, so Ialdabaoth-sub-1990 drew a hard line around jobs (or tasks or alliances or friendships) that aid people who do such things.
More generally, Brent-sub-1981 came up with a devastating heuristic: “any time I experience a social situation where humans are cruel to me, I will perform a detailed analysis of the thought processes and behaviors that led to that social situation, and I will exclude myself from performing those processes and behaviors, even if they are advantageous to me.”
It’s the kernel of my “code of honor”, and at this point it’s virtually non-negotiable.
It is not, however, particularly good at “winning”.