Is Getting More Utilons Your True Acceptance?
Meta: Inspired by The Least Convenient Possible World, I asked the person who most criticized my previous posts for help writing a new one, since that seemed very inconvenient, especially because the whole thing was already written. He agreed and suggested I begin by posting only a part of it here, and wait for the comments before further changing the rest of the text. So here are the beginning and one section, and we’ll see how it goes from there. I have changed the title to better reflect the only section presented here.
This post will be about how random events can preclude, or steal attention from, the goals you set up to begin with, and about how hormonal fluctuation inclines people to change some of their goals over time. A discussion follows on how to act more usefully given that, taking into consideration the likelihood of a goal’s success in terms of its difficulty and length.
In it I suggest a new bias, the Avoid-Frustration bias, which is composed of a few others:

A self-serving bias in which loss aversion manifests as postponing one’s goals, thus avoiding frustration through wishful thinking about far futures, big worlds, immortal lives, and, in general, high numbers of undetectable utilons.

It can be thought of as a kind of cognitive dissonance, though that term doesn’t do justice to the specific properties and details of how this kind, in particular, seems to me to have affected the lives of Less Wrongers, transhumanists, and others. Probably in a good way; more on that later.
Sections will be:

- What Significantly Changes Life’s Direction (lists)
- Long Term Goals and Even Longer Term Goals
- Proportionality Between Goal Achievement Expected Time and Plan Execution Time
- A Hypothesis On Why We Became Long-Term Oriented
- Adapting Bayesian Reasoning to Get More Utilons
- Time You Can Afford to Wait, Not to Waste
- Reference Classes That May Be Avoid-Frustration Biased
- The Road Ahead
[Section 4 is shown here]
4. A Hypothesis On Why We Became Long-Term Oriented
For anyone who has rejoiced in the company of the writings of Derek Parfit, George Ainslie, or Nick Bostrom, there are a lot of very good reasons to become more long-term oriented. I am here to ask you about those reasons: Is that your true acceptance?
It is not for me. I became longer-term oriented for different reasons. Two obvious ones are genetics expressing in me the kind of person who waits a year for the extra marshmallow while fantasizing about marshmallow worlds and rocking horse pies, and, secondly, wanting to live thousands of years. But the one I’d like to suggest might be relevant to some here is that I was very bad at making people who were sad or hurt happy. I was not, as they say, empathic. It was a piece of cake bringing folks from a neutral state to joy and bliss. But if someone got angry or sad, especially sad about something I did, I would be absolutely powerless about it. This is only one way of not being good with people, of not being a people person, etc. So my emotional system, like the tale’s Big Bad Wolf, blew, and blew, and blew, until my utilons were comfortably sitting aside in the Far Future, where none of them could look back at my face, cry, and point to me as the cause of the tears.
Paradoxically, though understandably, I have since been thankful for that lack of empathy towards those near. In fact, I have claimed, somewhere I forget, that it is the moral responsibility of those with less natural empathy of the giving-to-beggars kind to care about the far future, since so few are within this tiny psychological mindspace of being able to care abstractly while not caring that much visibly/emotionally. We are such a minority that foreign aid seems to be the most disproportional item of public policy between countries (Savulescu, J., Genetically Enhance Humanity or Face Extinction, 2009 video). Just as the minority of billionaires ought to be more like Bill Gates, Peter Thiel, and Jaan Tallinn, the minority of underempathic folk ought to be more like an economist doing quantitative analysis to save or help in quantitative ways.
So maybe your true acceptance of long-term orientation, like mine, was something like: Genes + Death sucks + I’d rather interact with people of the future, whose bots in my mind smile, than with the actual meaty folk around me, with all their specific problems, complicated families, and boring Christian relationship problems. This is my hypothesis. Even if true, notice it does not imply that long-term orientation isn’t rational; after all, Parfit, Bostrom, and Ainslie are still standing, even after careful scrutiny.
[Ideas on how to develop other sections are as appreciated as commentary on this one]
It’s unclear to me in what sense it’s useful to use the phrase “new bias,” especially for something you describe in terms of known biases. This kind of writing has negative connotations to me, generally speaking because it suggests that you care more about being the first person to come up with an idea than you do about whether it’s a useful idea, and specifically speaking because it sort of makes you sound just a little bit like a crackpot.
Interesting. I would never have thought of that interpretation. My take on the memetics of ideas is that people should come up with names for things, and usefulness (along with other memetic pressures) will later sift what will become the norm and what won’t. Thanks for the heads up! I’ll change it to something more neutral.
That’s an interesting story about how you came to focus on long-term goals. I wonder how many other people have similar stories. I predict that people have a wide variety of stories about how their goals and values evolved. Is the rest of your post written for people who have a similar story to you? If so, it will be of limited use to people who don’t have a similar story. And if you believe that many people have a similar story, you should have evidence for that.