The true degree of our emotional disconnect

If I said that human fears are irrational, because you are probably more afraid of sleeping in an abandoned house than of driving to work, I would hardly be covering new ground. I thought I had understood this well before finding LessWrong: some threats are programmed by evolution to be scary, so we are greatly afraid of them; some threats aren’t, so we are only a little afraid of those. Simple enough.
But is that actually true? Am I, in fact, afraid of those threats? Am I actually afraid, at all, of dying while traveling, of Climate Change, of nuclear war, or of unfriendly AI?
The answers are no, a little bit, just barely, and nope, and the reason for that ‘barely’ has nothing to do with the actual scope of the problem, but rather with my ability to roughly visualize (accurately or not) the event, thanks to its portrayal in media. As for Climate Change, the sole reason why I am somewhat afraid is that I’ve been telling myself for the better part of my life that it is by far humanity’s biggest problem.
In truth, the scope of a problem doesn’t seem to have a small impact on our sensitivities; it seems to have none at all. And this is a symptom of a far more fundamental problem. The inspiration for writing this came when I pondered the causes of Signaling. Kaj_Sotala opens his article The Curse of Identity with the following quote:
So what you probably mean is, “I intend to do school to improve my chances on the market”. But this statement is still false, unless it is also true that “I intend to improve my chances on the market”. Do you, in actual fact, intend to improve your chances on the market?
I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market… which is not really motivating enough to actually DO the work.
The reason for this, I realized, is not that the motivation of Signaling – to appear to be the sort of person who does certain stuff – is larger than I had thought, but that the motivation to do the thing it is based on is virtually non-existent outside the cognitive level. If I visualize a goal I have right now, I don’t seem to feel any emotional drive to work on it. At all. It is really a bit scary.
The common approach to dealing with Signaling seems to be either to overrule emotional instincts with cognitive choices, or to compromise: finding ways to reward status-seeking instincts with actions that also advance the respective cognitive goal. But if it is true that we are starting from zero, why not instead try to create emotional attachment, as I did with Climate Change?
I will briefly raise the question of whether being more afraid of significant threats is actually a good thing. I have heard the argument that it is bad, given that fear causes irrationality and hasty decision-making; I’d assess that to be true in a very limited context, but not when applied to life decisions made with sufficient time. As with every problem of map and territory, I think it would be nice if the degree to which one is afraid correlated with reality, which often enough it doesn’t. More rational fear may also mean less irrational fear. Maybe. I don’t know. If you have no interest in raising your fear of rational threats, I’d advise skipping the final paragraph.
Take a moment to try to visualize what will happen in the case of unfriendly AI – or another X-risk of your choice. Do it in a concrete way. Think through the steps that might occur and that would result in your death. Would you have time to notice it? Would there be panic? An uprising? Chaos? You may be noticing now how hard it is to be afraid, even if you are trying, and even if the threat is so real. Or maybe you succeeded. Maybe it can be a source of motivation for you. Because the other way doesn’t work: attempting to connect a goal’s distant end to an emotional reward fails because of that distance. You want to achieve the goal, not the first step that would lead you there. But fear doesn’t have this problem. Fear will motivate you immediately, without caring that the road is long.