AGI will either cause doom or create a utopia. Everything else seems unimportant and meaningless.
Alex is an ML engineer at a startup that fights aging. He believes that AGI will either destroy humanity or bring about a utopia, and that, among other things, it will stop aging. So Alex thinks his job is meaningless and quits. He also sometimes asks himself, “Should I invest? Should I exercise? Should I even floss my teeth? It all seems meaningless.”
No one knows what the post-AGI world will look like. All predictions are wild speculation, and it’s very hard to tell whether any actions unrelated to AI safety are meaningful. This uncertainty can cause anxiety and depression.
These problems are an exacerbated version of the existential problem of the meaninglessness of life, and the way to mitigate them is to rediscover meaning in a world that ultimately has none.
I feel like this is an instance of a more general issue: we are bad at rescaling utility when we encounter new situations, and our non-utilitarian way of evaluating outcomes can cause us a great deal of pain. Utopia and doom/dystopia are limiting cases of this problem: information that appears to change our utility calculations vastly, especially toward negative utility, produces psychological problems such as denial or guilt.
Essentially, the way to handle this problem is to do two things:
Reset the zero point, so that the way the world works now, in light of the new information, is what counts as zero.
Rescale utilities in the opposite direction: instead of assigning massive utility or disutility to vastly important problems, scale everything down so that other problems have less utility than this one, while even the most important problems retain something approximating a normal utility.
philip_b has the gory details on that process, and it’s worth taking a look at it:
I suggest not only shifting the zero point, but also scaling utilities when you update on information about what’s achievable and what’s not. For example, suppose you thought that saving 1-10 people in poor countries was the best you could do with your life, and you felt like every life saved was +1 utility. But then you learned about longtermism and figured out that if you try, then in expectation you can save 1kk lives in the far future. In such a situation it doesn’t make sense to continue caring about saving an individual life as much as you cared before this insight—your system 1 feeling for how good things can be won’t be able to do its epistemological job then. It’s better to scale utility of saving lives down, so that +1kk lives is +10 utility, and +1 life is +1/100000 utility. This is related to Caring less.
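The two adjustments can be sketched as a small calculation. This is a minimal illustrative sketch, not anything from the original post: the function name and the linear-rescaling choice are my assumptions, and the numbers come from philip_b’s example (1kk, i.e. one million, lives rescaled to +10).

```python
# Illustrative sketch (not from the post): shift the zero point,
# then linearly rescale so the best achievable outcome feels like
# a modest, "normal" amount of utility.

def rescale(raw_utility, new_zero, best_raw, best_scaled):
    """Map an old-scale utility onto the new scale.

    raw_utility: outcome value on the old scale
    new_zero:    outcome now treated as neutral (step 1: reset zero)
    best_raw:    best achievable outcome on the old scale
    best_scaled: what that best outcome should feel like (step 2)
    """
    scale = best_scaled / (best_raw - new_zero)
    return (raw_utility - new_zero) * scale

# philip_b's numbers: saving 1,000,000 lives should feel like +10,
# so one life saved becomes +1/100,000.
print(rescale(1_000_000, 0, 1_000_000, 10))  # → 10.0
print(rescale(1, 0, 1_000_000, 10))          # → 1e-05
```

The point of the linear rescaling is that relative priorities between outcomes are preserved; only the emotional magnitude attached to them shrinks to something system 1 can work with.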
In general, I kind of wish rationalists would frame their pitches, at least later on, as essentially being about caring less about certain problems, rather than caring more about cause X.
What is this zero point?
Essentially, it is whatever you count as neutral, or consider normal, as distinguished from negative or positive states of the world.