I feel like this post is the best reference I know for explaining possibly the highest-order bit of what drives most of my life plans and daily actions, and I think the same is true for many other people working on existential and catastrophic risk. I have a lot of specific beliefs about the future, but the core point of “look, we are clearly outside of the regime where you can simply apply the absurdity heuristic to stuff; something wild is going to happen, and many of those outcomes seem really important to bring about or to avoid” feels like one of the most central ones to my worldview.
I really appreciate this post for being both very approachable in covering this point and also very charitable to critics of it. It feels like a particularly good post to send to people who haven’t been deeply immersed in all the jargon on LW and in the broader Rationality/EA/X-Risk/Longtermist community, and even if they end up disagreeing with it, I expect it won’t cause very strong defensive reactions. That really isn’t something I want all posts to optimize for (or even anything close to the majority of posts on LW), but it is useful to have at least one instance of it for every important argument, and this post fills that niche quite well.
Always beware of the spectre of anthropic reasoning, though.