How does owning up to that make it worse?
Because it’s horribly depressing for a lot of people?
I’m the sort of person who’s okay with that, but really comprehending that in a non-compartmentalized way is difficult for many people. (It also took me a while to become the kind of person who IS okay with it). Partly because of the sheer scope of it, and partly because it’s a weird outlier belief that isn’t socially acceptable.
There’s a reason “God has a mysterious plan that ultimately makes everything okay somehow” is a popular meme.
But … if they don’t own up to it … the world is still going to kill them without a second thought, except now they don’t even know they need to be careful!
In practice, I don’t think it dramatically affects how careful they are. People who believe in God may still look both ways when crossing the street and buy health insurance. The primary difference is that when someone gets sick or loses their legs or dies, they have a comforting lie to tell themselves.
No, this still isn’t the best scenario, because they also aren’t making good decisions about politics or charity that might actually improve the world. But deciding to do THOSE things requires a lot of additional new beliefs that will all take time to integrate, and in the meantime there’ll be depressing existential angst that reduces their quality of life.
Even people I know whom I consider pretty rational, good decision-makers avoid talking to me about death and immortality because they find my views really depressing.
The Litany isn’t going to be much help to people who are so uncommitted to rationality that they will shy away from any depressing idea. To quote Beyond the Reach of God:
But this post was written for those who have something to protect.
If you really need rationality, then following the Litany is indeed necessary.
people who are so uncommitted to rationality that they will shy away from any depressing idea
I don’t think that’s a fair characterization at all.
There are lots of things worth protecting. You can want to protect some things and not care so much about others.
Beyond the Reach of God is part of a sequence designed to inspire people to care about a particular thing in a particular way. And I heartily endorse that goal. But that line “this post was written for those who have something to protect” is powerful specifically because it acknowledges that this is coming with a huge cost. Wrapping your brain around the sheer horror of the world is hard. Lonely dissent is hard. Radically changing your worldview is hard. Translating all of this into a meaningful course of action that actually benefits the thing you care about is hard.
That line of that post comes right after Eliezer has acknowledged that it will make you less happy. And whether or not happiness is the only thing you care about, most people are going to care about it significantly. And even if you make that sacrifice, you might fail to translate your new beliefs into meaningful actions. (Edit: Actually, I think a big reason the conversation about death was depressing was because there was no corresponding action to take to fix it, and for me to explain such a course of action would have required a huge bridging of inferential distance which would have sounded condescending and made them tune out. Engaging the depressing fact will only seem to be a rational choice if that inferential distance has already been crossed).
I think people should want to protect the future, more than they want to ensure their own happiness (and possibly that of their children). But most people don’t. It’s not that they don’t have anything to protect, they just don’t have something they consider worth sacrificing their happiness to protect.
I don’t want to protect the future much more than I want to protect my future. I prefer to put all my resources into increasing the likelihood that I, personally, will be there to enjoy that future—even before putting resources into ensuring it will be a good one.
I’m not sure if I understand you correctly; do you mean that I “should” want to protect The Future (of humanity/etc) significantly beyond my own future? If so, what exactly do you mean by should, and why?
I think the way I phrased that was wrong.
The present is better than it might have been because some people cared about it. The future will be better if some people care about it. I think of cooperating on behalf of the future as part of a high-level prisoner’s dilemma. Yes, you can technically get away with only caring about your own personal happiness and future. Your cooperation does not actually improve your lot. But if everyone operates on that algorithm in the name of self-interested rationality, everybody suffers.
I don’t think most people should dedicate their entire lives to THE FUTURE™ (I do not intend to). That’s a hard job that only some people are cut out for. But I do think people should spend some amount of time thinking about where, on the margins, they can work to make the future (and present) better WITHOUT sacrificing their own happiness, because most people are basically bleeding utility that doesn’t benefit anyone.
(e.g. not even bothering to write that existential-risk-mitigation agency a check every now and then, or whatever form of philanthropy they’re most concerned with)
But I also think that, in doing so, some percentage of the population would realize that they DO care about the future in the abstract, not just for their own benefit, and that they can self-modify into the sort of person who derives pride and joy from working on the problem, even if taking it seriously requires them to embrace truths that are not just uncomfortable but genuinely depressing.
While I don’t plan on dedicating all my life to philanthropic purposes, I think I’m the sort of person who will end up falling in the middle—I’m working on improving my philanthropy-on-the-margins, and I think that I will probably do at least one major, challenging project in my life that I wouldn’t have done if I hadn’t started down this path. (Not sure, just a guess).
Yes, you can technically get away with only caring about your own personal happiness and future. Your cooperation does not actually improve your lot. But if everyone operates on that algorithm in the name of self-interested rationality, everybody suffers.
You misunderstand. I do cooperate where appropriate, because it is in my self-interest, and if everyone else did the same, the world would be much better for everyone!
I cooperate because that’s a winning strategy in the real-world, iterated PD. My cooperation does improve my lot because others can reciprocate and because we can mutually precommit to cooperating in the future. (There are also second-order effects such as using cooperation for social signalling, which also promote cooperation and altruism, although in nonoptimal forms.)
If it weren’t a winning strategy, I expect people in general would cooperate a lot less. Just because we can subvert or ignore Azathoth some of the time doesn’t mean we can expect to do so regularly. Cooperation is a specific, evolved behavior that persists for good game-theoretical reasons.
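For concreteness, here’s a toy sketch (my own illustration, not anything from the post; standard payoff values and a simple tit-for-tat strategy are assumed) of why reciprocation beats unconditional defection once the game repeats:

```python
# Toy iterated prisoner's dilemma (illustrative only; standard payoff values assumed).
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Never cooperate, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Play repeated rounds; each strategy only sees the other's past moves."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)
        b = strategy_b(moves_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation pays
print(play(always_defect, tit_for_tat))  # (204, 199): one exploitation, then mutual punishment
```

The defector wins a single round of exploitation and then loses the surplus from every cooperative round afterward, which is the sense in which my cooperation does improve my lot.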
If the only chance for the future lay in people cooperating against their personal interests, then I would have much less hope for a good future. But luckily for us all, cooperation is rewarded, even when one’s marginal contribution is insignificant or the resources might be better spent on personal projects. Most people do not contribute towards e.g. X-risk reduction, not because they are selfishly reserving resources for personal gain, but because they are misinformed, irrational, biased, and so on. When I say that I place supreme value on personal survival, I must include X-risks in that calculation as well as driving accidents.
I think we’re basically in agreement.
The last section of my comment was indicating that I value humanity/the-future for its own sake, in addition to cooperating in the iterated PD. I estimate that The-Rest-Of-The-World’s welfare makes up around 5-10% of my utility function. In order for me to be maximally satisfied with life, I need to believe that about 5-10% of my efforts actually contribute to that.
(This is a guess. Right now I’m NOT maximally happy; I do not currently put that much effort in, but based on my introspection so far, it seems about right. I know that I care about the world independent of my welfare to SOME extent, but I know that realistically I value my own happiness more, and am glad that rational choices about my happiness also coincide with making the world better in many ways).
I would take a pill that made me less happy but a better philanthropist, but not a pill that would make me unhappy, even if it made me a much better philanthropist.
Edit: These are my personal feelings, which I’d LIKE other people to share, but I don’t expect to convince them to.