The present is better than it might have been because some people cared about it. The future will be better if some people care about it. I think of cooperating on behalf of the future as part of a high-level prisoner’s dilemma. Yes, you can technically get away with only caring about your own personal happiness and future. Your cooperation does not actually improve your lot. But if everyone operates on that algorithm in the name of self-interested-rationality, everybody suffers.
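To make the prisoner’s-dilemma framing concrete, here is a toy one-shot payoff matrix (the numbers are the standard illustrative ones, not anything from this discussion):

    # Toy one-shot prisoner's dilemma. Defecting dominates for each player
    # individually, but if everyone reasons that way, both end up worse off
    # than if both had cooperated. Payoffs are (my score, their score).
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    for mine in ("cooperate", "defect"):
        for theirs in ("cooperate", "defect"):
            my_score, their_score = PAYOFFS[(mine, theirs)]
            print(f"I {mine}, they {theirs}: {my_score} vs {their_score}")
    # Whatever they do, I score more by defecting (5 > 3, 1 > 0), yet mutual
    # defection (1, 1) is worse for both than mutual cooperation (3, 3).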
I don’t think most people should dedicate their entire lives to THE FUTURE™ (I do not intend to). That’s a hard job that only some people are cut out for. But I do think people should spend some amount of time thinking about where, on the margins, they can work to make the future (and present) better WITHOUT sacrificing their own happiness, because most people are basically bleeding utility that doesn’t benefit anyone.
(e.g. not even bothering to write that existential-risk-mitigation agency a check every now and then, or whatever form of philanthropy they’re most concerned with)
But I also think that, in doing so, some percentage of the population would realize that they DO care about the future in the abstract, not just for their own benefit, and that they can self-modify into the sort of person who derives pride and joy from working on the problem, even if taking it seriously requires them to embrace truths that are not just uncomfortable but genuinely depressing.
While I don’t plan on dedicating all my life to philanthropic purposes, I think I’m the sort of person who will end up falling in the middle—I’m working on improving my philanthropy-on-the-margins, and I think that I will probably do at least one major, challenging project in my life that I wouldn’t have done if I hadn’t started down this path. (Not sure, just a guess).
Yes, you can technically get away with only caring about your own personal happiness and future. Your cooperation does not actually improve your lot. But if everyone operates on that algorithm in the name of self-interested-rationality, everybody suffers.
You misunderstand. I do cooperate where appropriate, because it is in my self-interest, and if everyone else did the same the world would be much better for everyone!
I cooperate because that’s a winning strategy in the real-world, iterated PD. My cooperation does improve my lot because others can reciprocate and because we can mutually precommit to cooperating in the future. (There are also second-order effects such as using cooperation for social signalling, which also promote cooperation and altruism, although in nonoptimal forms.)
If it weren’t a winning strategy, I expect people in general would cooperate a lot less. Just because we can subvert or ignore Azatoth some of the time doesn’t mean we can expect to do so regularly. Cooperation is a specific, evolved behavior that persists for good game-theoretic reasons.
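A minimal sketch of why reciprocation pays in the iterated case (tit-for-tat is just one well-known reciprocating strategy, and the payoffs are the usual illustrative ones; none of this is specific to the argument above):

    # Iterated PD: a reciprocating strategy (tit-for-tat) versus unconditional
    # defection, with the usual payoffs: mutual cooperation 3/3, mutual
    # defection 1/1, sucker 0, temptation 5.
    def payoff(me, them):
        return {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}[(me, them)]

    def tit_for_tat(opponent_moves):
        # Cooperate first, then copy whatever the opponent did last round.
        return opponent_moves[-1] if opponent_moves else "C"

    def always_defect(opponent_moves):
        return "D"

    def play(strategy_a, strategy_b, rounds=100):
        score_a = score_b = 0
        moves_a, moves_b = [], []  # each player's own moves so far
        for _ in range(rounds):
            a, b = strategy_a(moves_b), strategy_b(moves_a)
            score_a += payoff(a, b)
            score_b += payoff(b, a)
            moves_a.append(a)
            moves_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (300, 300): reciprocators prosper
    print(play(always_defect, always_defect))  # (100, 100): everybody suffers
    print(play(tit_for_tat, always_defect))    # (99, 104): the defector gains almost nothing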
If the only chance for the future lay in people cooperating against their personal interests, then I would have much less hope for a good future. But luckily for us all, cooperation is rewarded, even when one’s marginal contribution is insignificant or the resources might be better spent on personal projects. Most people do not contribute towards e.g. X-risk reduction, not because they are selfishly reserving resources for personal gain, but because they are misinformed, irrational, biased, and so on. When I say that I place supreme value on personal survival, that calculation has to include X-risks as well as driving accidents.
The last section of my comment was meant to indicate that I value humanity/the-future for its own sake, in addition to cooperating in the iterated PD. I estimate that The-Rest-Of-The-World’s welfare makes up around 5-10% of my utility function. In order to be maximally satisfied with life, I need to believe that about 5-10% of my efforts contribute to that.
(This is a guess. Right now I’m NOT maximally happy and I do not currently put that much effort in, but based on my introspection so far, it seems about right. I know that I care about the world independent of my welfare to SOME extent, but I know that realistically I value my own happiness more, and am glad that rational choices about my happiness also coincide with making the world better in many ways.)
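One simple way to cash the 5-10% figure out as arithmetic (the weight is the rough estimate above; the linear form and the 0-1 scales are assumptions made purely for illustration):

    # A weighted-sum toy model of "the rest of the world is ~5-10% of my
    # utility function". The 0.05-0.10 weight is the rough estimate from the
    # comment; everything else here is an illustrative assumption.
    def total_utility(own_welfare, world_welfare, world_weight=0.1):
        return (1 - world_weight) * own_welfare + world_weight * world_welfare

    print(total_utility(own_welfare=0.9, world_welfare=0.2))  # ~0.83
    print(total_utility(own_welfare=0.5, world_welfare=0.9))  # ~0.54
    # With a weight this small, a large hit to personal happiness is not made
    # up for by a moderate gain to the world, though the world term is never zero.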
I would take a pill that made me less happy but a better philanthropist, but not a pill that would make me unhappy, even if it made me a much better philanthropist.
Edit: These are my personal feelings, which I’d LIKE other people to share, but I don’t expect to convince them to.
I think the way I phrased that was wrong.
I think we’re basically in agreement.
The last section of my comment was indicating that I value humanity/the-future for its own sake, in addition to cooperating in iterated PD. I estimate that The-Rest-Of-The-World’s welfare makes up around 5-10% of my utility function. In order for me to be maximally satisfied with life, I need to believe that about 5-10% of my efforts need to contribute to that.
(This is a guess. Right now I’m NOT maximally happy, I do not currently put that much effort in, but based on my introspection so far, it seems about right. I know that I care about the world independent of my welfare to SOME extent, but I know that realistically I value my own happiness more, and am glad that rational choices about my happiness also coincide with making the world better in many ways).
I would take a pill that made me less happy but a better philanthropist, but not a pill that would make unhappy, even if it made me a much better philanthropist.
Edit: This is my personal feelings, which I’d LIKE other people to share but I don’t expect to convince them to.