Yes, you can technically get away with only caring about your own personal happiness and future. Your cooperation does not actually improve your lot. But if everyone operates on that algorithm in the name of self-interested-rationality, everybody suffers.
You misunderstand. I do cooperate where appropriate, because it is in my self-interest, and if everyone else did the same the world would be much better for everyone!
I cooperate because that’s a winning strategy in the real-world, iterated Prisoner’s Dilemma (PD). My cooperation does improve my lot, because others can reciprocate and because we can mutually precommit to cooperating in the future. (There are also second-order effects, such as using cooperation for social signalling, which also promote cooperation and altruism, although in suboptimal forms.)
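As a minimal sketch of why reciprocation makes cooperation pay in the repeated game (the payoff matrix, strategies, and round count below are my own illustrative assumptions, not anything from this thread):

```python
# Toy iterated Prisoner's Dilemma. Payoffs and strategies are illustrative only.

PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual reciprocation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): exploited only once
```

A pair of reciprocators ends up far ahead of a pair of defectors, and a reciprocator pays for its openness only once per opponent, which is roughly the sense in which cooperation “improves my lot” above.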
If it weren’t a winning strategy, I expect people in general would cooperate a lot less. Just because we can subvert or ignore Azatoth some of the time doesn’t mean we can expect to do so regularly. Cooperation is a specific, evolved behavior that persists for good game-theoretic reasons.
If the only chance for the future lay in people cooperating against their personal interests, then I would have much less hope for a good future. But luckily for us all, cooperation is rewarded, even when one’s marginal contribution is insignificant or might be better spent on personal projects. Most people fail to contribute towards e.g. X-risk reduction not because they are selfishly reserving resources for personal gain, but because they are misinformed, irrational, biased, and so on. When I say that I place supreme value on personal survival, I must include X-risks in that calculation as well as driving accidents.
I think we’re basically in agreement.
The last section of my comment indicated that I value humanity/the-future for its own sake, in addition to cooperating in the iterated PD. I estimate that The-Rest-Of-The-World’s welfare makes up around 5-10% of my utility function. In order for me to be maximally satisfied with life, I need to believe that about 5-10% of my efforts contribute to that.
(This is a guess. Right now I’m NOT maximally happy and I do not currently put that much effort in, but based on my introspection so far, it seems about right. I know that I care about the world independently of my welfare to SOME extent, but I know that realistically I value my own happiness more, and I am glad that rational choices about my happiness also coincide with making the world better in many ways.)
I would take a pill that made me less happy but a better philanthropist, but not a pill that would make me unhappy, even if it made me a much better philanthropist.
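To make the 5-10% figure and the two pill cases concrete, here is a toy linear model of that kind of utility function (the 0.10 weight, the linear form, and every number below are my own assumptions, purely for illustration):

```python
# Illustrative only: a linear utility with a small weight on the rest of the
# world. The 0.10 weight echoes the 5-10% estimate above; scenario numbers
# are made up.

W_WORLD = 0.10

def total_utility(own_happiness, world_welfare, w=W_WORLD):
    return (1 - w) * own_happiness + w * world_welfare

print(total_utility(10, 10))    # 10.0  baseline
print(total_utility(8, 40))     # 11.2  a bit less happy, much better philanthropist
print(total_utility(-20, 100))  # -8.0  outright unhappy, despite a huge philanthropic gain
```

Under that weighting, a small hit to personal happiness in exchange for a large philanthropic gain comes out ahead, while outright unhappiness does not, which matches the two pill cases.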
Edit: These are my personal feelings, which I’d LIKE other people to share, but I don’t expect to convince them to.