What do you think of my argument that we have non-self-interested reasons to pursue other-directed goals, even if those reasons perhaps aren’t as strong as we’d like?
I think that phenomenologically, you’re right. Other-directed goals (the need for relatedness, in SDT terminology) feel like they’re essentially other-directed.
I think the evolutionary cause of having other-directed goals is ultimately your own genetic proliferation, and I also think that autonomously holding other-directed goals improves your own well-being, above and beyond the benefit that others like you for it (e.g. Gore et al., 2009).
Stated differently, even if you’re optimising completely selfishly, you’ll have to be unselfish. We care about others simply because they are important to us, not because they make us happy; they are a terminal value. If we cared about them only instrumentally, we wouldn’t get the well-being benefits. But caring for them terminally also carries benefits for ourselves. I think that’s wonderful!