I always assumed that altruists are aiming for a different region in the space of all possible configurations of the future light cone than egoists are, but maybe I need to examine that assumption.
I think that these converge in at least some cases. For example, an egoist wanting to live a really long time might work on reducing existential risks because it’s the most effective way of ensuring that he wakes up from being frozen. An altruist might work on reducing existential risks because he wants to save (other) people, not just himself.
I always assumed that altruists are aiming for a different region in the space of all possible configurations of the future light cone than egoists are
They do act on conflicting explicit reasons (to some small extent), but I don’t think they should.