Ehn, I think this is dodging the question. There _ARE_ things one could do differently if one truly believed that others were as important as oneself. NOBODY actually behaves that way. EVERYONE does things that benefit themselves using resources that would certainly give more benefit to others.
Any moral theory that doesn’t recognize self-interest as an important factor does not apply to any being we know of.
I would say that’s yet another set of (related) debates that are interesting and important, but not core to this post :)
Examples of assumptions/questions/debates that your comment seems to make/raise:
What is it to “truly believe” others are as important as oneself? Humans aren’t really cohesive agents with a single utility function and set of beliefs. Maybe someone does believe that, on some level, but it just doesn’t filter through to their preferences, or their preferences don’t filter through to their behaviours.
Is “true altruism” possible? There are arguably some apparent cases, such as soldiers jumping on grenades to save their brothers in arms, or that guy who jumped on the subway tracks to save a stranger.
What does “true altruism” even mean?
Should we care whether altruism is “true” or not? If so, why?
As I suggested above, would a person who already acts substantially on (effective) altruism really bring more benefit to others by trying to make sure every little action benefits others as much as possible, rather than by setting policies that save them time and emotional energy on the small matters so they can spend it on bigger things?
Is the goal of moral philosophy to find a moral theory that “applies” to beings we know of, or to find the moral theory these beings should follow?
More generally, what criteria should we judge moral theories by?
What’s the best moral theory?
A bunch of metaethical and metaphysical cans of worms that get opened up in trying to tackle the last three questions
Each of those points would deserve at least one post of its own, if not a series of books by different people who have dedicated their whole lives to debating these matters.
This post wasn’t trying to chuck all that in one place. This post is just about disentangling what we even mean by “morality” from other related concepts.
So I guess maybe I’m biting the bullet of the charge of dodging the question? I.e., that was exactly my intention when I switched to an example “which, while still somewhat debatable, seems a lot less so”, because this post is about things other than those debates.