For example, I could say that, from the perspective of epistemic rationality, I “shouldn’t” believe that buying that burrito will create more utility in expectation than donating the same money to AMF would. This is because holding that belief won’t help me meet the goal of having accurate beliefs.
There is a phenomenon in AI safety called “you can’t fetch the coffee if you’re dead”. A perfect total utilitarian, or even a money maximiser, would still need to eat if they want to be able to work next year. If you have a well-paid job, or a good chance of getting one, don’t starve yourself. Eat something quick, cheap, and healthy: quick so you can work more today, and healthy so you can keep working for years to come. In a world where you need to wear a sharp suit to be CEO, utilitarians should buy sharp suits. Don’t fall for the false economy of personal deprivation. This doesn’t entitle utilitarians to whatever luxury they feel like: if most of your money is going on sharp suits, it isn’t a good job. A sharp-suited executive should be able to donate far more than a cardboard-box-wearing ditch digger.
Fair point. I’ve now replaced it with “buying a Ferrari”, which, while still somewhat debatable, seems a lot less so. Thanks for the feedback!
I do think there’s a sense in which, under most reasonable assumptions, it’d be true that buying the burrito itself won’t maximise universe-wide utility, partly because there’s likely some cheaper food option. But that requires some assumptions, and there’s also a good chance that, if we’re really talking about someone actively guided by utilitarianism, they’ve probably got a lot of good to do, and will likely do it better in the long run if they don’t overthink every small action and instead mostly use some policies/heuristics (e.g., allow myself nice small things, but don’t rationalise endless overseas holidays and shiny cars). And then there’s also the point you raise about how one would look to others, and the consequences of that.
I do remember noticing when writing this post that that was an unnecessarily debatable example (the kind that whole posts could be, and have been, written about how to handle), but for some reason I then dropped that line of thinking.
Ehn, I think this is dodging the question. There _ARE_ things one could do differently if one truly believed that others were as important as oneself. NOBODY actually behaves that way. EVERYONE does things that benefit themselves using resources that would certainly give more benefit to others.
Any moral theory that doesn’t recognize self-interest as an important factor does not apply to any being we know of.
I would say that’s yet another set of (related) debates that are interesting and important, but not core to this post :)
Examples of assumptions/questions/debates that your comment seems to make/raise:
What is it to “truly believe” others are as important as oneself? Humans aren’t really cohesive agents with a single utility function and set of beliefs. Maybe someone does believe that, on some level, but it just doesn’t filter through to their preferences, or their preferences don’t filter through to their behaviours.
Is “true altruism” possible? There are arguably some apparent cases, such as soldiers jumping on grenades to save their brothers in arms, or that guy who jumped on the subway tracks to save a stranger.
What does “true altruism” even mean?
Should we care whether altruism is “true” or not? If so, why?
As I suggested above, would it really be the case that a person who does act quite a bit based on (effective) altruism would bring more benefit to others by trying to make sure every little action benefits others as much as possible, rather than by setting policies that save themselves time and emotional energy on the small matters so they can spend it on bigger things?
Is the goal of moral philosophy to find a moral theory that “applies” to beings we know of, or to find the moral theory these beings should follow?
More generally, what criteria should we judge moral theories by?
What’s the best moral theory?
A bunch of metaethical and metaphysical cans of worms that get opened up in trying to tackle the last three questions
Each of those points would deserve at least one post of its own, if not a series of books by people who have dedicated their whole lives to studying and debating these matters.
This post wasn’t trying to chuck all that in one place. This post is just about disentangling what we even mean by “morality” from other related concepts.
So I guess maybe I’m biting the bullet of the charge of dodging the question? I.e., that was exactly my intention when I switched to an example “which, while still somewhat debatable, seems a lot less so”, because this post is about things other than those debates.