Utilitarianism isn’t a metaethic in the first place; it’s a family of ethical systems.
Good point. Here’s the intuition behind my comment. Classical utilitarianism starts with “maximize aggregate utility” and jumps off from there (Mill calls it obvious, then gives a proof he admits is flawed). This opens classical utilitarians up to a slew of standard criticisms (e.g. utility monsters). I’m not very well versed in more modern versions of utilitarianism, but my impression is that they do something similar. But, as you point out, all the utilitarian is saying is which utility function you should be maximizing (answer: the aggregate of the utility functions of all suitable agents).
EY’s metaethics, on the other hand, eventually says something like “maximize this specific utility function that we don’t know perfectly. Oh yeah, it’s your utility function, and most everyone else’s.” With a suitable utility function, EY’s metaethics would be completely compatible with utilitarianism, I admit, but such a utility function seems unlikely to be the one we actually have. The utilitarian has to take into account the murderer’s preference for murder, should that preference actually exist (and not be a confusion). It seems highly unlikely to me that I and most of my fellow humans (which is where the utility function in question lives) care about someone’s preference for murder, even assuming that I/we thought faster, were more rational, etc.
Oh, and a note on the “maximize your own utility function” language that I used. I tend to think about ethics in the first person: what should I do? Well, maximize my own utility function/preferences, whatever they are. I only start worrying about your preferences when I find out that they are information about my own preferences (or when my own preferences specifically include caring about yours). This is an explanation of how I’m thinking, but I should know better than to use this language on LW, where most people haven’t seen it before and so will be confused.
all the utilitarian is saying is which utility function you should be maximizing (answer: the aggregate of the utility functions of all suitable agents)
The answer is the aggregate of some function for all suitable agents, but that function needn’t itself be a decision-theoretic utility function. It can be something else, like pleasure minus pain or even pleasure-not-derived-from-murder minus pain.
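To make the distinction concrete, here is a rough formalization (the notation is mine, purely for illustration): preference utilitarianism aggregates the agents’ own decision-theoretic utility functions, while a hedonic variant aggregates some other welfare measure that needn’t coincide with any agent’s utility function.

\[
W_{\text{pref}}(x) = \sum_{i \in A} u_i(x),
\qquad
W_{\text{hedonic}}(x) = \sum_{i \in A} \big(\mathrm{pleasure}_i(x) - \mathrm{pain}_i(x)\big),
\]

where $A$ is the set of “suitable agents,” $u_i$ is agent $i$’s decision-theoretic utility function, and $x$ ranges over outcomes. The second aggregate is a perfectly sensible thing to maximize even though the summand isn’t anyone’s utility function.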
Ah, I was equating preference utilitarianism with utilitarianism.
I still think that calling yourself a utilitarian can be dangerous, if only because it instantly calls to mind a list of stock objections (in some interlocutors) that just don’t apply given EY’s metaethics. It may be worth sticking to the terminology despite the cost, though.