There is no way to coherently hold utilitarianism without it leading to “the repugnant conclusion” that we should maximise reproduction.
Everyone who thinks they’re utilitarian is engaging in signalling behaviour by claiming to value the happiness of other agents, but always rationalizes a utility argument for the self-serving position they already held anyway.
There’s an unfortunate trick of naming here: some LWers use “utilitarian” to describe “valuing different outcomes by real numbers and acting to maximize expected value according to some decision theory”, and would thus describe a paperclip maximizer as utilitarian. I can easily accept that traditional Benthamite utilitarianism has no answer to the repugnant conclusion, though.
I mostly agree with this point, but the dust-speck/torture debate showed me that believing certain things about the additive properties of virtue/pleasure/suchlike commits me to choosing torture over dust specks. That is, unless your moral theory prohibits certain trade-offs, dust specks come out worse.
Further, utilitarian theories are all committed to the additive property. Is it unfair to say that any theory that accepts the additive property (and is trying to maximize human virtue/pleasure/suchlike) is utilitarian?
Not all utilitarian theories are committed to the additive property; “average utilitarianism” is the most famous exception. Trying to maximize human “virtue” strikes me as quite a different thing from maximizing pleasure/utility, and not something I would call utilitarianism. But in general, yes: total act utilitarianism is the belief in maximizing some additive property x, where x is something like pleasure, desires, or preferences.
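To make the split concrete, here’s a toy sketch in Python (all numbers made up, and “welfare” crudely modeled as one real number per person, which is itself a contested assumption):

```python
# Toy comparison (made-up numbers): total vs. average utilitarianism
# on a repugnant-conclusion-style choice between two worlds.
# Each world is (population size, welfare per person), with everyone
# identical for simplicity.

def total_utility(size, welfare):
    # Total view: welfare summed over everyone (the additive property).
    return size * welfare

def average_utility(size, welfare):
    # Average view: per-person welfare, indifferent to head count.
    return welfare

world_a = (1_000, 100.0)           # few people, excellent lives
world_z = (10_000_000_000, 0.1)    # vast population, lives barely worth living

print(total_utility(*world_a), total_utility(*world_z))
# 100000.0 vs 1000000000.0 -> the total view prefers Z (repugnant conclusion)

print(average_utility(*world_a), average_utility(*world_z))
# 100.0 vs 0.1 -> the average view prefers A (with its own well-known problems)
```

The average view escapes the repugnant conclusion precisely by dropping additivity, which is the point above.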
People realize “the repugnant conclusion” is just the other side of the torture/dust-specks coin, right?
How did I not see that before? Wow.
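Spelling the parallel out with a toy calculation (all magnitudes invented; 3 ** 33 stands in for Eliezer’s 3^^^3, which is incomprehensibly larger):

```python
# Same additive structure in both puzzles (all magnitudes invented):
# a tiny per-person quantity, summed over an enormous population,
# swamps one huge per-person quantity.

SPECK = 1e-9      # disutility of one dust speck, in arbitrary units
TORTURE = 1e6     # disutility of 50 years of torture, same units
N = 3 ** 33       # stand-in for 3^^^3; even this is ~5.6e15 people

print(N * SPECK > TORTURE)   # True: the additive view picks torture

# Flip the signs and you get the repugnant conclusion: N lives of
# utility +1e-9 sum to more than a few lives of utility +1e6.
```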
This is stupid. Utilitarianism does not decide what you value, only what you should do once you know your values. If you don’t care about the absolute total amount of utility, there’s no reason to maximize reproduction.
Technically it does limit what you can value. If you allow any value system at all, you get consequentialism, not utilitarianism.
I don’t know what the limits are, but apparently ethical egoism and ethical altruism don’t count.
If you don’t care about the absolute total amount of utility, then you are not a total utilitarian.
(Which is fine, and I’d agree with you, but if you really mean average utilitarianism or some kind of Millian or Rawlsian variation, you should specify, since people here usually mean total utilitarianism when they use the word.)