> You needn’t value all people equally to be a true utilitarian, at least in the sense the word is used here.
Really? So all I need to do to be a utilitarian is attach any amount of utility to other people’s utility functions and/or feelings?
> I think you are seriously underestimating torture by supposing that the difference between really happy (top 5% level) and sad (bottom 25% level) is bigger than the difference between sad and tortured. It should be more like: really happy 100 U, happy 70 U, sad 0 U, tortured −3500 U.
Uh, oops. I’m thinking that I could respond with this counterargument: “But 0 funU is really, really bad; you’re just sticking the really bad mark at −3500 while I’m sticking it at zero.”
Sadly, the fact that I could make that sort of remark reveals that I haven’t actually made much of a claim in my post, because I haven’t defined what 1 funU is in real-world terms. All I’ve really assumed is that funU is additive, which doesn’t make much sense given human psychology.
There goes that idea.
> So all I need to do to be a utilitarian is attach any amount of utility to other people’s utility functions and/or feelings?
Attach amounts of utility to possible states of the world; otherwise, no constraints. That is how utilitarianism is probably understood by most people here. Outside LessWrong, different definitions may be predominant.
> “But 0 funU is really, really bad; you’re just sticking the really bad mark at −3500 while I’m sticking it at zero.”
As you wish: then really happy 3600, happy 3570, sad 3500, tortured 0. Utility functions should be invariant under adding a constant or multiplying by a positive constant. (Any monotonic transformation may work if applied to your whole utility function, but not to the parts you are going to sum.) I was objecting to the relative differences: in your original setting, assuming additivity (which is not wrong per se), moving one person from sad to really happy would balance moving two other people from sad to tortured. That seems obviously wrong.
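To make the arithmetic concrete, here is a minimal Python sketch. The ORIGINAL scale below is made up for illustration, since the post’s actual numbers are not quoted here; it is chosen so that, under summation, moving one person from sad to really happy exactly cancels moving two others from sad to tortured. The names (ORIGINAL, PROPOSED, total, affine) are mine, not anything from the thread.

```python
# A toy sketch (numbers hypothetical) of how the choice of scale matters.
# ORIGINAL stands in for the post's scale; PROPOSED is the scale suggested
# above (really happy 100, happy 70, sad 0, tortured -3500).
import math

ORIGINAL = {"really_happy": 100, "happy": 70, "sad": 0, "tortured": -50}
PROPOSED = {"really_happy": 100, "happy": 70, "sad": 0, "tortured": -3500}

def total(states, scale, transform=lambda u: u):
    """Total utility of a population, with an optional per-person transform."""
    return sum(transform(scale[s]) for s in states)

# Three people, all sad (status quo), versus the trade objected to above:
# one becomes really happy, the other two are tortured.
status_quo = ["sad", "sad", "sad"]
trade = ["really_happy", "tortured", "tortured"]

print(total(trade, ORIGINAL), total(status_quo, ORIGINAL))  # 0 0: the trade exactly balances
print(total(trade, PROPOSED), total(status_quo, PROPOSED))  # -6900 0: the trade is much worse

# A positive affine transform (u -> a*u + b, a > 0) applied to the whole scale
# never changes such comparisons, so shifting "the really bad mark" is harmless:
def affine(u):
    return 2 * u + 3500

assert (total(trade, ORIGINAL, affine) > total(status_quo, ORIGINAL, affine)) \
    == (total(trade, ORIGINAL) > total(status_quo, ORIGINAL))

# But a nonlinear monotonic transform applied per person before summing can
# change the verdict: exp is strictly increasing, yet it breaks the tie.
print(total(trade, ORIGINAL, math.exp) > total(status_quo, ORIGINAL, math.exp))  # True
```

Adding 3500 to every level, as in the reply above, changes nothing; what the objection targets is the relative size of the gaps, i.e. whether the gap from sad to really happy can outweigh several gaps from tortured to sad.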