My reflex answer is that I should calculate the average amount of utils that I gain per unit of fun given to a random person and lose per unit of torture applied to a random person. I am not a true utilitarian, so this would be affected by the likelihood that the person I picked was of greater importance to me (causing a higher number of utils to be gained/lost for fun/torture, respectively) than a random stranger.
Now, let’s try to frame this somewhat quantitatively. Pretend that the world is divided into really happy people (RHP) who experience, by default, 150 Fun Units (funU) per month, happy people (HP) who experience 100 funU/mo by default, and sad people (SP) who experience only 50 funU/mo by default. The world is composed of .05 RHP, .7 HP, and .25 SP. For modeling purposes, being tortured means that you lose all of your fun, and then your fun comes back at a rate of 10% of its original level per month, so you are back to baseline after ten months.

There isn’t a significant number of people to whom I attach greater-than-stranger importance, so this doesn’t actually affect the calculation much... except that I think we have at least some chance of getting an FAI working, and Eliezer might be mentally damaged if he got tortured for a month. Were I actually faced with this choice, I would probably come up with a more accurate calculation, but I’ll estimate that this factor causes me to arbitrarily bump up everyone’s default fun values by 25 funU/mo.
Fun lost for the average RHP is 175 + (175 × .9) + (175 × .8) + (175 × .7), and so on down to (175 × .1).
Fun lost for the average HP is 125 + (125 × .9) + (125 × .8) + (125 × .7), and so on down to (125 × .1).
Fun lost for the average SP is 75 + (75 × .9) + (75 × .8) + (75 × .7), and so on down to (75 × .1).
We average the three, weighting each by a factor of .05, .7, and .25, respectively, and get a number, expressed in funU. Anything higher than this would be my answer, and I predict that I would accept the offer regardless of how many people this funU was split amongst.
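Roughly, in Python (a sketch of the calculation, assuming the 10%/mo recovery means the monthly loss fractions run 1.0, 0.9, ..., 0.1 over ten months, and using the bumped-up fun values from above):

```python
# Rough sketch of the weighted fun-loss calculation above.
# Assumption: "fun comes back at 10%/mo" means the loss fractions
# run 1.0, 0.9, ..., 0.1 over ten months.

RECOVERY_MONTHS = 10
total_loss_factor = sum(RECOVERY_MONTHS - m for m in range(RECOVERY_MONTHS)) / RECOVERY_MONTHS
# = (10 + 9 + ... + 1) / 10 = 5.5 lost "months of fun"

# (population weight, default funU/mo bumped by 25)
groups = {
    "RHP": (0.05, 150 + 25),
    "HP":  (0.70, 100 + 25),
    "SP":  (0.25,  50 + 25),
}

expected_loss = 0.0
for name, (weight, fun_per_month) in groups.items():
    loss = fun_per_month * total_loss_factor   # e.g. 175 * 5.5 = 962.5 for RHP
    print(f"{name}: {loss:.1f} funU lost")
    expected_loss += weight * loss

print(f"Weighted average loss: {expected_loss:.1f} funU")   # about 632.5 funU
```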
I am not a true utilitarian, so this would be affected by the likelihood that the person I picked was of greater importance to me (causing a higher number of utils to be gained/lost for fun/torture, respectively) than a random stranger.
You needn’t value all people equally to be a true utilitarian, at least in the sense the word is used here.
...really happy people (RHP) who experience, by default, 150 Fun Units (funU) per month, happy people (HP) who experience 100 funU/mo by default, and sad people (SP) who experience only 50 funU/mo by default. … being tortured means that you lose all of your fun...
I think you are seriously underestimating torture by supposing that the difference between really happy (top 5% level) and sad (bottom 25% level) is bigger than between sad and tortured. It should rather be something like: really happy 100 U, happy 70 U, sad 0 U, tortured −3500 U.
You needn’t value all people equally to be a true utilitarian, at least in the sense the word is used here.
Really? Is attaching any amount of utility to other people’s utility functions and/or feelings all I need to do to be a utilitarian?
I think you are seriously underestimating torture by supposing that the difference between really happy (top 5% level) and sad (bottom 25% level) is bigger than between sad and tortured. It should rather be something like: really happy 100 U, happy 70 U, sad 0 U, tortured −3500 U.
Uh, oops. I’m thinking that I could respond with this counterargument: “But 0 funU is really, really bad—you’re just sticking the really bad mark at −3500 while I’m sticking it at zero.”
Sadly, the fact that I could make that sort of remark reveals that I haven’t actually made much of a claim at all in my post, because I haven’t defined what 1 funU is in real-world terms. All I’ve really assumed is that funU is additive, which doesn’t make much sense considering human psychology.
There goes that idea.
Is attaching any amount of utility to other people’s utility functions and/or feelings all I need to do to be a utilitarian?
Attach amounts of utility to possible states of the world; otherwise, there are no constraints. That is how utilitarianism is probably understood by most people here. Outside LessWrong, different definitions may be predominant.
“But 0 funU is really, really bad—you’re just sticking the really bad mark at −3500 while I’m sticking it at zero.”
As you wish: so really happy 3600, happy 3570, sad 3500, tortured 0. Utility functions should be invariant with respect to additive or (positive) multiplicative constants. (Any monotonic transformation may work if applied to your whole utility function, but not to parts you are going to sum.) I was objecting to the relative differences: in your original setting, assuming additivity (not wrong per se), moving one person from sad to very happy would balance moving two other people from sad to tortured. That seems obviously wrong.
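A quick numerical check of that point (a Python sketch comparing single-month levels only, ignoring the multi-month recovery):

```python
# Compare the "one sad -> really happy" gain against the
# "two sad -> tortured" loss under each scale (single-month levels only).

scales = {
    "original": {"really_happy": 150, "happy": 100, "sad": 50, "tortured": 0},
    "revised":  {"really_happy": 100, "happy": 70,  "sad": 0,  "tortured": -3500},
}

for name, u in scales.items():
    gain = u["really_happy"] - u["sad"]       # one person: sad -> really happy
    loss = 2 * (u["sad"] - u["tortured"])     # two people: sad -> tortured
    print(f"{name}: gain {gain}, loss {loss}, net {gain - loss}")

# original: gain 100, loss 100, net 0      -> the trades exactly balance
# revised:  gain 100, loss 7000, net -6900 -> torture dominates
```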