I expected that my intuitive preference for any number of dust specks over torture would be easy to formalize without stretching it too far. Does not seem like it.
On the other hand, given the preference for realism over instrumentalism on this forum, I’m still waiting for a convincing (for an instrumentalist) argument for this preference.
If you want a reason to prefer dust specks for others over torture for yourself, consistently egocentric values can do it. That will also lead you to prefer torture for others over torture for yourself. What about preferring torture for others over a dust speck for yourself? It’s psychologically possible, but the true threshold (beyond which one would choose torture for others) seems to lie somewhere between inconvenience for oneself and torture for oneself.
It seems that LW has never had a serious discussion about the likely fact that the true human value system is basically egocentric, with altruism being sharply bounded by the personal costs experienced; nor has there been a discussion about the implications of this for CEV and FAI.
ETA: OK, I see I didn’t say how a person would choose between dust specks for 3^^^3 others versus torture for one other. Will recently mentioned that you should take the preferences of the 3^^^3 into account: would they want someone to be tortured for fifty years, so that none of them got a dust speck in the eye? “Renormalizing” in this way is probably the best way to get a sensible and consistent decision procedure here, if one employs the model of humans as “basically egocentric but with a personal threshold of cost below which altruism is allowed”.
Do you really have that preference? For example, if all but one of trillions of humans were being tortured and had dust specks, would you trade the one torture-free human’s freedom from torture for the removal of specks from the tortured? If so, then you are just showing a fairly common preference (inequality is bad!), which is probably fine as an approximation of something you could formalize consequentially.
But that’s just an example. Often there’s some context in which your moral intuition is reversed, which is a useful probe.
(usual caveat: haven’t read the sequences)
Topic for discussion: Less Wrongians are frequentists to a greater extent than most folk who are intuitively Bayesian. The phrase “I must update on” is half code for (p<0.05) and half signalling, since presumably you’re “updating” a lot, just like regular humans.
When you consciously think “p<.05” do you really believe that the probability given the null hypothesis is less than 1⁄20, or are you just using a scientific-sounding way of saying “there’s pretty good evidence”?
Might this just be that people on LessWrong have (I’m assuming) nearly all studied frequentist statistics in the course of their schooling, but most probably have not studied Bayesian statistics?
It’s a psychological trick to induce more updating than is normal. Normal human updating tends to be insufficient.
If I recall correctly, Alicorn made a reference to reversing the utilities in this argument… would you think it better for someone to give up a life of the purest and truest happiness if, in exchange, they created all of the ten-second-or-less cat videos that will ever be on YouTube throughout all of history and the future?
My intuitions here say yes; it can be worth sacrificing your life (e.g. torturing yourself working at a startup) to create a public good which will do a small amount for a lot of people (e.g. making standard immunization injections also give people immunity to dust specks in their eyes).
Manipulative phrasing. Of course, it will always seem worth torturing yourself, yadda yadda, when framed as a volitional sacrifice. Does your intuition equally answer yes when asked if it is worth killing somebody to do etc etc? Doubt it (and not a deontological phrasing issue)
Certainly there’s a difference between what I said and the traditional phrasing of the dilemma; certainly the idea of sacrificing oneself versus another is a big one.
But the OP was asking for an instrumentalist reason to choose torture over dust specks. It is pretty far-fetched to imagine that literally torturing someone will actually accomplish… well, almost anything, unless they’re a supervillain creating a contrived scenario in which you have to torture them.
Where you will actually be trading quality of life for a barely tangible benefit on a large scale is in torturing yourself working at a startup. This is an actual decision people make: making their lives miserable in exchange for minor but widespread public goods. And I fully support the actual trades of this sort that people actually make.
That’s my instrumentalist argument for, as a human being, accepting the metaphor of dust specks versus torture, not my philosophical argument for a decision theory that selects it.
Was there any reason to think I didn’t understand exactly what you said the first time? You agree with me and then restate. Fine, but pointless. Additionally, unimaginative re: potential value of torture. Defending the lack of imagination in that statement by claiming torture is defined in part by primary intent would be inconsistent.
The reason I thought you didn’t understand what I was talking about was that I was calling on examples from day-to-day life, which is what I took “instrumentalist” to mean, and you started talking about killing people, which is not an event from day-to-day life.
If you are interested in continuing this discussion (and if not, I won’t object), let’s take this one step at a time; does that difference seem reasonable to you?
The day-to-day life bit is irrelevant. The volitional aspect is not irrelevant at all. Take the exact sacrifice you described but make it non-volitional: “torturing yourself working at a startup” becomes slavery when non-volitional. Presumably you find that trade-off less acceptable.
The volitional aspect is the key difference. The fact that your life is rich with examples of volitional sacrifice and poor in examples of forced sacrifice of this type is not some magic result that has something to do with how we treat “real” examples in day to day life. It is entirely because “we” (humans) have tried to minimize the non-volitional sacrifices because they are what we find immoral!
Point number one is: I don’t understand how you can say, when I am making an argument explicitly restricted to instrumental decision theory, that day-to-day life is irrelevant. Instrumentalism should ONLY care about day-to-day life.
With respect to forced sacrifice, my intuitions say I should just do the math, and that the reason volition is so important is that the reasonable expectation that one won’t be forced to make sacrifices is a big-ticket public good, meaning the math almost always comes out on its side. I think that you’re saying these choices have been screened off, but I think non-volitional choices have been screened off because they are in general bad trades rather than because “volition” is a magic word that lets you get whatever you want.
Point three, let’s turn this around… say someone is about to spend their entire life being tortured. Would you rescue them, if you knew it meant throwing a harmless dust speck into the eye of everyone ever to exist or be emulated? This should be equivalent, but both of the sacrifices here are forced since, at a minimum, some human beings are sociopaths and wouldn’t agree to take the dust speck.
If you want me to consider volition more closely, can you come up with some forced sacrifice choices that are reasonable exchanges that I might come across if I lived in a different world?
One possible idea: if I were the son of an African warlord, and I had the ability to make my parents’ political decrees more compassionate by talking to them after they blew off steam torturing people, but I could instead make them torture fewer people by talking to them beforehand.
Here my intuitions say I should let the individuals be tortured in exchange for affecting large-scale policy decisions.
I expected that my intuitive preference for any number of dust specks over torture would be easy to formalize without stretching it too far. Does not seem like it.
Well, preferences are pretty easy to fit. Utility(world) = e^-(# of specks) − 1000*(# of people getting tortured)
However, note that this still requires that there is some probability of someone being tortured that you would trade a dust speck for.
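A minimal sketch of what this first proposal implies, in Python (the formula and the constant 1000 are from the line above; the speck count and the break-even calculation are just illustration):

```python
import math

def utility(n_specks: int, n_tortured: int) -> float:
    """U(world) = e^-(# of specks) - 1000 * (# of people tortured), as written above."""
    return math.exp(-n_specks) - 1000 * n_tortured

# The speck term can lose at most 1 util in total, so any number of specks
# beats a single torture:
print(utility(10**100, 0) > utility(0, 1))   # True

# But expected utility still trades specks against *probabilities* of torture:
# avoiding your first speck is worth roughly a 6e-4 chance of one torture.
speck_cost = utility(0, 0) - utility(1, 0)   # 1 - e^-1, about 0.632
print(speck_cost / 1000)                     # ~6.3e-4 break-even probability
```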
It doesn’t work if you continuously increase the severity of the minor inconvenience / reduce the severity of torture and try to find where the two become qualitatively comparable, as pointed out in this reply. The only way I see it working is to assign zero disutility to specks (I advocated it originally to be at the noise level). Then I thought that it was possible to have the argument work reasonably well even with a non-zero disutility, but at this point I don’t see how.
Utility(world) = e^-(# of specks) + X*e^-(# of people getting tortured), where X is some constant larger than 1/(1-1/e) in the incommensurate case, and less than that in the commensurate limit.
Of course, this assumes some stuff about the number of people getting tortured / specked already—but that can be handled with a simple offset.
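A rough check of that constant, in Python (the function and the 1/(1 - 1/e) threshold are from the comment above; the particular X values and speck counts are made up for illustration):

```python
import math

def utility(n_specks: int, n_tortured: int, X: float) -> float:
    """U(world) = e^-(# of specks) + X * e^-(# of people tortured), as above."""
    return math.exp(-n_specks) + X * math.exp(-n_tortured)

X_CRIT = 1 / (1 - 1 / math.e)   # ~1.582, the claimed boundary between the two regimes

# X above the critical value: incommensurate. The speck term can lose at most 1 util,
# while one torture costs X * (1 - 1/e) > 1, so any number of specks beats one torture.
print(utility(10**100, 0, X=2.0) > utility(0, 1, X=2.0))   # True

# X below the critical value: commensurate. Enough specks eventually cost more
# than one torture.
print(utility(10**100, 0, X=1.2) > utility(0, 1, X=1.2))   # False
```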
I don’t think this addresses the point in the link. What happens when you go from specks to something slightly more nasty, like a pinch? Or slightly increase the time it takes to get rid of the speck? You ought to raise the disutility limit. Or if you reduce the length of torture, you have to lower the disutility amount from torturing one person. Eventually, the two intersect, unless you are willing to make a sharp qualitative boundary between two very similar events.
Yes, the two intersect. That’s what happens when you make things quantitative. Just because we are uncertain about where two things should, morally, intersect, does not mean that the intersection itself should be “fuzzy.”
The point is that without arbitrarily drawing the specks/torture boundary somewhere between x stabbed toes and x+epsilon stabbed toes the suggested utility function does not work.
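For concreteness, here is a minimal sketch (Python, with a hypothetical severity scale and arbitrary constants, not anything proposed in the thread verbatim) of a utility function with exactly this kind of sharp boundary: harms below some severity threshold saturate, while harms at or above it add up without bound.

```python
import math

# Hypothetical severity scale: 0.0 = dust speck, 1.0 = fifty years of torture.
THRESHOLD = 0.5   # the arbitrary "x stabbed toes" line being argued about

def total_disutility(severity: float, n_people: int) -> float:
    """Total disutility of n_people each suffering a harm of the given severity."""
    if severity < THRESHOLD:
        # Sub-threshold harms saturate: their total disutility never reaches 1.
        return 1.0 - math.exp(-severity * n_people)
    # At or above the threshold, harms aggregate linearly and without bound.
    return 1000.0 * severity * n_people

print(total_disutility(0.01, 10**12))        # ~1.0, the cap: no number of specks exceeds it
print(total_disutility(1.0, 1))              # 1000.0: one person tortured
print(total_disutility(0.51, 10) > 1000.0)   # True: ten just-over-threshold harms outweigh it
```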
Hm, how can I help you see why I don’t think this is a problem?
How about this. The following two sentences contain exactly the same content to me:
“Without arbitrarily drawing the specks/torture boundary somewhere, the suggested utility function does not work.”
“Without drawing the specks/torture boundary somewhere, the suggested utility function does not work.”
Why? Because morality is already arbitrary. Every element is arbitrary. The question is not “can we tolerate an arbitrary boundary,” but “should this boundary be here or not?”
Are you saying that you are OK with having x stabbed toes being incommensurate with torture, but x+1 being commensurate? This would be a very peculiar utility function.
Yes, that is what I am saying. One can deduce from this that I don’t find it so peculiar.
To be clear, this doesn’t reflect at all what goes on in my personal decision-making process, since I’m human. However, I don’t find it any stranger than, say, having torture be arbitrarily 3^3^2 times worse than a dust speck, rather than 3^3^2 + 5.
Sarcasm time: I mean, seriously—are you honestly saying that at 3^3^2 + 1 dust specks, it’s worse than torture, but at 3^3^2 − 1, it’s better? That’s so… arbitrary. What’s so special about those two dust specks? That would be so… peculiar.
As soon as you allow the arbitrary size of a number to be “peculiar,” there is no longer any such thing as a non-peculiar set of preferences. That’s just how consistent preferences work. Discounting sets of preferences on account of “strangeness and arbitrariness” isn’t worth the effort, really.
I don’t mean peculiar in any negative sense, just that it would not be suitable for goal optimization.
Is that really what you meant? Huh.
Could you elaborate?