Well, for my own part I don’t find the two arguments comparable, because they talk about different things.
Harry’s situation, like real-world situations, is about expected utility calculations. He’s asking the question: “given my best estimates of the probabilities of various outcomes to my actions, and of the utility of those outcomes, including my best estimates of my estimates being wrong, what actions provide the most expected utility?”
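For concreteness, here is a minimal sketch of that calculation; the actions, probabilities, and utilities are invented for illustration, and "my estimates being wrong" is folded in as just another weighted outcome:

```python
# Sketch of an expected-utility decision: pick the action that maximizes
# the probability-weighted sum of outcome utilities, under one's own
# (fallible) estimates. All numbers are made up for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    # Each distribution reserves some mass for "my model is wrong".
    "act":     [(0.6, 10.0), (0.3, -5.0), (0.1, -20.0)],
    "abstain": [(0.8, 0.0), (0.1, 3.0), (0.1, -2.0)],
}

best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best)  # "act": expected utility 2.5 vs. 0.1
```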
But DSvT isn’t like that at all. Introduce imperfect information and human cognitive limitations to the dust-specks argument and the whole thing collapses: how do I know there’s actually a choice between torture and lots of dust specks? How do I know how many dust specks there are? How likely is it that whoever gave me this information is lying? And so forth.
This isn’t unique to the dust-specks argument. Any thought experiment that depends for its force on a really really big disutility, but which doesn’t take into account the magnitude of the probability of that disutility or the associated expected disutility, is hard to translate into a world of imperfect information and human cognitive limitations, where probability and expected utility are all we have to work with.
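In symbols: the quantity a real decision-maker actually has to work with is the product of both factors, not the raw magnitude alone,

$$\mathbb{E}[\text{disutility}] = p \cdot D,$$

and on this reading the dust-specks hypothetical stipulates an astronomical $D$ while treating $p$ as 1 by fiat, which is exactly the move that imperfect information disallows.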
To say that more concretely: if we put Harry in the position of absolutely believing that he must choose between 50 years of torture and 3^^^3 dust specks, should Harry shut up and torture? Well, it probably doesn’t matter: the most likely conclusion from that premise is that Harry is insane.
This, to my mind, puts the entire premise of the DSvT post in a very dark light. Given what we know about the real world and the real limitations of practice (as opposed to theory), the only effect of the DSvT post is to make people more likely to torture (or to excuse torture) in the real world, which is exactly where the hypothetical is completely inapplicable. It makes the world a slightly worse place for no benefit.
Or at least none that I can see. Is there a benefit I’m missing?
And if not, does this make it a mini-basilisk? Something that’s true but that everyone’s better off having never read?
The costs and benefits seem fairly analogous to those of “trolley problems,” which is well-travelled ground at this point, so I won’t try to cover it again.
If you can see a benefit to trolley problems in general, it seems you ought to be able to see the same benefit here. Conversely, if you don’t, then it seems you should have the same objection to trolley problems involving death, torture, murder, and other harms.
Personally, I invoke Weber’s Law in these sorts of cases: when a posited delta is smaller than the just-noticeable difference, I stop having faith in anyone’s intuitions about it, including my own. Anyone who wants to compel me with an argument in such a case needs to do more than appeal to my intuition.
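(For reference, Weber’s Law says the just-noticeable difference $\Delta I$ grows in proportion to the baseline stimulus intensity $I$,

$$\Delta I = k \cdot I,$$

where $k$ is an empirically measured constant for each sense modality; the posited per-speck delta here is, by assumption, below that threshold.)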
The 3^^^3 bit makes it qualitatively different from any real-world lose-lose hypothetical. Remember that lose-lose decisions are something people in power, e.g. politicians, have to make every day.
People in power have to decide about actual cases, which are always about expected utility, and in which knock-on effects must be considered. Most trolley problems have more in common with the DSvT scenario than with real-world cases.
But sure, when you add things like 3^^^3 people to a hypothetical, all normal intuitions go out the window.