Torture vs. dust specks: I go for dust specks, because it is a reverse lottery. People derive a lot of utility from fantasizing about winning the lottery. Conversely, the disutility the average person derives from fearing that next time they may be the one tortured is larger than the disutility of the dust speck. That, and sympathetic pain.
Of course it was not part of the original definition that people actually know about it. But from my angle, every even remotely plausible real-life scenario involves people generally knowing about it.
Also, social contract theory and slippery slopes. If the social contract allows one person to be tortured, next time it could be a million. Slippery slopes are not fallacious as long as a mechanism for the slipping can be demonstrated, and the mechanism here is the lack of a categorical ban on torture, that is, a ban with not even a one-person exception. Putting it differently: people doing bad things to each other is part of human nature, so human societies naturally slip towards occasional atrocities; categorical bans are themselves a braking mechanism on that kind of slippery slope, and it is not wise to mess with them. Thus we are all better off with a social contract that categorically forbids torture: the disutility of worrying about a future in which we are not protected by a categorical ban on torture is larger than the disutility of the dust speck.
That really sounds like just fighting the hypothetical. I mean, in practice, if something approximating the experiment were attempted in the real world, your reasoning would be right, but that’s not at all what the thought experiment is about. Do you at least acknowledge that, given that the people involved don’t know about it (and also won’t find out about the torture later), torture is the correct option?
This is pretty hard to answer. For moral/ethical questions, I don’t want to go “pure math” but also want to rely on intuitions, and I cannot really rely on my intuitions here, because they are very much social. As in: immoral is what horrifies a lot of people. I don’t really know how to approach it without relying on such intuitions. Sure, I can calculate the total sum of utils, but how does that quantitative and descriptive result turn into a qualitative and prescriptive worse/better? I am not at all sure that “worse” simply equals the result of a utility calculation. It is not unrelated to it either, of course; my basic intuition, that wrong is whatever horrifies a lot of people, does correlate with utility as well.
I mean, what else is morality if not some sort of social condemnation or approval?
If what you really care about is people condemning or approving, shouldn’t you actually be optimising for that instead of “utils”?
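For concreteness, here is a minimal sketch, in Python, of the “total sum of utils” calculation discussed above. Every number in it (the per-person disutilities, the size of the affected population) is invented purely for illustration and is not taken from the exchange; it only shows how the naive aggregation and the “everyone knows about it” variant can point in different directions.

```python
# Illustrative "sum of utils" comparison for torture vs. dust specks.
# All values are made up; only the relative ordering matters for the sketch.

SPECK_DISUTILITY = 1e-9     # assumed harm of one dust speck, in arbitrary utils
WORRY_DISUTILITY = 1e-6     # assumed per-person harm of living without a categorical ban
TORTURE_DISUTILITY = 1e7    # assumed harm of the one person's torture, same units
N = 10 ** 20                # stand-in for the enormous affected population

def lesser_harm(torture_total: float, specks_total: float) -> str:
    """Return which option produces the smaller total disutility."""
    return "torture" if torture_total < specks_total else "dust specks"

# Naive aggregation, as the thought experiment is usually posed:
specks_total = N * SPECK_DISUTILITY
print("naive sum:", lesser_harm(TORTURE_DISUTILITY, specks_total))

# The variant argued above: if everyone knows, choosing torture also imposes
# a per-person "worry" cost, which can outweigh the per-person speck cost.
torture_total_known = TORTURE_DISUTILITY + N * WORRY_DISUTILITY
print("everyone knows:", lesser_harm(torture_total_known, specks_total))
```

With these particular made-up numbers the naive sum favours torture, while the “everyone knows” variant flips to dust specks as soon as the per-person worry cost exceeds the per-person speck cost; the sketch decides nothing about the prescriptive question raised above.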