Are you perhaps confusing ‘morally wrong’ with ‘a sucky tradeoff that I would prefer not to be bound by’?
Just because torturing one person sucks, just because we find it abhorrent, does not mean that it isn’t the best outcome in various situations. If your definition of ‘moral’ is “best outcome when all things are considered, even though aspects of it suck a lot and are far from ideal”, then yes, torturing someone can in fact be moral. If your definition of ‘moral’ is “those things which I find reprehensible”, then quite probably you can never find torturing someone to be moral. However, there are scenarios where it may still be necessary, or the best option.
Are you perhaps confusing ‘morally wrong’ with ‘a sucky tradeoff that I would prefer not to be bound by’?
Nope...because...
quite probably you can never find torturing someone to be moral. However, there are scenarios where it may still be necessary, or the best option.
...because I believe that torturing someone could still instrumentally be the right thing to do on consequentialist grounds.
In this scenario, 10^100 people terminally value torturing one person, but I do not care about their preferences, because their preference is an evil one.
However, in an alternate scenario, if I had to choose between 10^100 people getting mildly hurt or 1 person getting tortured, I’d choose the one person getting tortured.
In these two scenarios, the preference weights are identical, but in the first scenario the preference of the 10^100 people is evil and therefore irrelevant in my calculations, whereas in the second scenario the needs of the 10^100 outweigh the needs of the one.
This is less a discussion about torture, and more a discussion about whose/which preferences matter. Sadistic preferences (involving real harm, not the consensual kink), for example, don’t matter morally—there’s no moral imperative to fulfill those preferences, no “good” done when those preferences are fulfilled and no “evil” resulting from thwarting those preferences.
I think you should temporarily taboo ‘moral’, ‘morality’, and ‘evil’, and simply look at the utility calculations. 10^100 people terminally value something that you ascribe zero or negative value to; therefore, their preferences either do not matter to you, or actively make your universe worse from the standpoint of your utility function.
Which preferences matter? Yours matter to you, and theirs matter to them. There’s no ‘good’ or ‘evil’ in any absolute sense, merely different utility functions that happen to conflict. There’s no utility function which is ‘correct’, except by some arbitrary metric, of which there are many.
Consider another hypothetical utility function: The needs of the 10^100 don’t outweigh the needs of the one, so we let the entire 10^100 suffer when we could eliminate that suffering by inconveniencing one single entity. Neither you nor the 10^100 are happy with this one, but the person about to be tortured may think it’s just fine and dandy...
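To make “simply look at the utility calculations” concrete, here is a minimal sketch in Python with made-up numbers; none of the figures or names below come from the thread, only the 10^100-versus-one comparison does. It shows that the two scenarios carry the same raw preference weights, and that the disagreement reduces to whether sadistic preferences get any weight at all.

```python
# Toy sketch (hypothetical values, not from the original discussion):
# how the same headcounts yield different verdicts depending on whether
# sadistic preferences are counted at all.

N = 10**100        # the 10^100 people
TORTURE = -1000    # disvalue of torturing the one person
MILD_HURT = -1     # disvalue of mildly hurting one of the many
SADISTIC_JOY = 1   # value each sadist places on the torture happening

def aggregate(per_person, count, weight=1.0):
    """Sum a per-person (dis)value over `count` people at the given weight."""
    return weight * per_person * count

# Scenario 1: 10^100 people terminally value torturing the one person.
count_all     = aggregate(SADISTIC_JOY, N) + TORTURE               # huge positive
zeroed_sadism = aggregate(SADISTIC_JOY, N, weight=0.0) + TORTURE   # just -1000

print(count_all > 0)      # True  -> torture "maximizes utility" if sadism counts
print(zeroed_sadism > 0)  # False -> torture is pure loss if it doesn't

# Scenario 2: torture the one person, or mildly hurt all 10^100.
# No preference is zeroed out here, so both weightings agree.
print(TORTURE > aggregate(MILD_HURT, N))  # True: -1000 beats -10^100
```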
...I don’t denotatively disagree with anything you’ve said, but I also think you’re sort of missing the point and forgetting the context of the conversation as it was in the preceding comments.
We all have preferences, but we do not always know what our own preferences are. A subset of our preferences (generally those which do not directly reference ourselves) is termed “moral preferences”. The preceding discussion between me and Peter Hurford is an attempt to figure out what our preferences are.
In the above conversation, words like “matter”, “should” and “moral” are understood to mean “the shared preferences of Ishaan, Dentin, and Peter_Hurford, which they agree to define as moral”. Since we are all human (and similar in many other ways beyond that), we probably have very similar moral preferences...so any disagreement that arises between us is usually due to one or both of us inaccurately understanding our own preferences.
There’s no ‘good’ or ‘evil’ in any absolute sense
This is technically true, but it’s also often a semantic stopsign which derails discussions of morality. The fact is that the three of us humans have a very similar notion of “good”, and can speak meaningfully about what it is...the implicitly understood background truths of moral nihilism notwithstanding.
It doesn’t do to exclaim “but wait! good and evil are relative!” during every moral discussion...because here, between us three humans, our moral preferences are pretty much in agreement and we’d all be well served by figuring out exactly what those preferences are. It’s not like we’re negotiating morality with aliens.
Which preferences matter? Yours matter to you
Precisely...my preferences are all that matter to me, and our preferences are all that matter to us. So if 10^100 sadistic aliens want to torture...so what? We don’t care if they like torture, because we dislike torture and our preferences are all that matter. Who cares about overall utility? “Morality”, for all practical purposes, means shared human morality...or, at least, the shared morality of the humans who are having the discussion.
“Utility” is kind of like “paperclips”...yes, I understand that in the best case scenario it might be possible to create some sort of construct which measures how much “utility” various agent-like objects get from various real world outcomes, but maximizing utility for all agents within this framework is not necessarily my goal...just like maximizing paperclips is not my goal.
So, I’m curious… can you unpack what you mean by “temporarily” in this comment?
For the purposes of this conversation at least. I’ve largely got them taboo’d in general because I find them confusing and full of political connotations; I suspect at least some of that is the problem here as well.