That raises an obvious question: what do you actually do if you find yourself in a Sophie’s choice, especially if the result of the null or default choice is more monstrous to you than the results of the other choices? Refusing to consider a class of decision theory problems is tantamount to precommitting to an unconsidered answer should one of them arise.
Of course, in most cases people do seem to consider horrific choices once actually faced with one; I therefore conclude that the popular response of refusing to analyze such problems is more about signaling than anything else.
Well, the correct answer could be that I don’t know what I would do—and even if I knew that I would probably act in a certain way, it wouldn’t be the outcome of any rational deliberation, but just a whimsical reflex from my brain overloaded with the stress of the situation.
You’ll probably agree that there are situations where this would be the only realistic answer. For example, suppose you were about to be shot in a minute, and the executioner showed you two bullets, told you to choose which one will end up in your head, and threatened to kill you in a more painful and gruesome way if you refused to make your choice clear. What does any decision theory say about this situation? It’s absurd to insist on a rational rule for decision-making here.
Now of course, you can say that I chose an example where, whatever the calculus, the numbers end up being equal, since the two options are identical in every relevant respect. But why should we believe that, as long as the options are sufficiently different, there must be a way to impose an ordering of desirability on them? Why wouldn’t the “answer undefined” response be applicable in a much broader class of situations than just those where consequentialist calculations evaluate all options the same? What property of the universe or logic (or something else?) demands otherwise?
I agree: in some cases, one can’t conclude which of two awful options is least bad (or one can conclude that the difference between them isn’t likely to be worth the effort of investigating further, under the circumstances), and in that case, a random selection between such options is as good as any strategy.
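To make that concrete, here is a minimal sketch in Python (purely illustrative; the `choose` function, the `tie_tolerance` threshold, and the utility numbers are my own assumptions, not anything anyone above proposed) of a decision rule that picks the best option when one dominates and falls back to random selection when the top options are indistinguishable:

```python
import random

def choose(options, utility, tie_tolerance=0.0):
    """Pick the option with the highest utility; if several options
    are indistinguishable within tie_tolerance, select among them
    at random -- the 'answer undefined' case discussed above."""
    scored = sorted(options, key=utility, reverse=True)
    best = utility(scored[0])
    # All options whose utility is within tie_tolerance of the best
    # are treated as tied: no ordering of desirability among them.
    tied = [o for o in scored if best - utility(o) <= tie_tolerance]
    return random.choice(tied)

# The executioner's two bullets: identical in every relevant respect,
# so any utility function scores them equally and the pick is random.
print(choose(["bullet A", "bullet B"], utility=lambda o: -1.0))
```

The point of the sketch is only that the tie case is handled explicitly rather than papered over: the rule admits outright that no ordering exists among the tied options, instead of pretending to derive one.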
However, ISTM that most trolley problems don’t fall into that category, and that a policy of refusing to consider them on principle is probably a signaling phenomenon (one doesn’t want to appear to endorse killing the innocent, even in such a farfetched hypothetical).
That, however, is more likely to manifest itself in a decisive anti-utilitarian answer, not feigned indecisiveness. People who want to signal that they won’t endorse killing the innocent will say that it’s wrong to actively kill someone even if it saves other lives, so they wouldn’t push the fat man, and so on; usually this is an honest statement of how they would really act in practice. Expressions of moral intuitions that are loaded with signaling value are usually felt sincerely, and acted upon readily. Similarly, people who refuse to endorse any alternative (who are, I believe, a small minority in the general public) sincerely view the situation as akin to the bullet choice. It might ultimately be due to signaling, but note that among ordinary folks, this sends a very bad signal: it’s not at all good to be perceived as morally indecisive and lacking in principles.
That said, I’d say your theory applies to enthusiastic consequentialists too, and actually more so. I have the impression that many people who bite moral bullets based on various consequentialist theories do it for signaling value. They want to signal their rationality, adherence to logic rather than emotion, bravery in the face of hostile reactions from people whose moral intuitions get violated, etc. In fact, I’d venture to say that the signaling here is more transparent, since unlike the never-kill-the-innocent folks, they likely wouldn’t be ready to follow through on what they say in practice [*].
--
[*] - This doesn’t contradict what I wrote above (that signal-loaded moral statements are typically acted upon readily), because these people are signaling to a very different audience than ordinary folks, to whom that statement applies.
IAWYC, except that being perceived as indecisive is only a downside when trying to appear high-status within a group. Signaling moral conflict and indecision among peers or superiors might not get you admired, but it’s a safe choice when the options are ugly (until there’s a group consensus and your conformity is sought).
But yes, again, there’s signaling in both directions, and that’s all it amounts to for most of us talking about trolley problems. For some people (e.g. heads of state), though, these decisions actually have to be made now and then; I’d prefer that some systematic decision criteria exist for those cases, and I find it interesting to talk about them in the abstract.