You two seem to be making slightly different points here. Matt, I take it you accept that there is some reason to sacrifice yourself (not doing so would be “morally weak”) but that failing to do so would not be blameworthy. That sounds like a fairly mainstream view. In contrast, MrHen seems to be making the stronger claim that there is no reason to save the others at all (unless he has a personal investment in said others).
The idea that [the King is responsible for the deaths] screens off the possibility that [the 11th man is responsible for the deaths] seems to be a version of the single-true-cause fallacy. Sure, the king is responsible, but given the king’s actions, it’s the 11th man’s choice that directly determines whether the others will live or not.
If you want to prioritize your own life over theirs then you are free to do so, but I think you should own up to the fact that that’s ultimately what you’re doing. Disclaiming responsibility entirely seems like a convenient excuse designed to let you get what you want without having to feel bad about it.
I have to read the single-true-cause fallacy before I can fully reply, but here is a quick ditty to munch on until then:
Sure, the king is responsible, but given the king’s actions, it’s the 11th man’s choice that directly determines whether the others will live or not.
I disagree with this. The eleventh’s choice is completely irrelevant. The king has a decision to make and just because he makes it the same every single time does not mean the actual decision is different the next time around.
The similar example where the king puts a gun in the eleventh’s hand and says “kill them or I kill you” is when the choice actually becomes the eleventh’s. In this scenario, the eleventh man has to choose to (a) kill the ten or (b) not kill the ten. This is a moral decision.
Of note, whoever actually has to kill the ten has this choice and will probably choose the selfish route. If the king shares the blame with anyone, it will be whoever actually kills the ten. If the eleventh is morally responsible, then everyone else watching the event is morally responsible, too.
I don’t understand what coherent theory of causation could make this statement true.
If they could stop it, then yes, they are.
The issue is not causality. The issue is moral responsibility. If I go postal and start shooting people as they run past my house and later tell the police that it was because my neighbor pissed me off, the neighbor may have been one (of many) causes but should not be held morally responsible for my actions.
Likewise, if the king asks someone a question and, in response to the answer, kills ten people, I do not see how the asking of the question makes any difference in the assignment of moral responsibility.
Causality does not imply moral responsibility.
Also, having read the link you gave earlier, I can now comment on this:
The idea that [the King is responsible for the deaths] screens off the possibility that [the 11th man is responsible for the deaths] seems to be a version of the single-true-cause fallacy. Sure, the king is responsible, but given the king’s actions, it’s the 11th man’s choice that directly determines whether the others will live or not.
“Responsible” has two meanings. The first is a cause-effect sense of “these actions led to these other actions.” This is the same as saying a bowling ball is responsible for the bowling pins falling over.
The other is a moral judgement stating “this person should be held accountable for this evil.” The bowling ball holds no moral responsibility because it was thrown by a bowler.
I am not claiming that the eleventh man was not part of the causal chain that resulted in ten people dying. I am claiming that the eleventh man holds no moral responsibility for the ten people dying. I am not trying to say that the king is the single-true-cause. I am claiming that the king is the one who should be held morally responsible.
To belabor this point with one more example: if I rig a door to blow up when opened, and Jack opens the door while standing next to Jill, they are both reduced to goo. Jack is causally responsible for what happened because he opened the door. He is not, however, morally responsible.
The question of when someone does become morally responsible is tricky and I do not have a good example of when I think the line is crossed. I do not, however, pass any blame on the eleventh man for answering a question to which there is no correct answer.
The issue is not causality. The issue is moral responsibility.
Agreed. But I think if you want to separate the two, you need a reasonable account of the distinction. One plausible account relies on reasonably foreseeable consequences to ground responsibility, and this is pretty much my view. It accounts easily for the neighbor, bowling ball, and Jack and Jill cases, but still implies responsibility for the 11th man.
I can accept a view that says that, all things considered, the king has a greater causal influence on the outcome of the 11th man case, and thus bears much greater moral responsibility for it than does the 11th man. But (and this was the point of the no-single-true-cause analogy) I see no reason why this should imply that the 11th man has no responsibility whatsoever, given that the death of 10 innocent others is a clearly foreseeable consequence of his choice.
I still think this is a convenient conclusion designed to let you be selfish without feeling like you’re doing anything wrong.
P.S. FWIW, yes I pretty much do think you’re evil if you’re not willing to sacrifice $100 to save 10 lives in your hostage example. I can understand not being willing to die, even if I think it would be morally better to sacrifice oneself. (And I readily confess that it’s possible that I would take the morally wrong/weak choice if actually faced with this situation.) But for $100 I wouldn’t hesitate.
One plausible account relies on reasonably foreseeable consequences to ground responsibility, and this is pretty much my view.
I can understand that. I have not dug quite so deeply into this area of my ethical map so it could be representing the territory poorly. What few mental exercises I have done have led me to this point.
I guess the example that really puts me in a pickle is asking what would happen if Jack knew the door was rigged but opened it anyway. It makes sense that Jack shares the blame. There seems to be something in me that says the physical action weighs against Jack.
So, if I had to write it up quickly:
Being a physical cause in a chain of events that leads to harm
While knowing the physical action has a high likelihood of leading to harm
Is evil
But, on the other hand:
Being a non-physical cause in a chain of events that leads to harm
While knowing the non-physical action has a high likelihood of leading to harm
Is not necessarily evil but can be sometimes
Weird. That sure seems like an inconsistency to me. Looks like I need to get the mapmaking tools out. The stickiness of the eleventh man is that the king is another moral entity and the king somehow shrouds the eleventh from actually making a moral choice. But I do not have justification for that distinction.
There may yet be justification, but working backwards is not proper. Once I get the whole thing worked out I will report what I find, if you are interested.
Good luck with the map-making! I’d certainly be interested to know what you find, if and when you find it.
My use of the phrase ‘morally weak’ was to describe how I think many/most people would view the choice, not my own personal judgement. I agree with MrHen that the 11th man’s choice is not morally wrong. I was contrasting that with what I think would be the mainstream view that the choice is morally wrong but understandable and not deserving of punishment.
To me this is similar to the trolley problems where you are supposed to choose between taking action and killing one person to save 10 or taking no action and allowing the 10 to die. The one person to be sacrificed is yourself, however. I wouldn’t kill the one to save the 10 either (although I view that as more morally wrong than sacrificing yourself). I also generally place much lower moral weight on harm caused by inaction than harm caused by action, and the forced choice scenario here presents the 11th man with a situation that I think is similar to one of causing harm by inaction.
Sorry, my bad. Thanks for clearing that up.
As to the act-omission distinction, it would be simple enough to stipulate that the default option is that you die unless you tell the king to kill the other ten. Does this change your willingness to die?
No, that wouldn’t change my decision. It’s the not-sacrificing-your-life that I’m comparing with causing harm by inaction (the inaction being the not-sacrificing) rather than anything specific about the way the question is phrased.
The agency of the king does make a relevant difference in this scenario in my view. It is not exactly equivalent to a scenario where you could sacrifice your life to save 10 people from a fire or car crash. Although I don’t think there is a moral obligation in that case either I do consider the difference morally relevant.
Suppose the king has 10 people prepared to be hanged. They are in the gallows with nooses around their necks, standing on a trap door. The king shows you a lever that will open the trap door and kill the 10 victims. The king informs you that if you do not pull the lever within one hour, the 10 people will be freed and you will be executed.
Here the king has set up the situation, but you will be the last sentient being capable of moral reasoning in the causal chain that kills 10 people. Is your conclusion different in this scenario?
The king here is more diabolical and the scenario you describe is more traumatic. I believe it does change the intuitive moral response to the scenario. I don’t believe it changes my conclusion of the morality of the act. I feel that I’d still direct my moral outrage at the king and absolve the 11th man of moral responsibility.
This is where these kinds of artificial moral thought experiments start to break down though. In real situations analogous to this I believe the uncertainty in the outcomes of various actions (together with other unspecified details of the situation) would overwhelm the ‘pure’ decision made on the basis of the thought experiment. I’m unconvinced of the value of such intuition pumps in enhancing understanding of a problem.
Why is this where the thought experiments suddenly start to break down? Sure, it’s a less convenient world for you, but I don’t see why it’s any more artificial than the original problem, and you didn’t seem to take issue with that.
I have taken issue with the use of thought experiments generally in previous comments, partly because it seems to me that they start to break down rapidly when pushed further into ‘least convenient world’ territory. I’m skeptical in general of the value of thought experiments in revealing philosophical truths of any kind, ethical or otherwise. They are often designed by construction to trigger intuitive judgements based on scenarios so far from actual experience that those judgements are rendered highly untrustworthy.
I answered the original question to say that yes, I did agree that the 11th man was not acting immorally here. I suspect this particular thought experiment is constructed as an intuition pump to generate the opposite conclusion, and to the extent that the first commenter is correct that the view that the 11th man has done nothing immoral is a minority position, it would seem it serves its purpose.
I’ve attempted to explain why I think the intuition that this is morally questionable is generated and why I think it’s not to be fully trusted. I don’t intend to endorse the use of such thought experiments as a good method for examining moral questions though.
Fair enough. It was mainly the appearance of motivated stopping that I was concerned with.
While I share some general concerns about the reliability of thought experiments, in the absence of a better alternative, the question doesn’t seem to be whether we use them or not, but how we can make best use of them despite their potential flaws.
In order to answer that question, it seems like we might need a better theory of when they’re especially likely to be poor guides than we currently have. It’s not obvious, for example, that their information content increases monotonically in realism. Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.*
As well as trying to frame scenarios in ways that reduce noise/bias in our intuitions, we can also try to correct for the effect of known biases. A good example would be adjusting for scope insensitivity. But we need to be careful about coming up with just-so stories to explain away intuitions we disagree with. E.g. you claim that the altruist intuition is merely a low-cost signal; I claim that the converse is merely self-serving rationalization. Both of these seem like potentially good examples of confirmation bias at work.
Finally, it’s worth bearing in mind that, to the extent that our main concern is that thought experiments provide noisy (rather than biased) data, this could suggest that the solution is more thought experiments rather than fewer (for standard statistical reasons).
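To make that statistical point concrete, here is a minimal sketch (the “true value” and noise level are made-up numbers, purely for illustration): if each thought experiment gives a noisy but unbiased reading of the same underlying judgment, averaging more of them shrinks the error roughly as 1/sqrt(n).

```python
import random
import statistics

TRUE_VALUE = 1.0   # the underlying judgment we are trying to read off (assumed)
NOISE_SD = 2.0     # noise in any single thought experiment (assumed unbiased)

def noisy_intuition():
    """One thought experiment: the true value plus zero-mean noise."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

random.seed(0)
for n in (1, 10, 100):
    # How much does the average of n thought experiments bounce around?
    estimates = [statistics.mean(noisy_intuition() for _ in range(n))
                 for _ in range(500)]
    print(f"n={n:3d}  spread of the averaged estimate = {statistics.stdev(estimates):.2f}")
# The spread falls roughly as 1/sqrt(n) -- hence more thought experiments,
# not fewer, so long as the errors really are noise rather than bias.
```

Bias, of course, doesn’t average away, which is why the noisy-versus-biased distinction matters here.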
* And even if information content did increase with realism, realism doesn’t seem to correspond in any simple way to convenience (as your comments seem to imply). Not least because convenience is a function of one’s favourite theory as much as it is a function of the postulated scenario.
I would be interested in hearing more on this subject. It sounds similar to Hardened Problems Make Brittle Models. Do you have any good jumping-off points for further reading?
I don’t, but I’d second the call for any good suggestions.
Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.
I don’t consider moral intuitions simple at all though. In fact, in the case of morality I have a suspicion that trying to apply principles derived from simple thought experiments to making moral decisions is likely to produce results roughly as good as trying to catch a baseball by doing differential equations with a pencil. It seems fairly clear to me that our moral intuitions have been carefully honed by evolution to be effective at achieving a purpose (which has nothing much to do with an abstract concept of ‘good’) and when a simplified line of reasoning leads to a conflict with moral intuitions I tend to trust the intuitions more than the reasoning.
There seem to be cases where moral intuitions are maladapted to the modern world and result in decisions that appear sub-optimal, either because they directly conflict with other moral intuitions or because they tend to lead to outcomes that are worse for all parties. I place the evidentiary bar quite high in these cases though—there needs to be a compelling case made for why the moral intuition is to be considered suspect. A thought experiment is unlikely to reach that bar. Carefully collected data and a supporting theory are in with a chance.
I am also wary of bias in what people suggest should be thrown out when such conflicts arise. If our intuitions seem to conflict with a simple conception of altruism, maybe what we need to throw out is the simple conception of altruism as a foundational ‘good’, rather than the intuitions that produce the conflict.
I confess to being somewhat confused now. Your previous comment questioned the relevance of moral intuitions generated by particular types of thought experiments, and argued (on what seem to me pretty thin grounds) against accepting what seemed to be the standard intuition that the 11th man’s not-sacrificing is morally questionable.
In contrast, this comment extols the virtues of moral intuitions, and argues that we need a compelling case to abandon them. I’m sure you have a good explanation for the different standards you seem to be applying to intuitive judgments in each case, but I hope you’ll understand if I say this appears a little contradictory at the moment.
P.S. Is anyone else sick to death of the baseball/differential equations example? I doubt I’ll actually follow through on this, but I’m seriously tempted to automatically vote down anyone who uses it from now on, just because it’s becoming so overused around here.
P.P.S. On re-reading, the word “simple” in the sentence you quoted was utterly redundant. It shouldn’t have been there. Apologies for any confusion that may have caused.
I made a few claims in my original post: i) I don’t think the 11th man is acting immorally by saving himself over the 10; ii) most people would think he is acting immorally; iii) most people would choose to save themselves if actually confronted with this situation; iv) most people would consider the 11th man’s moral failing to be forgivable. I don’t have hard evidence for any claim except i), they are just my impressions.
The contradiction I see here is mostly in the conflict between what most people say they would do and what they would actually do. One possible resolution of the conflict is to say that self-sacrifice is the morally right thing to do but that most people are morally weak. Another possible resolution is to say that self-sacrifice is not a morally superior choice and therefore most people would actually not be acting immorally in this situation by not self-sacrificing. I lean towards the latter and would attempt to explain the conflict by saying that people see more value in signaling altruism cheaply (by saying they would self-sacrifice in an imaginary scenario) than in actually being altruistic in a real scenario. There is a genuine conflict here but I would resolve it by saying people have a tendency to over-value altruism in hypothetical moral scenarios relative to actual moral decisions. I actually believe that this tendency is harmful and leads to worse outcomes but a full explanation of my thinking there would be a much longer post than I have time for right now.
Conflicts can exist between different moral intuitions when faced with an actual moral decision and resolving them is not simple but that’s a different case than conflicts between intuitions of what imaginary others should do in imagined scenarios and intuitions about what one should do oneself in a real scenario.
If you have a better alternative to the baseball/differential equations example I’d happily use it. It’s the first example that sprang to mind, probably due to its being commonly used here.
Your argument seems to me to conflate judgments that “X-ing is wrong” with predictions that one would not X if faced with a particular choice in real life.
If I say “X-ing is wrong, but actually, if ever faced with this situation I would quite possibly end up X-ing because I’m selfish/weak” (which is what I and others have said elsewhere) then (a) there’s no conflict to resolve; and (b) it doesn’t make much sense to claim that my judgment that “X is wrong” is a cheap signal of altruism. In fact I’ve just signaled the opposite.
Now, if people change their moral judgments from “X-ing is wrong” to “X-ing is permissible”, then I agree that there’s a conflict to resolve. But it seems that cognitive dissonance provides an explanation of this behavior at least as good as cheap talk.
FWIW, if you want a self-interested explanation of the stated judgment that “X-ing is wrong”, I wonder whether moral censure (i.e. trying to convince others that they shouldn’t X, even though you will ultimately X) would be a better one than signaling. Not necessarily mutually exclusive, I guess.
Your argument seems to me to conflate judgments that “X-ing is wrong” with predictions that one would not X if faced with a particular choice in real life.
Judgements that a choice is morally wrong are clearly not the same thing as predictions about whether people would make that choice. The way I view morality, though, a wide gulf between the two is indicative of a problem to be resolved. I see the purpose of morality as providing a framework for solving something analogous to an iterated prisoner’s dilemma. If we can all agree to impose certain restrictions on our own actions because we all expect to do better if everyone sticks to the rules, then we have a system of morality.
Humans have a complex interplay of instinctive moral intuitions and cultural norms that together form a moral framework that exists because it provides a reasonably stable solution to living in mutually beneficial societies. That doesn’t mean it can’t be improved, just that its very existence implies that it works reasonably well.
The problem then with a moral dilemma that appears to present a wide gap between what people say should be done and what people would actually do is that it suggests a flaw in the moral framework. A stable framework will generally require that decisions that people can agree are right (in that we’d expect on average to be better off if we all followed them) are also decisions that people can plausibly commit to taking if faced with the problem. It’s like the pre-commitment problem discussed before on Less Wrong. You might wish to argue for an idealized morality that sets standards for what people should do that are not what most people would do, but then you have to make a plausible case for why what people actually do is wrong. Further, I’d argue you have to make a case for how your system could actually be implemented with actual people in a stable fashion—an idealized morality that is not achievable with actual people is not very interesting to me.
Ultimately I don’t take a utilitarian view of morality—that what is ‘good’ is what maximizes utility across all agents. I take an ‘enlightened self interest’ view—that what is ‘good’ is what all agents can agree is a framework that will tend to lead to better expected outcomes for each individual if each individual constrains his own immediate self interest in certain ways.
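To make the iterated-prisoner’s-dilemma analogy concrete, here is a minimal sketch (the payoff numbers and strategies are illustrative assumptions, not anything I’d defend in detail): agents who constrain their immediate self-interest by reciprocating cooperation each end up better off, by their own lights, than agents who always defect.

```python
PAYOFFS = {  # (my move, their move) -> my payoff; standard PD ordering
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the other player's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return each player's total payoff over `rounds` interactions."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual restraint
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): exploitation is capped
```

None of this says tit-for-tat is the right morality; it just illustrates the sense in which mutual constraint can be better for each individual than mutual defection, which is what I mean by ‘enlightened self interest’.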
There are heaps and heaps of consequentialist/utilitarian views that don’t maximize utility uncritically across everybody. It sounds like you prefer something in the neighborhood of agent-favoring morality, but ethical egoism is a consequentialist view too.
Based on discussions I’ve had here I get the impression that most people consider ‘utilitarianism’, unqualified, to imply equal weighting for all people in the utility function to be maximized. Even where equal weighting is not implied (the existence of the ‘utility monster’ as a problem for some variants acknowledges that weights are not necessarily equal) it seems that utilitarianism has a unique weighting for all agents and that what is ‘right’ is what maximizes some globally agreed upon utility function. I don’t accept either premise so I’m fairly sure I’m not a utilitarian.
It seems to me that most consequentialist views fail to take into account sufficiently the problem of the implementability and stability of their moral schemes in actual human (or other) societies. I haven’t found a description of an ethical theory that I feel comfortable identifying my views with so far, though ethical egoism seems somewhat close from the little I’ve read on Wikipedia (it’s what I ended up putting down on Yvain’s survey).
It seems to me that most consequentialist views fail to take into account sufficiently the problem of the implementability and stability of their moral schemes in actual human (or other) societies.
If a scheme isn’t implementable or stable, then it doesn’t maximize welfare, so utilitarianism does not recommend it. Utilitarianism describes a goal, not a method.
I don’t consider myself a utilitarian because I don’t agree with the goals of any of the variants I’ve seen described.
I’m not sure whether I consider myself a consequentialist because while I think that ultimately outcomes are important, I don’t see enough attention paid to issues of implementability and stability in many descriptions of consequentialist views I’ve read.
For example, it seems that some (not all) consequentialist ethics consider the ‘rightness’ of an action to be purely a function of its actual consequences, thus making it possible for an attempted murder to be a morally good act because it has an unintended good consequence and an attempt at assistance to be a morally bad act because it has an unintended bad consequence. Other variants of consequentialist ethics (rule consequentialism, which seems closer to something I would feel comfortable identifying with) recognize the impossibility of perfect prediction of outcomes and so associate the ‘good’ with rules that tend to produce good outcomes if followed. Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
You may find this paper on consequentialism and decision procedures interesting.
Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
That’s okay, nobody else knows either. (People have guesses, but most of them exclude things that seem like they should be included or vice-versa.) The only way to get a handle on the word seems to be to listen to people use it a lot and sort of triangulate.
They are often designed by construction to trigger intuitive judgments based on scenarios so far from actual experience that those judgments are rendered highly untrustworthy.
Agreed; however it’s important to distinguish between this sort of appeal-to-intuition and the more rigorous sort of thought experiment that appeals to reasoning (e.g. Einstein’s famous Gedankenexperimente).
I don’t believe it changes my conclusion of the morality of the act.
Given that your defense of the morality was based on the inaction of not self sacrificing, and that in this scenario inaction means self sacrifice and you have to actively kill the other 10 people to avoid it, what reasoning supports keeping the same conclusion?
I’m comparing the inaction to the not-self-sacrificing, not to the lack of action. I attempted to clarify the distinction when I said the similarity was not ‘anything specific about the way the question is phrased’.
The similarity is not about the causality but about the cost paid. In many ‘morality of inaction’ problems the cost to self is usually so low as to be neglected, but in fact all actions carry a cost. I see the problem not as primarily one of determining causality but more as a cost-benefit analysis. Inaction is usually the ‘zero-cost’ option, while action carries a cost (which may be very small, like pressing a button, or extremely large, like jumping in front of a moving trolley). The benefit is conferred directly on other parties and indirectly on yourself according to what value you place on the welfare of others (and possibly according to other criteria).
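As a rough sketch of that cost-benefit framing (the weight on others’ welfare and all the numbers below are made-up assumptions, just to show the shape of the trade-off):

```python
def value_of_acting(cost_to_self, benefit_to_others, w=0.1):
    """Net value of acting, with inaction as the zero-cost baseline.

    w is how much weight you place on others' welfare relative to your own;
    it is an illustrative parameter, not a claim about what the weight should be.
    """
    return w * benefit_to_others - cost_to_self

# Pressing a button to spare someone serious harm: tiny cost, clear benefit.
print(value_of_acting(cost_to_self=0.01, benefit_to_others=10))   # positive
# Jumping in front of the trolley: the cost is (roughly) everything you have.
print(value_of_acting(cost_to_self=100, benefit_to_others=100))   # negative
```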
I think our moral intuition is primed to distinguish between freely chosen actions taken to benefit ourselves that ignore fairly direct negative consequences on others (which we generally view as morally wrong) and refraining from taking actions that would harm ourselves but would fairly directly benefit others (which may or may not be viewed as morally wrong but are generally seen as ‘less wrong’ than the former). We also seem primed to associate direct action with agency and free choice (since that is usually what it represents) and so directly taken actions tend to lead to events being viewed as the former rather than the latter.
I believe the moral ‘dilemma’ represented by carefully constructed thought experiments like this represents a conflict between our ‘agency recognizing’ intuition that attempts to distinguish directly taken action from inaction and our judgement of sins of commission vs. omission. Given that the unusual part of the dilemma is the forced choice imposed by a third party (the evil king) it seems likely that the moral intuition that is primed to react to agency is more likely to be making flawed judgements.
I see the problem not as primarily one of determining causality but more as a cost-benefit analysis.
This makes sense to me, but it seems to run counter to the nature of MrHen’s original claim that the issue is lack of responsibility. For example, if it’s all about CBA, then you would presumably be more uneasy about MrHen’s hostage example ($100 vs. 10 lives) than he seems to be. Presumably also you would become even more uneasy were it $10, or $1, whereas MrHen’s argument seems to suggest that all of this is irrelevant because you’re not responsible either way.
Am I understanding you correctly?
In this example I wouldn’t hold someone morally responsible for the murders if they failed to pay $100 ransom—that responsibility still lies firmly with the person taking the hostages. Depending on the circumstances I would probably consider it morally questionable to fail to pay such a low cost for such a high benefit to others though. That’s a little different to the question of moral responsibility for the deaths however.
Note that I also don’t consider an example like this morally equivalent to not donating $100 to a charity that is expected to save 10 lives, as a utilitarian/consequentialist view of morality would tend to hold.
Well, you are certainly understanding me correctly.
OK, I think I’m sort of with you now, but I just want to be clear about the nature of the similarity claim you’re making. Is it that:
1. you think there’s some sort of justificatory similarity between not-sacrificing and harm-by-inaction, such that those who are inclined to allow harm-by-inaction should therefore also be more willing to allow not-sacrificing; or is it just that
2. you just happen to hold both the view that harm-by-inaction is allowed and the view that not-sacrificing is allowed, but the justifications for these views are independent (i.e. it’s merely a contingent surface similarity)?
I originally assumed you were claiming something along the lines of 1., but I’m struggling to see how such a link is supposed to work, so maybe I’ve misinterpreted your intention.
you think there’s some sort of justificatory similarity between not-sacrificing and harm-by-inaction, such that those who are inclined to allow harm-by-inaction should therefore also be more willing to allow not-sacrificing
Yes. I’d generally hold that it is not morally wrong to allow harm-by-inaction: there is not a general moral obligation to act to prevent harm. In real moral dilemmas there is a continuum of cost to the harm-preventing action, and when that cost is low relative to the harm prevented it would be morally good to perform that action but not morally required. At extremely low cost relative to harm things become a little fuzzy and inaction borders on an immoral choice. When the cost of the action is extremely high (likely or certain self-sacrifice) then there is no fuzziness and inaction is clearly morally allowed (declining to sacrifice yourself by jumping in front of a trolley to save 10 is not immoral).
Given that inaction is morally permitted in the trolley case, I have difficulty imagining a coherent moral system that would then say that it was not permissible for the 11th man to save himself. The evil king does change the problem, but I can only see it making not-sacrificing more rather than less morally acceptable. I can conceive of coherent moral systems that would allow the 11th man to save himself but would require the trolley jumper to sacrifice himself. I have difficulty conceiving of the reverse. That’s not to say that one doesn’t exist, it’s just sufficiently removed from my own moral sense that it doesn’t present itself to me.
OK, I see where you’re coming from now. (We still have strongly differing intuitions about this, but that’s a separate matter.)
This thought experiment among other things convinces me that omission vs. commission is a sliding scale.
That would fall in the territory I describe as fuzzy above. At a sufficiently low cost inaction begins to seem morally questionable. That is largely driven by intuition though and I’m skeptical of attempts to scale it up and draw moral conclusions. I believe there are reasons the intuition exists that do not scale up simply. In other words, scaling up from this to conclude that if a very small cost is obligatory to save a single person then a very large cost is obligatory to save a million people is faulty reasoning in my opinion.