An obvious third alternative to the trolley problem: if you yourself are fat enough to stop the trolley and save the 5 people, then you jump onto the tracks yourself; you don’t push the other guy.
But if you’re not fat enough, then yes, of course you push the fat guy onto the tracks, without hesitation. And you plead guilty to his murder. And you go to jail. One person dying and one person going to jail is preferable to 5 people dying. Or, if you’re so afraid of going to jail that you would rather die, then you can also jump onto the tracks after pushing the fat man, to be extra sure of being able to stop the trolley.
Technically, you should jump onto the tracks if you think that by doing so you can increase the probability of saving the 5 people by more than 20 percentage points.
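Presumably the arithmetic behind that figure is something like the following, under the simplifying assumptions that every life (yours included) is weighted equally and that the only added cost of jumping is your own death: if jumping raises the probability that the 5 are saved by some amount Δp, the expected gain is 5·Δp lives against a cost of 1 life, so jumping is worth it exactly when

$$5\,\Delta p > 1 \quad\Longleftrightarrow\quad \Delta p > \tfrac{1}{5} = 20\%.$$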
Here is an interesting blog post on the topic of third alternatives to philosophical puzzles like these: http://tailsteak.com/archive.php?num=497
I still consider myself a “Classical Utilitarian”, by the way, even though I am aware of some of the disturbing consequences of this belief system.
And I agree with the main point of your post, and upvoted it. But the real purpose of trolley problems is to explore edge-cases of moral systems, not to advocate or justify real life policy.
To yourself, your own life could be significantly more important than the lives of others, so the tradeoff is different: you can’t easily argue that the moral worth of 5 people is greater than the moral worth of your own life. When you consider 6 arbitrary people, by contrast, it does obviously follow that the (expected) moral worth of 5 of them is greater than the moral worth of 1.
If that’s just a matter of personal utility functions then the problem vanishes into tautology. You act in accordance with whatever that function says; or in everyday language, you do whatever you want—you will anyway. If not:
You can very easily argue that your life is worth less than the lives of 5 strangers, and most moral systems do, including utilitarianism. (E.g. greater love hath no man, etc.) You may personally not like the conclusion that you should jump in front of the trolley, but that doesn’t change the calculation; it just means you’re not doing the shutting-up part. Or as it is written, “You gotta do what you gotta do”.
Not necessarily at all, since you can also err: it’s not easy to figure out what you want. Hence it’s difficult to argue, compared with the 5 > 1 case for arbitrary people.
But not as easily, and I don’t even agree it’s a correct conclusion, while for 5>1 arbitrary people I’d guess almost everyone agrees.
That’s just the part where one fails to shut up while calculating. No version of moral utilitarianism that I’ve seen distinguishes the agent making the decision from any other. If the calculation obliges you to push the fat man, it equally obliges you to jump if you’re the fat man. If a surgeon should harvest a healthy person’s organs to save five other people, the healthy person should volunteer for the sacrifice. Religions typically enjoin self-sacrifice. The only prominent systems I can think of that take the opposite line are Objectivism and Nietzsche’s writings, and nobody has boomed them here.
Just from knowing that A and B are some randomly chosen fruits, I can’t make a value judgment and declare that I prefer A to B, because my states of knowledge about them are identical. But if it’s known that A = apple and B = kiwi, then I can well prefer A to B. Likewise, it’s not possible to have a preference between two people based on identical states of knowledge about them, but it’s possible to do so if we know more. People generally prefer themselves to relatives and friends, relatives and friends to similar strangers, and similar strangers to distant strangers.
The trolley problem isn’t about what people generally prefer, but about exploring moral principles and intuitions with a thought experiment. Moral systems, with the exceptions I noted, generally do not prefer the agent. Beyond the agent, they usually do prefer close people to distant ones (Christianity being an exception here), but in the trolley problem, all of the people are distant to the agent.
What’s that about, if not what people prefer?
I already pointed out that most moral principles do not specially favour the agent, while most people’s preferences do. Nobody wants to be the one who dies that others may live, yet some people have made that decision. Whatever moral principles and intuitions are, therefore, they are something different from “what people prefer”.
But I am fairly sure you know all this already, and I am at a loss to see where you are going with this.
Was that a good decision (not a rhetorical question)? Who judges? I understand that the aggregated preference of humanity has a neutral point of view, and so in any given situation prefers the lives of 5 given normal people to the life of 1 given normal person. But is there any good reason to be interested in this valuation when making your own decisions?
Note that having a preference for your own life over the lives of others could still lead to decisions similar to those you’d expect from a neutral-point-of-view preference. Through logical correlation of the decisions made by different people, your decision to follow a given principle makes other people follow it in similar situations, which might benefit you enough for the causal effect of (say) losing your own life to be outweighed by the acausal effect of having your life saved counterfactually. This would be exactly the case where one personally prefers to die so that others may live (so that others could’ve died so that you could’ve lived). It’s not all about preference: even perfectly selfish agents would choose to self-sacrifice, given some assumptions.
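To make that last sentence concrete, here is a minimal toy model. It is my own illustration, not anything from the comment above, and its assumptions are strong: via logical correlation, everyone in symmetric trolley-like situations follows whatever policy the agent chooses, and the agent is equally likely to occupy any of the 6 roles (1 potential sacrifice, 5 on the track).

```python
# Toy model (illustrative only): a purely selfish agent compares two policies,
# assuming everyone in symmetric trolley-like situations follows the same one,
# and that it is equally likely to be any of the 6 people involved.
from fractions import Fraction

N_ON_TRACK = 5
N_TOTAL = N_ON_TRACK + 1  # 5 on the track + 1 potential sacrifice

def p_survive(everyone_sacrifices: bool) -> Fraction:
    """Probability that the selfish agent survives, given the shared policy."""
    if everyone_sacrifices:
        # The would-be sacrifice dies; the 5 on the track live.
        return Fraction(N_ON_TRACK, N_TOTAL)
    # Nobody sacrifices: the 5 on the track die; the would-be sacrifice lives.
    return Fraction(1, N_TOTAL)

print("P(survive | everyone sacrifices) =", p_survive(True))   # 5/6
print("P(survive | nobody sacrifices)   =", p_survive(False))  # 1/6
```

Under these assumptions the self-sacrifice policy gives the selfish agent a 5/6 chance of survival against 1/6 for the never-sacrifice policy, which is the sense in which even a perfectly selfish agent would choose to self-sacrifice.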
Acausal relationships between human agents are astronomically overestimated on LW.
That was a normative note, not a descriptive one. If all people acted according to a better decision theory, their actions would (presumably; I still don’t have a good understanding of this) look like having a neutral point of view, despite their preferences remaining self-centered. Of course, if most people act as they actually do, then any given person won’t have enough acausal control over others.
Fair enough. The only small note I’d like to add is that the phrase “if all people acted according to a [sufficiently] better decision theory” does not seem to quite convey how distant from reality (or just realism) such a proposition is. It’s less in the ballpark of “if everyone had IQ 230” than in that of “if everyone uploaded and then took the time to thoroughly grok and rewrite their own code”.
I don’t think that’s true, as people can be as simple (in given situations) as they wish to be, thus allowing others to model them, if that’s desirable. If you are precommitted to choosing option A no matter what, it doesn’t matter that you have a brain with a hundred billion neurons; you can be modeled as easily as a constant answer.
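A minimal sketch of that last point (purely illustrative; the function below is hypothetical, not anyone’s actual proposal): from the outside, a fully precommitted agent is as easy to model as a constant function.

```python
# Illustrative sketch: however many neurons sit behind the commitment,
# a fully precommitted agent looks like a constant function to any predictor.
def precommitted_agent(situation: str) -> str:
    return "A"  # chooses option A no matter what

# A predictor needs no model of the agent's internals:
assert all(precommitted_agent(s) == "A" for s in ("trolley", "bridge", "anything else"))
```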
You cannot precommit “no matter what” in real life. If you are an agent at all—if your variable appears in the problem—that means you can renege on your precommitment, even if it means a terrible punishment. (But usually the punishment stays on the same order of magnitude as the importance of the choice, allowing the choice to be non-obvious—possibly the rulemaker’s tribute to human scope insensitivity. Not that this condition is even that necessary since people also fail to realise the most predictable and immediate consequences of their actions on a regular basis. “X sounded like a good idea at the time”, even if X is carjacking a bulldozer.)
This is not a problem of IQ.
This implies that if you are designing an AI that is expected to encounter trolley-like problems, it should precommit to eating lots of ice cream.
Ah, but what about a scenario where the only way to save the 5 people is to sacrifice the life of someone who is thin enough to fit through a small opening? Eating ice cream would be a bad idea in that case.
All this shows is that it’s possible to construct two thought experiments which require precommitment to mutually exclusive courses of action in order to succeed. Knowing of only one, you would precommit to the correct course of action, but knowing both, what are your options? Reject the concept of a correct moral answer, reject the concept of thought experiments, reject one of the two thought experiments, or reject one of the premises of either thought experiment?
I think I would reject a premise: that the course of action offered is the one and only way to help. Either that, or bite the bullet and accept that there are actual situations in which a moral system will condemn all options; that would be almost the beginnings of a proof of incompleteness of moral theory.
Of course, it doesn’t show that all possible moral theories are incomplete, just that any theory which founders on a trolley problem is potentially incomplete—but then, something tells me that given a moral theory, it wouldn’t be hard to describe a trolley problem that is unsolvable in that theory.