Thirdly, it treats the action as a one-time action, but it isn't. If you teach people to push the fat guy to kill him, you won't just end up with three fewer people dead; you'll also end up with a bunch of emotionless people who think it is OK to kill someone if it is for the greater good. Fourthly, people don't always arrive at the truth immediately. You can't say you should kill the fat guy just because you really believe that's going to save the other people.
These objections suggest that you are actually applying consequentialism already! You are worrying that other consequences of killing one person to save five might outweigh the benefit of saving four lives, which is exactly the sort of thing a good consequentialist should worry about.
I set aside objections 3 and 4 as cases where the problem is ill-defined: the amount of knowledge the agent is supposed to have is implausible or unknown. And yes, I think the fat guy case is a case of an ethical injunction. But doesn't that undermine the predictive power of consequentialism? Maybe not. I'm more concerned with the problems below.
I do think you should act for the better outcome. What I dispute is the completeness and transitivity of values. http://en.wikipedia.org/wiki/Rational_choice_theory#Other_assumptions
That is why utility is not quantifiable: there is no calculation that shows which action is right, and therefore no single best possible action. The problem is that action is highly chaotic, that is, sensitive to non-rational variables, because there are actions between which it is impossible to decide, yet something has to be decided.
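To make the transitivity point concrete, here is a minimal sketch (my own illustration, not taken from the linked article) of why a preference cycle rules out any utility scale:

```python
# If strict preferences form a cycle A > B > C > A (a transitivity failure),
# no real-valued utility function can represent them, because we would need
# u(A) > u(B) > u(C) > u(A), which is impossible.

from itertools import permutations

preferences = [("A", "B"), ("B", "C"), ("C", "A")]  # pairs meaning X > Y

def representable(prefs, items):
    # A utility scale exists iff some strict ranking of the items
    # satisfies every stated preference.
    for ranking in permutations(items):
        rank = {x: i for i, x in enumerate(ranking)}  # lower index = better
        if all(rank[x] < rank[y] for x, y in prefs):
            return True
    return False

print(representable(preferences, ["A", "B", "C"]))  # False: no utility assignment fits
```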
Look at the first example here: http://en.wikipedia.org/wiki/Framing_effect_(psychology)
I understand that you would choose the same option in the first and the second question. But which would you choose: A (=C) or B (=D)? The answer should be neither: find a way for all 600 people to stay alive. In the meantime, while that option is not available, there is politics.
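For readers who don't follow the link: the two framings of the disease problem are numerically identical. A quick check, assuming the standard figures (200 saved for sure versus a 1/3 chance of saving all 600):

```python
# Expected lives saved under the usual numbers of the disease problem
# (assumed here: A/C = 200 saved for sure, B/D = 1/3 chance of saving
#  all 600 and 2/3 chance of saving none).

certain_saved = 200                       # Program A, identical to Program C
gamble_saved = (1 / 3) * 600 + (2 / 3) * 0  # Program B, identical to Program D

print(certain_saved, gamble_saved)  # 200 200.0 -- same expected outcome,
# yet people flip their choice depending on whether the wording says
# "saved" or "die": that is the framing effect.
```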
By the way, if you believe in utility maximization, explain Arrow's theorem to me. I think it disproves utilitarianism.
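To see the kind of aggregation problem Arrow's theorem formalizes, here is a minimal sketch (my own illustration, not the theorem's proof) of the Condorcet cycle that motivates it: three voters with perfectly transitive individual rankings produce an intransitive majority preference.

```python
# Three voters, each with a transitive ranking, yet pairwise majority
# voting yields a cyclic group preference A > B > C > A.

voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    # True if more voters rank x above y than y above x.
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three print True, so "maximize the group's preference" is not even
# well defined here.
```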