By the way, I hadn’t read about consequentialism before. It is wrong on so many levels!
Firstly, it is impossible to assign a numerical utility to each action. That reflects a misunderstanding of the human brain.
Secondly, it is impossible to sum up utilities; give me an example where summing different people’s utilities makes any sense.
Thirdly, it treats the action as a one-time action. But it isn’t. If you teach people to push the fat man to his death, you won’t just have three fewer people dead. You’ll also have a bunch of emotionless people who think it is OK to kill someone if it is for the greater good.
Fourthly, people don’t always arrive at the truth immediately. You can’t say you should kill the fat man just because you really think that will save the other people. Given people’s arrogance, it might not be a good idea to make them feel they have that power.
Fifthly, if utility is not quantitative, the logic of morality can’t be a computation. That’s my point. The discovery of reality might be a calculation, because you can go outside and look.
On the whole, what a disappointment. This page is so great that I can’t understand why it understands morality so poorly. I recommend reading, for example, Weber, who has a detailed theory of value in societies, or Sartre for the complexity of defining what’s right.
Thirdly, it treats the action as a one-time action. But it isn’t. If you teach people to push the fat man to his death, you won’t just have three fewer people dead. You’ll also have a bunch of emotionless people who think it is OK to kill someone if it is for the greater good. Fourthly, people don’t always arrive at the truth immediately. You can’t say you should kill the fat man just because you really think that will save the other people.
These objections suggest that you are actually applying consequentialism already! You are worrying that other consequences of killing one person to save five might outweigh the benefit of saving four lives, which is exactly the sort of thing a good consequentialist should worry about.
I withdraw objections 3 and 4, since they describe situations where the problem is ill-defined. That is, the amount of knowledge one is supposed to have is implausible or unknown. And yes, I think the fat-man case is a case of an ethical injunction. But doesn’t that undercut the predictive power of consequentialism? Maybe not. I’m more concerned about the problems described below.
I do think you should act for a better outcome. I disagree with the completeness and transitivity of values: http://en.wikipedia.org/wiki/Rational_choice_theory#Other_assumptions
That is why utility is not quantifiable, so there is no calculation to show which action is right, and therefore no best possible action. The problem is that action is highly chaotic (sensitive) to non-rational variables, because there are some cases where it is impossible to decide, and yet something has to be decided.
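To make the transitivity point concrete, here is a minimal sketch (my own illustration with made-up preferences, not something from the thread): if pairwise preferences form a cycle, no assignment of numbers can represent them as a utility function.

```python
# Minimal sketch (hypothetical preferences): a cyclic, non-transitive preference
# relation cannot be represented by any numeric utility assignment.
from itertools import permutations

outcomes = ["A", "B", "C"]
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # A > B, B > C, C > A (a cycle)

def representable_by_utility(outcomes, prefers):
    """True if some assignment of distinct numbers respects every stated preference."""
    for ranking in permutations(range(len(outcomes))):
        utility = dict(zip(outcomes, ranking))
        if all(utility[x] > utility[y] for x, y in prefers):
            return True
    return False

print(representable_by_utility(outcomes, prefers))   # False: the cycle blocks any utility function
```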
Look, how about the first example here: http://en.wikipedia.org/wiki/Framing_effect_(psychology)?
I understand that you would choose the same in the first and second questions. But what would you choose: A (=C) or B (=D)? The answer should be neither; instead, find a way for all 600 people to stay alive. In the meantime, where that option is not possible, there is politics.
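For readers who don’t follow the link, the example referred to is the standard “Asian disease” problem; the quick check below (my own sketch, using those standard numbers) shows why A/C and B/D are equivalent in expected survivors, which is why choosing the same option under both framings is the consistent answer.

```python
# Expected survivors for the standard framing-effect example (600 people at risk).
total = 600

# Frame 1, described as lives saved
expected_A = 200                        # Program A: 200 people are saved for sure
expected_B = (1/3) * 600 + (2/3) * 0    # Program B: 1/3 chance all 600 saved, else none

# Frame 2, the same programs described as deaths
expected_C = total - 400                                    # Program C: 400 die for sure
expected_D = (1/3) * (total - 0) + (2/3) * (total - 600)    # Program D: 1/3 nobody dies, 2/3 all die

print(expected_A, expected_B, expected_C, expected_D)       # 200 200.0 200 200.0 -- all equivalent
```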
By the way, if you believe in utility maximization, explain Arrow’s theorem to me. I think it disproves utilitarianism.
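As background for this point, a minimal illustration of the kind of aggregation problem Arrow’s theorem formalizes is the Condorcet cycle; the sketch below (my own, with made-up voters) shows pairwise majority voting producing an intransitive group preference even though each voter’s individual ranking is transitive.

```python
# Condorcet-cycle sketch (hypothetical voters): majority voting over transitive
# individual rankings can still yield a cyclic, intransitive group preference.
from itertools import combinations

voters = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(ranking.index(x) < ranking.index(y) for ranking in voters)
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"{winner} beats {loser}")
# Prints: A beats B, C beats A, B beats C -- a cycle with no consistent group ranking.
```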
Firstly, it is impossible to assign a numerical utility to each action. That reflects a misunderstanding of the human brain.
Of course, the brain isn’t perfect. The fact that humans can’t always, or even usually, apply truths doesn’t make those truths untrue.
Secondly, it is impossible to sum up utilities; give me an example where summing different people’s utilities makes any sense.
Pressing a button kills one person; not pressing the button kills two people. Since each death has negative utility, utility(1 death) + utility(1 death) < utility(1 death), so summing across the two people tells you to press the button.
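A tiny numeric version of that comparison, with an assumed utility of −1 per death (the particular number is a placeholder of mine; only the ordering matters):

```python
# Assumed placeholder utility: each death contributes -1.
utility_of_death = -1

press_button = 1 * utility_of_death     # one person dies
dont_press = 2 * utility_of_death       # two people die

assert dont_press < press_button        # two deaths sum to a worse total than one
print(press_button, dont_press)         # -1 -2
```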
Thirdly, it treats the action as a one-time action. But it isn’t. If you teach people to push the fat man to his death, you won’t just have three fewer people dead. You’ll also have a bunch of emotionless people who think it is OK to kill someone if it is for the greater good.
Assuming it’s bad to teach consequentialism to people doesn’t make consequentialism wrong. It’s bad to teach people how to make bombs, but that doesn’t mean the knowledge of how to make bombs is incorrect. See Ethical Injunctions.
Fourthly, people don’t always arrive at the truth immediately. You can’t say you should kill the fat man just because you really think that will save the other people.
Such thought experiments often make unlikely assumptions, such as perfect knowledge of consequences. That doesn’t make the conclusions of those thought experiments wrong; it just constrains them to unlikely situations.
Fifthly, if utility is not quantitative, the logic of morality can’t be a computation.
Qualitative analysis is still computable. If humans can do something, it is computable.
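As one small illustration of that claim (my own sketch, not part of the original reply), a choice among outcomes can be computed from purely ordinal, qualitative judgements, with no cardinal utility values involved:

```python
# Computation over qualitative judgements: outcomes are compared by their
# position in an explicit worst-to-best ordering, not by numeric utilities.
ORDER = ["terrible", "bad", "neutral", "good"]   # ordinal ranking only

def better(a: str, b: str) -> str:
    """Return the preferred of two qualitative outcomes."""
    return a if ORDER.index(a) >= ORDER.index(b) else b

options = ["bad", "good", "neutral"]
best = options[0]
for candidate in options[1:]:
    best = better(best, candidate)
print(best)   # "good"
```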
The discovery of reality might be a calculation, because you can go outside and look.
Solomonoff induction is a formalized model of prediction of future events.
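For readers unfamiliar with it, one standard way of writing the prior behind Solomonoff induction (stated here for reference; this formulation is not from the thread) is

$$M(x) = \sum_{p \,:\, U(p)\ \text{starts with}\ x} 2^{-\ell(p)},$$

where $U$ is a universal prefix Turing machine and $\ell(p)$ is the length of program $p$ in bits; the next bit $b$ of a sequence is then predicted by conditioning, $P(b \mid x) = M(xb)/M(x)$.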