And my rejection of utilitarianism starts by asking how it is possible to sum up utilities for different people. It is adding apples and oranges. There is no natural exchange rate.
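To put a number on the apples-and-oranges point, here is a minimal sketch (the options and all numbers are invented for illustration). The one real fact it leans on is that a von Neumann–Morgenstern utility function is defined only up to a positive affine transformation, so any cross-person “total” depends on an arbitrary choice of scale:

```python
# Each person's utility function is pinned down only up to a positive
# affine transformation: u and a*u + b (with a > 0) represent exactly
# the same preferences. All numbers below are invented for illustration.
me  = {"apple tree": 10, "orange tree": 2}   # I prefer apples
joe = {"apple tree": 1,  "orange tree": 5}   # Joe prefers oranges

def totals(my_scale):
    """'Total utility' after rescaling my utilities by my_scale,
    a transformation that leaves my preferences unchanged."""
    return {option: my_scale * me[option] + joe[option] for option in me}

print(totals(1.0))  # {'apple tree': 11.0, 'orange tree': 7.0} -> plant apples
print(totals(0.1))  # {'apple tree': 2.0, 'orange tree': 5.2}  -> plant oranges
```

Both scalings are equally legitimate representations of the very same preferences, yet the “maximize the total” verdict flips. That arbitrary choice of scale is the missing exchange rate.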
Utilities for different people don’t come into it. The question is, how much is a contemporaneous person worth to you now, versus someone in a hundred generations? (Or did you mean utilities of different people?)
You have me confused now. I like apples. Joe likes oranges. Mary wishes to maximize total utility. Should she plant an orange tree or an apple tree?
I agree with Kant (perhaps for the only time). Asking how much Joe is worth to Mary is verboten.
She should consider the world as it would be if she planted the apple tree, and the world as it would be if she planted the orange tree, and see which one is better as she measures value. (Another option is to plant neither, of course.)
The idea isn’t to add up and maximize everyone’s utility. I agree with you that that makes no sense. The point is, when an agent makes a decision, that agent has to evaluate alternatives, and those alternatives are going to be weighed according to how they score under the agent’s utility function. But utility isn’t just selfish profit. I can value that there be happiness even if I don’t ever get to know about it.
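To make that concrete, here is a minimal sketch (the world-states and weights are invented for illustration): a single agent scores whole world-states under her own utility function, which can include benevolent terms, and at no point is anyone else’s utility function summed into anything.

```python
# Mary ranks whole world-states under HER utility function. It can
# value Joe's happiness without any cross-person summation: the weights
# are Mary's own, not an exchange rate between her scale and Joe's.
# All states and numbers are invented for illustration.
worlds = {
    "plant apple tree":  {"mary_fruit": 1, "joe_happy": 0},
    "plant orange tree": {"mary_fruit": 0, "joe_happy": 1},
    "plant nothing":     {"mary_fruit": 0, "joe_happy": 0},
}

def mary_utility(state):
    """Mary's valuation of a world-state; the 3.0 encodes that she
    happens to care more about Joe's happiness than about fruit."""
    return 1.0 * state["mary_fruit"] + 3.0 * state["joe_happy"]

best = max(worlds, key=lambda option: mary_utility(worlds[option]))
print(best)  # 'plant orange tree'
```

Note that `mary_utility` never consults Joe’s utility function; it consults facts about the world that Mary cares about, which is what lets her value happiness she never gets to see.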
We don’t always have the luxury of not choosing. What should Mary do in a trolley problem where she has to direct a train at either you or Joe (or else you both die)? That said . . .
“Worth” needs to be understood in an expansive sense. Kant is probably right that Mary shouldn’t think, “Now, from whom can I extract more profit for my selfish ends? That’s the one I’ll save.” The things she ought to consider are probably the ones we’re accustomed to thinking of as “selfless”. But she can’t evade making a decision.
Ok, I think we can agree to agree. Revealed preference doesn’t prevent me from incorporating utilitarianish snippets of other people’s utility judgments into my own preferences. I am allowed to be benevolent. But simple math and logic prevent me from doing it all-out, the way that Bentham suggested.
Now, which one of us has to tell Eliezer?
What simple math and logic? Utilitarianism seems pretty silly to me too—but adding different people’s utilities together is hardly a show-stopping problem.
The problem I see with utilitarianism is that it is a distant ideal. Ideals of moral behaviour normally work best when they act like a carrot which is slightly out of reach. Utilitarianism conflicts with people’s basic drives. It turns everyone into a sinner.
If you preach utilitarianism, people just think you are trying to manipulate them into giving away all their stuff. Usually that is true: promoters of utilitarianism tend to be poorer folk who are after the rich people’s stuff, and who have found a moral philosophy that helps them get at it.
Politicians often say they will tax the rich and give the money to the poor. This is because they want the poor people’s votes. Utilitarianism is the ethical equivalent of that. Leaders sometimes publicly promote such policies if they want the support of the masses in order to gain power.
No, what is forbidden is to ask how much Joe is worth in an absolute sense, independent of an agent like Mary.
Utility is not a fundamental property of the world; it is perceived by agents with preferences.
This is rapidly becoming surreal.
“Forbidden” is not a fundamental property of the world; it is imposed by theorists with agendas.
Samuelson, Wald, von Neumann, Savage, and the other founders of “revealed preference” forbid us to ask how much Joe (or anything else) is worth, independent of an agent with preferences, such as Mary.
Immanuel Kant, and anyone else who takes “the categorical imperative” at all seriously, forbids us to ask what Joe is worth to Mary, though we may ask what Joe’s cat Maru is worth to Mary.
I knew I shouldn’t have gotten involved in this thread.
Immanuel Kant says that we can’t ask what Joe is worth to Mary?
So what? Why should anyone heed that advice? It is silly.
Utilities of different people, yes. He’s complaining that they don’t add up.