You used a discount rate of 0. (That is, a hypothetical life a hundred generations from now deserves exactly as much of my consideration as a life today.) That totally discredits your calculation.
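For a sense of scale, here is a minimal numerical sketch (the 25-year generation length and the 3% rate are assumptions chosen only for illustration) of how much a life a hundred generations out counts for under a zero rate versus a conventional positive one:

```python
# Illustrative only: generation length and discount rate are assumed numbers.
YEARS_PER_GENERATION = 25
years = 100 * YEARS_PER_GENERATION  # a hundred generations from now

for annual_rate in (0.0, 0.03):     # zero rate vs. a conventional 3% rate
    weight = 1.0 / (1.0 + annual_rate) ** years
    print(f"rate {annual_rate:.0%}: future life counts as {weight:.2g} of a present one")

# rate 0%: future life counts as 1 of a present one
# rate 3%: future life counts as roughly 8e-33 of a present one
```

Under any positive rate the distant future effectively vanishes from the calculation, which is exactly what the zero-rate choice refuses to do.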
What makes a life at one time worth more than a life at a different time?
1. Distance.
2. Tradition in the field of economics.
3. Mathematical well-behavedness may demand this if the universal expansion is not slowing down.
4. Reciprocity. Future folks aren’t concerned about my wishes, so why should I be concerned about theirs?
What makes a life at one time worth the same as a life at a different time?
In a sense, these are flip answers, because I am not really a utilitarian to begin with. And my rejection of utilitarianism starts by asking how it is possible to sum up utilities for different people. It is adding apples and oranges. There is no natural exchange rate. Utilities are like subjective probabilities of different people—it might make sense to compute a weighted average, but how do you justify your weighting scheme?
I suspect that discussing this topic carefully would take too much of my time from other responsibilities, but I hope this sketch has at least given you some things to think about.
Utilities for different people don’t come into it. The question is: how much is a contemporaneous person worth to you now, versus someone a hundred generations from now? (Or did you mean utilities of different people?)
You have me confused now. I like apples. Joe likes oranges. Mary wishes to maximize total utility. Should she plant an orange tree or an apple tree?
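A toy numerical sketch of the earlier weighting worry (all utilities and weights below are invented): whether “total utility” tells Mary to plant the apple tree or the orange tree depends entirely on the exchange rate she picks between my utils and Joe’s, and nothing in the formalism picks it for her.

```python
# Invented numbers: my utils from apples, Joe's utils from oranges.
my_gain_from_apples = 3.0
joes_gain_from_oranges = 5.0

# Two equally arbitrary "exchange rates" between my utils and Joe's.
for my_weight, joe_weight in [(1.0, 1.0), (2.0, 1.0)]:
    apple_total = my_weight * my_gain_from_apples
    orange_total = joe_weight * joes_gain_from_oranges
    choice = "apple" if apple_total > orange_total else "orange"
    print(f"weights ({my_weight}, {joe_weight}): plant the {choice} tree")

# weights (1.0, 1.0): plant the orange tree
# weights (2.0, 1.0): plant the apple tree
```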
I agree with Kant (perhaps for the only time). Asking how much Joe is worth to Mary is verboten.
She should consider the world as it would be if she planted the apple tree, and the world as it would be if she planted the orange tree, and see which one is better as she measures value. (Another option is to plant neither, of course.)
The idea isn’t to add up and maximize everyone’s utility. I agree with you that that makes no sense. The point is, when an agent makes a decision, that agent has to evaluate alternatives, and those alternatives are going to be weighed according to how they score under the agent’s utility function. But utility isn’t just selfish profit. I can value that there be happiness even if I don’t ever get to know about it.
We don’t always have the luxury of not choosing. What should Mary do in a trolley problem where she has to direct a train at either you or Joe (or else you both die)? That said . . .
“Worth” needs to be understood in an expansive sense. Kant is probably right that Mary shouldn’t think, “Now, from whom can I extract more profit for my selfish ends? That’s the one I’ll save.” The things she ought to consider are probably the ones we’re accustomed to thinking of as “selfless”. But she can’t evade making a decision.
OK, I think we can agree to agree. Revealed preference doesn’t prevent me from incorporating utilitarianish snippets of other people’s utility judgments into my own preferences. I am allowed to be benevolent. But simple math and logic prevent me from doing it all-out, the way that Bentham suggested.
Now, which one of us has to tell Eliezer?
What simple math and logic? Utilitarianism seems pretty silly to me too—but adding different people’s utilities together is hardly a show-stopping problem.
The problem I see with utilitarianism is that it is a distant ideal. Ideals of moral behaviour normally work best when they act like a carrot which is slightly out of reach. Utilitarianism conflicts with people’s basic drives. It turns everyone into a sinner.
If you preach utilitarianism, people just think you are trying to manipulate them into giving away all their stuff. Usually that is true—promoters of utilitarianism are usually poorer folk who are after the rich people’s stuff—and have found a moral philosophy that helps them get at it.
Politicians often say they will tax the rich and give the money to the poor. This is because they want the poor people’s votes. Utilitarianism is the ethical equivalent of that. Leaders sometimes publicly promote such policies if they want the support of the masses in order to gain power.
No, what is forbidden is to ask how much Joe is worth in an absolute sense, independent of an agent like Mary.
Utility is not a fundamental property of the world; it is perceived by agents with preferences.
This is rapidly becoming surreal.
Forbidden is not a fundamental property of the world; it is imposed by theorists with agendas.
Samuelson, Wald, von Neumann, Savage, and the other founders of “revealed preference” forbid us to ask how much Joe (or anything else) is worth, independent of an agent with preferences, such as Mary.
Immanuel Kant, and anyone else who takes “the categorical imperative” at all seriously, forbids us to ask what Joe is worth to Mary, though we may ask what Joe’s cat Maru is worth to Mary.
I knew I shouldn’t have gotten involved in this thread.
Immanuel Kant says that we can’t ask what Joe is worth to Mary?
So what? Why should anyone heed that advice? It is silly.
Utilities of different people, yes. He’s complaining that they don’t add up.
For 2, perhaps consider:
http://lesswrong.com/lw/n2/against_discount_rates/
Considered. Not convinced. If that was intended as an argument, then EY was having a very bad day.
He is welcome to his opinion but he is not welcome to substitute his for mine.
The ending was particularly bizarre. It sounded like he was saying that treasury bills don’t pay enough interest to make up for the risk that the US may not be here 300 years from now. But we should, for example, consider the projected enjoyment of people we imagine visiting our nature preserves 500 years from now, as if their enjoyment were as important as our own, not discounting at all for the risk that they may not even exist.
Eliezer doesn’t disagree: as he says more than once, he’s talking about pure preferences, intrinsic values. Other risks do need to be incorporated, but it seems better to do so directly, rather than through a discounting heuristic. Larks seems to implicitly be doing this with his P(AGI) = 10^-9.
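A minimal sketch of the distinction (every probability and value here is an assumption made up for the example): incorporating the risk of non-existence directly, as an explicit probability on the intrinsic value, versus folding it into a flat time-preference rate.

```python
# Invented numbers: intrinsic value of future visitors' enjoyment, and the
# assumed probability that the preserve (and the visitors) still exist.
enjoyment_value = 100.0
p_exists_by_year = {100: 0.9, 500: 0.5}

# Direct approach: weight the intrinsic value by the explicit existence probability.
for year, p_exists in p_exists_by_year.items():
    print(f"year {year}: expected value = {p_exists * enjoyment_value:.0f}")

# Discounting heuristic: one flat annual rate stands in for all such risks.
rate = 0.001  # assumed 0.1% per year catch-all discount
for year in p_exists_by_year:
    print(f"year {year}: discounted value = {enjoyment_value / (1 + rate) ** year:.0f}")

# The direct version can be updated as the risk estimate changes; the flat
# rate bakes one fixed guess about it into the preferences themselves.
```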
Time travel, the past “still existing”—and utilitarianism? I don’t buy any of that either—but in the context of artificial intelligence, I do agree that building discounting functions into the agent’s ultimate values looks like bad news.
Discounting functions arise because agents don’t know about the future—and can’t predict or control it very well. However, the extent to which they can’t predict or control it is a function of the circumstances and their own capabilities. If you wire temporal discounting into the ultimate preferences of super-Deep Blue, then it can’t ever self-improve to push its prediction horizon further out as it gets more computing power! You are unnecessarily building limitations into it. Better to have no temporal discounting wired in—and let the machine itself figure out to what extent it can predict and control the future—and so figure out the relative value of the present.
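A rough sketch of that architectural point, with an invented toy model of “predictive confidence” standing in for the agent’s actual capabilities: a discount hardwired into the ultimate values stays fixed however capable the agent becomes, while discounting that arises only from predictive uncertainty shrinks as the agent’s prediction horizon grows.

```python
# Toy model, not any real agent design: all numbers are invented for illustration.
def hardwired_value(reward, t, gamma=0.95):
    """Temporal discount baked into the ultimate values: fixed forever."""
    return reward * gamma ** t

def uncertainty_value(reward, t, horizon):
    """No built-in time preference: the future is down-weighted only by how
    unreliable the agent's predictions are at time t (crude linear model)."""
    confidence = max(0.0, 1.0 - t / horizon)
    return reward * confidence

reward, t = 100.0, 50
for horizon in (60, 600):  # the agent self-improves and its prediction horizon grows
    print(f"horizon {horizon}: hardwired = {hardwired_value(reward, t):.2f}, "
          f"uncertainty-based = {uncertainty_value(reward, t, horizon):.2f}")

# The hardwired value stays at about 7.69 in both cases; the uncertainty-based
# value rises from about 16.67 to about 91.67 as the horizon extends.
```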