Concise version:
If we have some maximum utility per unit of space (reasonable, since there is a maximum entropy, and therefore probably a maximum amount of information, per unit of space), and we do not break the speed of light, our maximum possible utility can only expand polynomially, since the reachable volume grows at most cubically with time. If we discount future utility exponentially, as the roughly 10-year doubling time of the economy suggests, the merely polynomial growth gets damped exponentially and we don’t care about the far future.
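To make the damping concrete, here is a rough numerical sketch (my own toy numbers, not from the post): take the utility ceiling to grow like t³, the volume of the reachable light cone, and discount at rate ln 2 / 10 per year, the rate implied by a 10-year doubling time. Almost all of the discounted total then comes from the first century or two.

```python
import numpy as np

# Toy sketch: utility ceiling grows like t**3 (light-cone volume),
# discounted exponentially with a 10-year halving time.
r = np.log(2) / 10                 # discount rate implied by a 10-year doubling time
dt = 0.01
t = np.arange(dt, 1000, dt)        # years; far enough out that the remaining tail is negligible
integrand = t**3 * np.exp(-r * t)  # discounted utility density

total = integrand.sum() * dt
for T in (100, 200, 500):
    tail = integrand[t >= T].sum() * dt
    print(f"share of discounted utility beyond year {T}: {tail / total:.1e}")
# about 0.086 beyond year 100, ~5e-04 beyond 200, under 1e-11 beyond 500:
# the exponential discount swamps the polynomial growth.
```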
Big problem:
Assumes exponential discounting. However, this can also be seen as a reductio of exponential discounting—we don’t want to ignore what happens 50 years from now, and we exhibit many behaviors typical of caring about the far future. There’s also a sound genetic basis for caring about our descendants, which implies non-exponential discounting programmed into us.
Alternatively, as a reductio of being strict utility maximizers.
Or or or maybe it’s a reductio of multiplication!
As of right now, I’d rather be a utility maximizer than an exponential discounter.
Oh, don’t worry. Your time preferences being inconsistent, you’ll eventually come around to a different point of view. :)
That’s tricky, since utility is defined as the stuff that gets maximized—and it can be extended beyond just consequentialism. What it relies on is the function-like properties of “goodness” over the relevant domain (world histories and world states being notable domains).
So a reductio of utility would have to contrast the function-ish properties of utility with some really compelling non-function-ish properties of our judgement of goodness. An example would be if world state A was better than B, which was better than C, but C was better than A. This qualifies as “tricky” :P
Why should the objection “actual humans don’t work that way” work to dismiss exponential discounting, but not work to dismiss utility maximization? Humans have no genetic basis to either maximize their utility OR discount exponentially.
My (admittedly sketchy) understanding of the argument for exponential discounting is that any other function leaves you vulnerable to a money pump, IOW the only rational way for a utility maximizer to behave is to have that discount function. Is there a counter-argument?
Ah, that’s a good point—to have a constant utility function over time, things have to look proportionately the same at any time, so if there’s discounting it should be exponential. So I agree, this post is an argument against making strict utility maximizers with constant utility functions that also discount. So I guess the options are to have either non-constant utility functions or no discounting. (random links!)
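For what it's worth, here is a toy illustration of that consistency point (my own hypothetical numbers): under hyperbolic discounting 1/(1 + k·delay), the ranking of two fixed future rewards can flip as the dates approach, which is exactly the preference reversal a money-pumper exploits; under exponential discounting the ranking never flips.

```python
# Toy illustration (hypothetical numbers): reward A is 100 utilons on day 100,
# reward B is 110 utilons on day 101. Compare them under each discount curve
# as the decision date moves closer.

def hyperbolic(delay, k=1.0):
    return 1.0 / (1.0 + k * delay)

def exponential(delay, daily=0.99):
    return daily ** delay

def prefers_b(discount, now):
    value_a = 100 * discount(100 - now)
    value_b = 110 * discount(101 - now)
    return value_b > value_a

for now in (0, 99, 100):
    print(f"day {now:>3}: hyperbolic prefers B? {prefers_b(hyperbolic, now)}, "
          f"exponential prefers B? {prefers_b(exponential, now)}")
# Hyperbolic flips from B to A as day 100 approaches (a preference reversal,
# hence exploitable); the exponential ranking stays the same at every date.
```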
It seems very difficult to argue for a “flat” discount function, even if (as I can do only with some difficulty) one sees things from a utilitarian standpoint: I am not indifferent between gaining 1 utilon right away, versus gaining 1 utilon in one hundred years.
Probing to see where this intuition comes from, the first answer seems to be “because I’m not at all sure I’ll still be around in one hundred years”. The farther in the future the consequences of a present decision, the more uncertain they are.
I guess you’re referring to this post by Eliezer? If so, see the comment I just made there.
Do things become any clearer if you figure that some of what looks like time-discounting is actually risk-aversion with regard to future uncertainty? Ice cream now or more ice cream tomorrow? Well tomorrow I might have a stomach bug and I know I don’t now, so I’ll take it now. In this case, changing the discounting as information becomes available makes perfect sense.
Yes, there’s actually a literature on how exponential discounting combined with uncertainty can look like hyperbolic discounting (see the sketch after this list). There are apparently two lines of thought on this:
There is less hyperbolic discounting than it seems. What has been observed as “irrational” hyperbolic discounting is actually just rational decision making using exponential discounting when faced with uncertainty. See Can We Really Observe Hyperbolic Discounting?
Evolution has baked hyperbolic discounting into us because it actually approximates optimal decision making in “typical” situations. See Uncertainty and Hyperbolic Discounting.
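As a quick check of the basic mechanism (my own toy numbers): if each decision uses ordinary exponential discounting but the hazard rate r is uncertain, say drawn from an exponential prior with mean k, the expected discount factor E[exp(-r·t)] works out to exactly 1/(1 + k·t), the hyperbolic curve.

```python
import numpy as np

# Toy check: an exponential discounter with an uncertain hazard rate looks hyperbolic
# on average. With r drawn from an exponential prior of mean k, E[exp(-r*t)] = 1/(1+k*t).
rng = np.random.default_rng(0)
k = 0.1                                  # assumed mean hazard rate, for illustration
rates = rng.exponential(scale=k, size=1_000_000)

for t in (1, 10, 50, 100):
    mixture = np.exp(-rates * t).mean()  # average over the uncertain exponential discounters
    hyperbolic = 1.0 / (1.0 + k * t)
    print(f"t = {t:>3}: mixture of exponentials = {mixture:.4f}, hyperbolic = {hyperbolic:.4f}")
```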
May I ask how the doubling time of the economy can suggest how we discount future utility?
People are willing to promise future money that grows exponentially in exchange for money now (stock trends bear this out, and many other sorts of investments are inherently exponential). If we make the (bad, unendorsed by me) simplification that utility is proportional to money, people are willing to give up exponentially more future utility for current utility—that is, they discount future utility exponentially.
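To put a number on it (back-of-the-envelope, not from the comment): if someone is roughly indifferent between $1 now and $2 in ten years, and we keep the utility-proportional-to-money simplification, the implied discount on future utility is about 7% per year.

```python
# Back-of-the-envelope: a 10-year doubling time as the indifference point implies
annual_discount_factor = 0.5 ** (1 / 10)   # ≈ 0.933, i.e. roughly a 7% per-year discount
print(annual_discount_factor)
```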
Name three.
Okay, I thought of some. Exercise completed!
For a typical value of “we”:
We do have pension funds, or your local country’s equivalent.
We want to educate our grandchildren, for a time when we expect to already be dead.
We value fundamental research (i.e. give it status) just for the possibility that something interesting for the future comes out of it, even if it never helps us directly.
Point being: we do want to care about the far future; people just fail to act on it, because of a lack of rationality and knowledge. Which is the reason why LW ever came into being.
Now, you can argue about “far”. But with more brains than I have, it should not be that difficult to make the same point.