In order for this to be true forever, the world would have to never end, which would mean that there’s infinite utility no matter what you do.
If this is false eventually, there is no paradox. Whether or not it’s worthwhile to invest for a few centuries is an open question, but if it turns out it is, that’s no reason to abandon the idea of comparing charities.
That doesn’t sound right… even if I’m expecting an infinite future I think I’d still want to live a good existence rather than a mediocre one (but with >0 utility). So it does matter what I do.
Say I have two options:
A, which offers on average 1 utilon per second? (Are utilons measures of utility over a time period, or instantaneous utility?)
B, which offers on average 2 utilons per second.
As t approaches infinity, the cumulative utilities U(A) = t and U(B) = 2t both become “infinite”, but B is always larger than A, and therefore “better”.
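Written out under the assumption that the two rates really are constant at 1 and 2 utilons per second:

\[ U_A(t) = \int_0^t 1\, ds = t, \qquad U_B(t) = \int_0^t 2\, ds = 2t, \qquad U_B(t) - U_A(t) = t > 0 \text{ for all } t > 0. \]

Both totals diverge, but at every finite time B is ahead, which is the sense in which it is “better”.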
You can switch between A and B just by rearranging when events happen. For example, imagine that there are two planets moving in opposite directions. One is a Utopia, the other is a Dystopia. From the Utopia’s frame of reference, time is slowed down in the Dystopia, so on net the world is worth living in. From the Dystopia’s frame of reference, it’s reversed.
This gets even worse when you start dealing with expected utility. As messed up as the idea is that the order of events matters, there at least is an order. With expected utility, there is no inherent order.
The best I can do is assign zero prior probability to infinite utility, and make my priors fall off fast enough to ensure that expected utility always converges. I’ve managed to prove that my posteriors will also always have a converging expected utility.
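A minimal sketch of why that kind of prior works (the specific numbers are illustrative assumptions, not the actual proof): suppose utility can only take values ±2^n, and the prior probability that |U| = 2^n is at most 3^{-n}. Then

\[ E[\,|U|\,] \;\le\; \sum_{n=0}^{\infty} 2^n \cdot 3^{-n} \;=\; \sum_{n=0}^{\infty} \left(\tfrac{2}{3}\right)^n \;=\; 3 \;<\; \infty, \]

and since conditioning on data D with P(D) > 0 can multiply any prior probability by at most 1/P(D), the posterior expectation is bounded by 3/P(D) and converges as well.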
So we need to formalize this, obviously.
Method 1: Exponential discounting.
Problem: You don’t care very much about future people.
Method 2: Taking the average over all time (specifically the limit as t goes to infinity of the integral of utility from 0 to t, divided by t)
Conclusion which may be problematic: If humanity does not live forever, nothing we do matters.
Caveat: Depending on our anthropics, we can argue that the universe is infinite in time or space with probability 1, in which case there are an infinite number of copies of humanity, and so we can always calculate the average. This seems like the right approach to me. (In general, using the same math for your ethics and your anthropics has nice consequences, like avoiding most versions of Pascal’s Mugging.)
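Written out, the two methods are roughly the following (a sketch; the discount rate \rho > 0 and the utility rate u(t) are my notation, not the original poster’s):

\[ U_{\text{discounted}} = \int_0^\infty e^{-\rho t}\, u(t)\, dt, \qquad \bar{U} = \lim_{t \to \infty} \frac{1}{t} \int_0^t u(t')\, dt'. \]

The first converges whenever u is bounded, but weights time t by e^{-\rho t}, which is why it barely cares about future people. The second is insensitive to any finite stretch of history: if humanity ends at some finite time T, the integral stays bounded while t goes to infinity, so \bar{U} = 0 regardless of what happened before T.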
Why is this a problem? This seems to match reality for most people.
So does selfishness and irrationality. We would like to avoid those. It also is intuitive that we would like to care more about future people.
Excessive selfishness, sure. Some degree of selfishness is required as self-defense, currently; otherwise all your own needs are subsumed by supplying others’ wants. Even a completely symmetric society where everybody acts more for others’ good than their own is worse than one where everybody takes care of their own needs first, because each individual generally knows their own needs and wants better than anyone else does.
I don’t know the needs and wants of the future. I can’t know them particularly well, and my uncertainty gets worse the farther out in time we go. Unless we’re talking about species-extinction-level events, I damn well should punt to those better informed, those closer to the problems.
Not to me. Heck. I’m not entirely sure what it means to care about a person who doesn’t exist yet, and where my choices will influence which of many possible versions will exist.
Expected-utility calculation already takes that into account. Uncertainty about whether an action will be beneficial translates into a lower expected utility. Discounting, on top of that, is double counting.
Knowledge is a fact about probabilities, not utilities.
Let’s hope our different intuitions are resolvable.
Surely it’s not much more difficult than caring about a person who your choices will dramatically change?
How about this:
If you have a set E = {X, Y, Z...} of possible actions, A (in E) is the utility-maximising action iff for all other B in E, the limit
\[ \lim_{t \to \infty} \left( \int_0^t Eu(A, t')\, dt' \;-\; \int_0^t Eu(B, t')\, dt' \right) \]

is greater than zero, or approaches zero from the positive side. Caveat: I have no evidence this doesn’t implode in some way, perhaps by the limit being undefined. This is just a stupid idea to consider. A possibly equivalent formulation is
\[ \exists\, t_0 \;\forall\, t:\; (t > t_0) \implies \left( \int_0^t Eu(A, t')\, dt' \;\geq\; \int_0^t Eu(B, t')\, dt' \right). \]

The inequality being greater or equal allows for two or more actions being equivalent, which is unlikely but possible.
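As a quick sanity check, this criterion does reproduce the earlier A/B verdict if we again assume constant rates Eu(B, t') = 2 and Eu(A, t') = 1:

\[ \lim_{t \to \infty} \left( \int_0^t 2\, dt' - \int_0^t 1\, dt' \right) = \lim_{t \to \infty} t = +\infty > 0, \]

so B comes out as the utility-maximising action even though both totals are infinite.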
Side comment: that math equation image generator you used is freakin’ excellent. The image itself is generated from the URL, so you don’t have to worry about hosting. Editor is here.
I prefer this one, which automatically generates the link syntax to paste into a LW comment. There’s a short discussion of all this on the wiki.
Functions whose limits are +infinity and -infinity can be distinguished, so you’re good there.
I think it’s the same as my second: As long as the probability of humanity lasting forever is nonzero given both actions, and the difference in expected utilities far in the future is nonzero, nothing that happens in the first million billion years matters.
The difference in expected utility would have to decrease slowly enough (slower than exponential?) to not converge, not just be nonzero. [Which would be why exponential discounting “works”...]
However I would be surprised to see many decisions with that kind of lasting impact. The probability of an action having some effect at time t in the future “decays exponentially” with t (assuming p(Effect_t | Effect_{t-1}, Action) is approximately constant), so the difference in expected utility will in general fall off exponentially and therefore converge anyway. Exceptions would be choices where the utilities of the likely effects increase in magnitude (exponentially?) as t increases.
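A sketch of that decay argument (the constants c and M are assumptions for illustration): if p(Effect_t | Effect_{t-1}, Action) = c < 1 per unit time, then roughly p(Effect_t | Action) ≈ c^t, so if the utility at stake is bounded by M,

\[ |\Delta Eu(t)| \;\le\; M c^{t} \quad\Rightarrow\quad \int_0^\infty |\Delta Eu(t)|\, dt \;\le\; \frac{M}{\ln(1/c)} \;<\; \infty, \]

whereas a difference decaying only like 1/t (slower than any exponential) gives a divergent integral, which is the kind of lasting impact discussed in the previous comment.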
Anyway I don’t see infinities as an inherent problem under this scheme. In particular, if we don’t live forever, everything we do does indeed matter. If we do live forever, what we do still matters, except how it affects us might not, if we anticipate causing “permanent” gain by doing something.
Can’t think about the underlying idea right now due to headache, but instead of talking about any sort of limit, just say that it’s eventually positive, if that’s what you mean.
Bostrom would disagree with your conclusion that infinities are unproblematic for utilitarian ethics: http://www.nickbostrom.com/ethics/infinite.pdf