How about this:
If you have a set E = {X, Y, Z...} of possible actions, A (in E) is the utility-maximising action iff for all other B in E, the limit
$$\lim_{t \to \infty} \left( \int_0^t Eu(A, t')\,dt' - \int_0^t Eu(B, t')\,dt' \right)$$

is greater than zero, or approaches zero from the positive side. Caveat: I have no evidence this doesn’t implode in some way, perhaps by the limit being undefined. This is just a stupid idea to consider. A possibly equivalent formulation is
$$\exists T \; \forall t \, \left( t > T \implies \int_0^t Eu(A, t')\,dt' \geq \int_0^t Eu(B, t')\,dt' \right)$$

The inequality being greater-or-equal allows for two or more actions being equivalent, which is unlikely but possible.
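For concreteness, here’s a minimal numerical sketch of the “eventually greater-or-equal” reading, with made-up expected-utility curves standing in for the real Eu(A, ·) and Eu(B, ·):

```python
import numpy as np

# Made-up expected-utility densities for two actions; purely illustrative,
# not implied by anything in the comment above.
def Eu_A(t):
    return 1.0 / (1.0 + t)      # modest payoff that decays slowly

def Eu_B(t):
    return 2.0 * np.exp(-t)     # large early payoff that dies off quickly

# Cumulative expected utility up to each horizon, via the trapezoid rule.
t = np.linspace(0.0, 200.0, 20_001)

def cumulative(f):
    v = f(t)
    return np.concatenate(([0.0], np.cumsum((v[:-1] + v[1:]) / 2 * np.diff(t))))

cum_A, cum_B = cumulative(Eu_A), cumulative(Eu_B)

# "Eventually >=": find the last horizon at which B is still ahead of A.
behind = np.where(cum_A < cum_B)[0]
T = t[behind[-1]] if behind.size else 0.0
print(f"A's cumulative expected utility stays >= B's for every horizon past t ~ {T:.1f}")
```

With these particular curves B wins over any short horizon, but A overtakes it around t ≈ 6.4 and stays ahead, which is the situation the “there exists a T” formulation is meant to capture.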
Side comment: that math equation image generator you used is freakin’ excellent. The image itself is generated from the URL, so you don’t have to worry about hosting. Editor is here.
I prefer this one, which automatically generates the link syntax to paste into a LW comment. There’s a short discussion of all this on the wiki.
Functions whose limits are +infinity and -infinity can be distinguished, so you’re good there.
I think it’s the same as my second: As long as the probability, under both actions, of humanity lasting forever is nonzero, and the difference in expected utilities far in the future is nonzero, nothing that happens in the first million billion years matters.
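To spell out why the early period drops out (a sketch, under the added assumption that the difference in expected utility stays at or above some constant c > 0 from some time T₀ onward): for any t > T₀,

$$\int_0^t \big(Eu(A,t') - Eu(B,t')\big)\,dt' \;=\; \underbrace{\int_0^{T_0} \big(Eu(A,t') - Eu(B,t')\big)\,dt'}_{\text{finite, whatever happens early}} \;+\; \underbrace{\int_{T_0}^{t} \big(Eu(A,t') - Eu(B,t')\big)\,dt'}_{\;\ge\; c\,(t - T_0)}$$

so the tail term grows without bound and the sign of the limit is settled entirely out there, not by anything in the first million billion years.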
The difference in expected utility would have to decrease slowly enough (slower than exponential?) not to converge, not just be nonzero. [Which would be why exponential discounting “works”...]
However, I would be surprised to see many decisions with that kind of lasting impact. The probability of an action having some effect at time t in the future “decays exponentially” with t (assuming p(Effect_t | Effect_{t-1}, Action) is approximately constant and less than 1), so the difference in expected utility will in general fall off exponentially and therefore converge anyway. Exceptions would be choices where the utilities of the likely effects increase in magnitude (exponentially?) as t increases.
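A quick numerical sketch of that convergence point (the decay profiles here are invented for illustration, not derived from any particular decision):

```python
import numpy as np

# Two invented decay profiles for the difference in expected utility:
# exponential decay vs. a slower 1/(1+t) decay.
def diff_exponential(t):
    return np.exp(-0.01 * t)    # halves roughly every 69 time units

def diff_slow(t):
    return 1.0 / (1.0 + t)      # decays, but slower than any exponential

t = np.linspace(0.0, 1e6, 1_000_001)
dt = np.diff(t)

def integral_up_to_horizon(diff):
    v = diff(t)
    return np.sum((v[:-1] + v[1:]) / 2 * dt)   # trapezoid rule over [0, 1e6]

print("exponential decay :", integral_up_to_horizon(diff_exponential))  # ~100, has converged
print("1/(1+t) decay     :", integral_up_to_horizon(diff_slow))         # ~13.8, still growing with the horizon
```

The exponentially decaying difference adds up to a fixed total however far out you integrate, so the early contributions determine the comparison; the 1/(1+t) difference keeps accumulating (roughly the log of the horizon), which is the kind of decay that would be needed for the far future to dominate.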
Anyway, I don’t see infinities as an inherent problem under this scheme. In particular, if we don’t live forever, everything we do does indeed matter. If we do live forever, what we do does matter, except that how it affects us might not if we anticipate causing “permanent” gain by doing something.
Can’t think about the underlying idea right now due to headache, but instead of talking about any sort of limit, just say that it’s eventually positive, if that’s what you mean.