[Question] Potential infinite value

How do you incorporate potential infinite value into a utility function? If you assign some non-zero probability to the second law of thermodynamics being violated at some point in the future, how should that change how much you value a long future?
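To make the tension concrete, write out the naive expected-utility calculation (a sketch; $U_{\text{finite}}$ is a stand-in for the value of any ordinary, merely finite future):

$$\mathbb{E}[U] = p \cdot \infty + (1 - p)\, U_{\text{finite}} = \infty \quad \text{for any } p > 0.$$

Any nonzero credence $p$ in an infinite payoff swamps everything else in the calculation, no matter how small $p$ is.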
I agree with Nate's suggestion to recognize that utility is an imperfect model, and perhaps the wrong one to use in infinite situations. But you can also get a long way with some form of discounting and declining marginal value for any real, measurable thing.
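As a one-line sketch of why this works (the symbols here are mine, not the commenter's): with exponential discounting at factor $\gamma < 1$ and per-period value bounded by $u_{\max}$, even an unboundedly long future sums to a finite total,

$$\sum_{t=0}^{\infty} \gamma^{t} u_t \;\le\; \sum_{t=0}^{\infty} \gamma^{t} u_{\max} \;=\; \frac{u_{\max}}{1 - \gamma} \;<\; \infty.$$

Declining marginal value plays a similar role on the outcome side: doubling the size of the future less than doubles its value, which is also the standard resolution of the St. Petersburg paradox.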
Related: The St. Petersburg Paradox
One answer is not to try, and instead to treat infinite utility as a case where utility is a leaky abstraction. The concept of utility has descriptive value when modeling scenarios in which an agent chooses between actions that produce distinct outcomes, and in which the agent tends to choose some actions over others based on the outcomes it expects those actions to produce. In such scenarios, you can construct a utility function for the agent as a tool for modeling the agent's behavior. Utility, as a concept, is a prediction-making tool that abstracts away irrelevant features of the physical environment.
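As a minimal illustration of that last point (all names and data here are hypothetical, not from the post): given only an agent's observed pairwise choices, you can construct a numeric "utility" whose sole job is to predict future choices.

```python
# Minimal sketch: recover a utility function from observed pairwise
# choices, treating utility purely as a predictive summary of behavior.

observed_choices = [  # (chosen, rejected) pairs
    ("cake", "fruit"),
    ("fruit", "nothing"),
    ("cake", "nothing"),
]

def fit_utilities(choices):
    """Score each outcome by how often the agent chose it. Any
    order-preserving rescaling predicts equally well, which is the
    sense in which utility abstracts away irrelevant detail."""
    outcomes = {o for pair in choices for o in pair}
    utility = {o: 0 for o in outcomes}
    for chosen, _rejected in choices:
        utility[chosen] += 1
    return utility

utility = fit_utilities(observed_choices)

def predict_choice(a, b):
    """Predict which of two outcomes the agent will pick."""
    return a if utility[a] >= utility[b] else b

print(utility)  # e.g. {'cake': 2, 'fruit': 1, 'nothing': 0} (order may vary)
print(predict_choice("fruit", "cake"))  # cake
```

The "utility function" here is nothing over and above the choice data it summarizes; it earns its keep only by predicting behavior.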
Even in clearly defined decision-modeling problems, the abstraction of a utility function frequently gives imperfect results, due to phenomena such as cyclical preferences and hyperbolic discounting. But things get much worse when you consider infinities. What configuration of matter and energy could you point to and say, "that's an agent experiencing infinite utility"? An agent with finite size and finite duration cannot have an experience with infinite contents, much less exhibit a tendency toward those infinite contents in its decision-making. "Infinite utility" doesn't correspond to any conceivable state of affairs. At infinity, the concept of utility breaks down and stops being useful for world modeling.
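You can watch the abstraction break in a few lines (a toy illustration, using IEEE-754 floats as a stand-in for the extended reals):

```python
import math

def expected_utility(lottery):
    """Expected utility of a lottery given as (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

# A one-in-a-billion shot at "infinite utility" versus a near-certain one.
long_shot  = [(1e-9, math.inf), (1 - 1e-9, 0.0)]
sure_thing = [(1 - 1e-9, math.inf), (1e-9, 0.0)]

print(expected_utility(long_shot))   # inf
print(expected_utility(sure_thing))  # inf (the ranking has vanished)
print(math.inf - math.inf)           # nan (differences aren't even defined)
```

Once any outcome is infinite, every action with a nonzero chance of reaching it has the same "value," so the function can no longer do the one thing it exists to do: rank actions.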