One answer is to not try, and to instead treat infinite utility as a case in which utility is a leaky abstraction. The concept of utility has descriptive value when modeling scenarios in which an agent chooses between actions that produce distinct outcomes, and tends to choose some actions over others based on the outcomes it expects those actions to produce. In such scenarios, you can construct a utility function for the agent as a tool for modeling its behavior. Utility, as a concept, is a prediction-making tool that abstracts away irrelevant features of the physical environment.
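To make that concrete, here is a minimal sketch of what “constructing a utility function from behavior” can mean. Everything here is illustrative and hypothetical (the function name, the input format): given an agent’s observed pairwise choices, we assign numbers that respect the observed ordering, assuming such an ordering exists.

```python
# A minimal, illustrative sketch (names hypothetical): building a utility
# function from an agent's observed pairwise choices. Any numbering that
# respects the observed ordering predicts the agent's behavior equally well.

def utility_from_choices(choices):
    """choices: iterable of (preferred, dispreferred) outcome pairs.
    Returns a dict assigning each outcome a number such that preferred
    outcomes score higher, or None if the observed preferences are
    cyclic and no such assignment exists."""
    beats = {}  # outcome -> set of outcomes it was chosen over
    for better, worse in choices:
        beats.setdefault(better, set()).add(worse)
        beats.setdefault(worse, set())

    utility, in_progress = {}, set()

    def score(outcome):
        if outcome in utility:
            return utility[outcome]
        if outcome in in_progress:  # a preference cycle: the abstraction leaks
            raise ValueError("cyclic preferences")
        in_progress.add(outcome)
        utility[outcome] = 1 + max((score(w) for w in beats[outcome]), default=0)
        in_progress.discard(outcome)
        return utility[outcome]

    try:
        for outcome in beats:
            score(outcome)
    except ValueError:
        return None
    return utility

u = utility_from_choices([("cake", "fruit"), ("fruit", "nothing")])
print(u["cake"] > u["fruit"] > u["nothing"])  # True: the ordering is recovered
```

Note that any order-preserving relabeling of these numbers would model the agent just as well, which is part of what makes utility an abstraction rather than a physical quantity.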
Even in clearly defined decision-modeling problems, the abstraction of a utility function frequently gives imperfect results, thanks to phenomena such as cyclical preferences and hyperbolic discounting. But things get much worse when you consider infinities. What configuration of matter and energy could you point to and say, “that’s an agent experiencing infinite utility”? An agent of finite size and finite duration cannot have an experience with infinite content, much less exhibit a tendency toward that infinite content in its decision-making. “Infinite utility” doesn’t correspond to any conceivable state of affairs. At infinity, the concept of utility breaks down and stops being useful for modeling the world.
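A small toy illustration (the lottery setup is my own, not drawn from any particular formalism) shows how the machinery degenerates once “infinite utility” is fed into an ordinary expected-value calculation:

```python
# An illustrative toy (setup mine): expected utility stops discriminating
# between options once an infinite utility enters the calculation.
import math

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

# Any nonzero chance at infinite utility swamps every finite alternative...
long_shot = [(1e-9, math.inf), (1 - 1e-9, 0.0)]
sure_thing = [(1.0, 1_000_000.0)]
print(expected_utility(long_shot))   # inf
print(expected_utility(sure_thing))  # 1000000.0

# ...and two such gambles become indistinguishable: a 50% shot at
# infinity "equals" a 90% shot, and their difference is undefined.
a = expected_utility([(0.5, math.inf), (0.5, 0.0)])
b = expected_utility([(0.9, math.inf), (0.1, 0.0)])
print(a == b)  # True
print(a - b)   # nan
```

The floating-point behavior mirrors the conceptual problem: once an outcome is “infinitely good,” the machinery that ranks actions by their expected outcomes has nothing left to compare.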