I can think of an infinite-utility scenario. Say the AI figures out a way to run arbitrarily powerful computations in constant time. Say its utility function is over the survival and happiness of humans. Say it runs an infinite loop (in constant time) consisting of a formal system containing implementations of human minds, which it can prove will have some minimum happiness, forever. It can then make predictions about its utility a thousand years from now just as accurately as ones about a billion years from now, or n years from now, for any finite n. Summing the future utility of the choice to turn on the computer, from zero to infinity, gives an infinite result. Contrived, I know, but the point stands.
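To make the divergence explicit, here's a sketch under the stated assumptions: write $u_t$ for the utility at period $t$, and suppose the proof guarantees $u_t \ge u_{\min} > 0$ for all $t$ (the "minimum happiness" bound), with no time discounting:

```latex
\sum_{t=0}^{\infty} u_t \;\ge\; \sum_{t=0}^{\infty} u_{\min} \;=\; \infty
\qquad \text{since } u_{\min} > 0.
```

Note this only goes through for an undiscounted sum; with exponential discounting at rate $\gamma < 1$ the same bound would give a finite value, $\sum_t \gamma^t u_{\min} = u_{\min}/(1-\gamma)$, so the scenario really does hinge on the agent valuing all future periods equally.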