The paper is really useless. The entire methodology of requiring some non-zero computable lower bound on the probability that the function with a given Gödel number will turn out to be correct is deeply flawed. The real failure is the inability of a computable function to check whether two Gödel numbers code for the same function, not anything about utilities and probability. Similarly, insisting that the utilities be bounded below by a computable function on the Gödel numbers of the computable functions is unrealistic.
Note that one implicitly expects that if you consider longer and longer sequences of good events followed by nothing, the utility will continue to rise. They basically rule out all the reasonable unbounded utility functions by fiat, by requiring the infinite sequence of good events to have finite utility.
I mean, consider the following really simple model. At each time step I either receive a 1 or a 0 bit from the environment. The utility is the number of consecutive 1s that appear before the first 0. The probability measure is the standard coin-flip measure. Everything is nice and every Borel set of outcomes has a well-defined probability, but the utility function goes off to infinity and is in fact undefined on the infinite sequence of 1s.
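To make that concrete (a standard computation, not taken from the paper): the utility U equals n exactly when the first n bits are 1 and the (n+1)-th bit is 0, so P(U = n) = 2^(-(n+1)) for n = 0, 1, 2, ..., and hence E[U] = Σ_{n≥0} n · 2^(-(n+1)) = 1. The expected utility is finite and perfectly well-defined even though U is unbounded, and the one outcome on which U is undefined, the all-1s sequence, has probability zero.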
Awful paper, but it's hard for non-experts to see where it gets the model wrong.
The right analysis is simply that we want a utility function that is L1 integrable on the space of outcomes with respect to the probability measure. That is enough to get rid of Pascal’s mugging.
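Spelled out (my formulation, not the paper's): with outcome space Ω, probability measure P, and utility function U, the requirement is just E[|U|] = ∫_Ω |U(ω)| dP(ω) < ∞. That still allows unbounded utilities like the one in the coin-flip model above; it only rules out combinations where the promised payoffs grow faster than their probabilities shrink, which is exactly the pathology behind Pascal's mugging.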