However, if you and I have the seed of a super-intelligence in front of us, waiting only for us to specify a utility function and press the “start” button, and if each of us can specify what we want for the world in the form of a utility function, then it would prove easy for us to work around the first of the two gotchas you point out.
As for the second gotcha, if we were at all pressed for time, I’d go ahead with my normalization method on the theory that the probability of the sum’s turning out to be exactly zero is very low.
I am interested, however, in hearing from readers who are better at math than I: how can the normalization method be improved to remove the two gotchas?
ADDED. What I wrote so far in this comment fails to get at the heart of the matter. The purpose of a utility function is to encode preferences. Restricting our discourse to utility functions such that for every o in O, U(o) is a real number greater than zero and less than one does not restrict the kinds of preferences that can be encoded. And when we do that, every utility function in our universe of discourse can be normalized using the method already given—free from the two gotchas you pointed out. (In other words, instead of describing a gotcha-free method for normalizing arbitrary utility functions, I propose that we simply avoid defining certain utility functions that might trigger one of the gotchas.)
Specifically, if o_worst is the worst outcome according to the agent under discussion and o_best is its best outcome, set U(o_worst)=0, U(o_best)=1 and for every other outcome o, set U(o) = p where p is the probability for which the agent is indifferent between o and the lottery [p, o_best; 1-p, o_worst].
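For a utility function given as explicit numbers, the construction above amounts to an affine rescaling onto [0, 1]. Here is a minimal sketch of that rescaling (the function name and dict representation are mine, not from the original normalization method, which this comment does not spell out); when U is a vNM utility function, the rescaled value of an intermediate outcome equals the indifference probability p for the lottery [p, o_best; 1-p, o_worst]:

```python
def normalize(utilities):
    """Affinely rescale a utility function (dict: outcome -> real)
    so the worst outcome maps to 0 and the best maps to 1."""
    lo = min(utilities.values())
    hi = max(utilities.values())
    if hi == lo:
        # Degenerate case (one of the gotchas): the agent is
        # indifferent between all outcomes, so any constant
        # function encodes its preferences.
        return {o: 0.0 for o in utilities}
    return {o: (u - lo) / (hi - lo) for o, u in utilities.items()}

print(normalize({"a": -2.0, "b": 0.0, "c": 2.0}))
# {'a': 0.0, 'b': 0.5, 'c': 1.0}
```

Note that the degenerate branch is exactly why indifferent-between-everything agents need special handling: there is no best or worst outcome to pin to 1 and 0.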
Good catch.
That’s a nice workaround!