Or, to put it differently, it doesn’t fix the problem :-).
Exactly.
If we clamp all the r's beyond r_k to zero …
Intuitively, this does seem to be the right sort of approach—either you bound everything, with a maximum utility and a minimum probability, or you bound nothing.
Intuitively, this does seem to be the right sort of approach
It’s provably the right approach.
Let the allocation I described (with whatever choice of k optimizes the result) be R. Suppose it isn't globally optimal, and let R′ be strictly better. R′ may have infinitely many nonzero r_j, but can in any case be approximated arbitrarily closely by an R″ with only finitely many nonzero r_j; do so, closely enough that R″ is still strictly better than R. Well, having only finitely many nonzero r_j, R″ is no better than one of my candidates and so in particular isn't better than R, contradiction.
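As a toy illustration of the truncation step (the utilities and allocation below are hypothetical stand-ins, not the actual setup under discussion): clamping everything beyond r_k to zero costs an amount that shrinks to zero as k grows, which is why a strictly-better R′ yields a strictly-better finitely-supported R″.

```python
def value(r, u):
    """Expected utility of an allocation r over outcomes with utilities u."""
    return sum(ri * ui for ri, ui in zip(r, u))

def truncate(r, k):
    """Clamp every r_j beyond r_k to zero (the finitely-supported R'')."""
    return [ri if j <= k else 0.0 for j, ri in enumerate(r)]

# Hypothetical data: bounded utilities u_j < 1 and a geometric allocation r_j.
u = [1 - 2.0 ** -(j + 1) for j in range(50)]
r = [2.0 ** -(j + 1) for j in range(50)]

full = value(r, u)
# The value lost to truncation shrinks toward zero as k grows, so any
# strictly-better allocation can be approximated, closely enough to stay
# strictly better, by one with only finitely many nonzero r_j.
gaps = [full - value(truncate(r, k), u) for k in (5, 10, 20, 40)]
assert all(g >= 0 for g in gaps)
assert gaps == sorted(gaps, reverse=True)  # the gap shrinks as k grows
```

This only works because the u_j here are bounded; with unbounded utilities the tail need not be negligible, which is the point of the thread.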
a finite computer can’t store an infinite number of models [...]
For sure. Nor, indeed, can our finite brains. (This is one reason why our actual utility functions, in so far as we have them, probably are bounded. Of course that isn’t a good reason to use bounded utility functions in theoretical analyses unless all we’re hoping to do is to understand the behaviour of a single human brain.)
I wasn’t doubting your math; I was doubting the underlying assumption of a bounded utility function.
Of course, if we want to get technical, a finite computer can’t store an infinite number of models of chocolate anyway.
I can defend that assumption: It is impossible for an expected utility maximizer to have an unbounded utility function, given only the assumption that the space of lotteries is complete. http://lesswrong.com/lw/gr6/vnm_agents_and_lotteries_involving_an_infinite/
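The linked post has the full argument; as a toy illustration (the numbers are hypothetical), an unbounded utility function admits a St. Petersburg-style lottery whose expected utility diverges, so no finite value can represent it for an expected utility maximizer.

```python
def partial_expected_utility(n):
    """Partial sums of E[U] for a hypothetical lottery with p_j = 2^-j
    and unbounded utilities u_j = 2^j: each outcome contributes 1 utile."""
    return sum((2.0 ** -j) * (2.0 ** j) for j in range(1, n + 1))

# The partial sums grow without bound instead of converging:
print([partial_expected_utility(n) for n in (10, 100, 1000)])
# [10.0, 100.0, 1000.0]
```

With utilities bounded above, the same construction converges, which is the sense in which boundedness rescues expected utility over a complete space of lotteries.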
Oh, I see. OK.