I agree: not constraining the ri to be non-negative is absurd. I apologize if I wasn’t clear about that.
This fixes the problem, except that [...]
Or, to put it differently, it doesn’t fix the problem :-). Roughly speaking, avg log = log(n!)/n ≈ log n − 1 (by Stirling), so the point at which we start getting negative r’s is somewhere near r = alpha·n.
If we clamp all the r’s beyond rk to zero then the optimum r-vector has the same formula as before but with k instead of n everywhere. The resulting utility is obviously an increasing function of k (because it’s an optimum over a space that grows with k), so the best we can do is to choose the biggest k that makes rk non-negative; that is, that makes r/k + alpha·(avg log − log k) non-negative; that is, that makes k^k/k! ≤ exp(r/alpha). Since k^k/k! is fairly close to exp(k), this says very roughly that k ≤ r/alpha.
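If it helps, here is a quick sanity check of that recipe (a sketch in Python; the allocation ri = r/k + alpha·(avg log − log i) is the clamped optimum described above, and r = 100, alpha = 3 are just arbitrary test values):

```python
import math

def largest_k(r, alpha):
    """Largest k with k*log(k) - log(k!) <= r/alpha, i.e. k^k/k! <= exp(r/alpha).

    This is the non-negativity condition on rk derived above;
    lgamma(k+1) = log(k!) keeps everything in log-space.
    """
    k = 1
    while (k + 1) * math.log(k + 1) - math.lgamma(k + 2) <= r / alpha:
        k += 1
    return k

def allocation(r, alpha, k):
    """Clamped optimum: ri = r/k + alpha*(avg log - log i) for i <= k, zero beyond."""
    avg_log = math.lgamma(k + 1) / k  # (1/k) * sum of log i for i = 1..k
    return [r / k + alpha * (avg_log - math.log(i)) for i in range(1, k + 1)]

r, alpha = 100.0, 3.0                 # arbitrary test values
k = largest_k(r, alpha)
rs = allocation(r, alpha, k)
print(k, r / alpha)                   # k lands near r/alpha, as predicted
print(min(rs) >= -1e-9)               # the smallest entry (rk) is still non-negative
print(abs(sum(rs) - r) < 1e-9)        # the budget constraint sum(ri) = r holds
```

The reported k does come out within a small factor of r/alpha, as the k^k/k! ≈ exp(k) approximation predicts.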
Or, to put it differently, it doesn’t fix the problem :-).
Exactly.
If we clamp all the r’s beyond rk to zero …
Intuitively, this does seem to be the right sort of approach—either you bound everything, with a maximum utility and a minimum probability, or you bound nothing.
Intuitively, this does seem to be the right sort of approach
It’s provably the right approach.
Let the allocation I described (with whatever choice of k optimizes the result) be R. Suppose it isn’t globally optimal, and let R’ be strictly better. R’ may have infinitely many nonzero rj, but can in any case be approximated arbitrarily closely by an R″ with only finitely many nonzero rj; do so, closely enough that R″ is still strictly better than R. Well, having only finitely many nonzero rj, R″ is no better than one of my candidates and so in particular isn’t better than R, contradiction.
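In symbols, the argument is just this (a sketch; U is the objective being maximized and R^(k) denotes the candidate with exactly k nonzero entries):

```latex
\begin{align*}
&\text{Let } R \in \arg\max_k U(R^{(k)}), \text{ and suppose } U(R') > U(R) \text{ for some feasible } R'.\\
&\text{Truncate } R' \text{ far enough out to get a finitely supported } R'' \text{ with still } U(R'') > U(R).\\
&\text{But } R'', \text{ being finitely supported, satisfies } U(R'') \le \max_k U(R^{(k)}) = U(R),\\
&\text{a contradiction.}
\end{align*}
```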
I wasn’t doubting your math; I was doubting the underlying assumption of a bounded utility function.
Of course, if we want to get technical, a finite computer can’t store an infinite number of models of chocolate anyway.
I can defend that assumption: It is impossible for an expected utility maximizer to have an unbounded utility function, given only the assumption that the space of lotteries is complete. http://lesswrong.com/lw/gr6/vnm_agents_and_lotteries_involving_an_infinite/
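The core construction behind results of that kind goes roughly like this (a sketch, not the linked post’s exact statement):

```latex
\begin{align*}
&u \text{ unbounded} \;\Longrightarrow\; \exists\, x_1, x_2, \dots \text{ with } u(x_i) \ge 2^i.\\
&\text{A complete lottery space contains } L = \textstyle\sum_i 2^{-i}\, x_i, \text{ yet}\\
&\mathbb{E}[u(L)] = \textstyle\sum_i 2^{-i}\, u(x_i) \;\ge\; \textstyle\sum_i 1 \;=\; \infty,
\end{align*}
```

so L cannot be assigned any finite expected utility, and the expected-utility representation breaks down.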
Oh, I see. OK.
a finite computer can’t store an infinite number of models [...]
For sure. Nor, indeed, can our finite brains. (This is one reason why our actual utility functions, in so far as we have them, probably are bounded. Of course that isn’t a good reason to use bounded utility functions in theoretical analyses unless all we’re hoping to do is to understand the behaviour of a single human brain.)