Given that I originally failed to understand VNM, I doubt I’m out of anyone’s league. I’m just saying that I have a good enough general background that if a commonly held assumption seems to lead to ridiculous conclusions, and there is a simple way, with sound technical justifications, to avoid those conclusions, but that involves rejecting the assumption, I am willing to reject the assumption rather than assuming that my reasoning is incorrect. This might be a bad idea, but as long as I post my rejection so that everyone can tell me why I’m stupid, it seems to work reasonably well.
Also, I certainly don’t have a solid handle on how AGI should work, but I can see the different relevant components and the progress that people seem to be making on a couple of the fronts.
But to answer your question, the bound is on min |u(x)-u(y)| / max |u(a)-u(b)|, where the min is over alternatives x and y that you would consider sufficiently different to be worth distinguishing. This gets around the fact that u is only defined up to affine transformations, since ratios of differences are invariant under affine transformations. The point is basically that if p is very small and x and y are alternatives that are different enough that I would take (1-p)x+pa over (1-p)y+pb for ANY possible a and b (even very bad a and very good b), then by expected utility maximization I must have (1-p)u(x)+pu(a) > (1-p)u(y)+pu(b) for all a and b. Algebraic manipulation of this gives a bound on p/(1-p), which is basically p for small p, and then finding the optimum bound over x, y, a, and b gives the result claimed at the beginning of this paragraph.
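The algebraic manipulation alluded to above can be spelled out. Assuming u(b) > u(a) (so the division is valid), the expected-utility inequality rearranges as:

```latex
(1-p)\,u(x) + p\,u(a) > (1-p)\,u(y) + p\,u(b)
\;\iff\; (1-p)\bigl(u(x)-u(y)\bigr) > p\bigl(u(b)-u(a)\bigr)
\;\iff\; \frac{p}{1-p} < \frac{u(x)-u(y)}{u(b)-u(a)}.
```

Taking the worst case over a and b (a as bad as possible, b as good as possible) and the best case over distinguishable x and y gives p/(1−p) < min |u(x)−u(y)| / max |u(b)−u(a)|, which, turned around, says max |u(a)−u(b)| < ((1−p)/p) · min |u(x)−u(y)|, i.e. a finite bound on the range of u.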
The point is basically that if p is very small and x and y are alternatives that are different enough that I would take (1-p)x+pa over (1-p)y+pb for ANY possible a and b (even very bad a and very good b)
Hold on, aren’t you assuming your conclusion here? Unless the utility function is bounded already, a and b can be arbitrarily large values, and can always be made large enough to alter the inequality, regardless of the values of x, y, and p. That is, there are no possible alternatives x and y for which your statement holds.
My claim is that IF we don’t care about probabilities that are smaller than p (and I would claim that we shouldn’t care about probabilities less than, say, 1 in 3^^^3), then the utility function is bounded. This is because if we don’t care about probabilities smaller than p, then for any two objects x and y that are sufficiently different that we DO care about one over the other, we must have
(1-p)x+pa > (1-p)y+pb,
no matter how bad a is and how good b is. I am being somewhat sloppy in my language here. Probably differences in outcome can vary continuously, so I can always find x and y that are so similar to each other that I do start caring about other equally irrelevant differences in the distribution of outcomes. For instance, I would always choose spending 5 minutes of time and having a 10^-100 chance of being tortured for 10^1000 years over spending 10 minutes of time. But 5 minutes + chance of torture probably isn’t superior to 5+10^-1000 minutes. What I really mean is that, if x and y are sufficiently different that probabilities smaller than p just don’t come into the picture, then I can bound the range of my utility function in terms of |u(x)-u(y)| and p.
So just to be clear, my claim is that [not caring about small probabilities] implies [utility function is bounded]. This I can prove mathematically.
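A quick numeric sanity check of the implication (a sketch with made-up numbers, not part of the proof): normalizing u(y) = 0 and u(x) = d, with a bounded utility of total range R, the mixed-lottery preference holds against the worst possible a and best possible b exactly when p/(1−p) < d/R, so the range must satisfy R < d(1−p)/p.

```python
# Sanity check: with preference gap d = u(x) - u(y) > 0 and utility
# range R, does (1-p)u(x) + p*u(a) > (1-p)u(y) + p*u(b) hold even for
# the worst a and best b? Algebraically, iff R < d*(1-p)/p.

def lottery_holds_for_all(p, d, R):
    """Check the preference against the worst case: a at the bottom
    of the utility range, b at the top (so u(b) - u(a) = R)."""
    u_x, u_y = d, 0.0          # normalize u(y) = 0, u(x) = d
    u_a, u_b = -R / 2, R / 2   # worst possible a, best possible b
    return (1 - p) * u_x + p * u_a > (1 - p) * u_y + p * u_b

def range_bound(p, d):
    """Largest utility range compatible with the preference."""
    return d * (1 - p) / p

p, d = 1e-6, 1.0
R_max = range_bound(p, d)
print(lottery_holds_for_all(p, d, 0.9 * R_max))  # True: range within bound
print(lottery_holds_for_all(p, d, 1.1 * R_max))  # False: range too large
```

So ignoring probabilities below p forces the utility range to be at most about 1/p times the smallest difference worth caring about, exactly as claimed.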
The other part of my claim [which I can’t prove mathematically] is that it doesn’t make sense to care about small probabilities. Of course, you can care about small probabilities while still having consistent preferences (heck, just pick some quantity that doesn’t have an obvious upper bound and maximize its expected value). But I would have a hard time believing that that is the utility function corresponding to your true set of preferences.