I don’t think it’s linear in the average Joe story, either; if there’s one threshold level of belief which changes his behavior, then utility is constant for levels of belief on either side of that threshold and jumps discontinuously at the threshold itself.
A rational agent can have its behavior depend on a threshold crossing of belief, but if there’s some belief that grants it utility in itself (e.g. Joe likes to believe he is attractive), the utility it gains from that belief has to be linear in the level of belief. Otherwise, Joe can get Dutch-booked by a Monte Carlo plastic surgeon.
Otherwise, Joe can get Dutch-booked by a Monte Carlo plastic surgeon.
This doesn’t sound right. Could you describe the Dutch-booking procedure explicitly? Assume that believing P with probability p gives me utility U(p)=p^2+C.
An additive constant seems meaningless here: if Joe gets C utilons no matter what p is, then those utilons are unrelated to p or to P—Joe’s behavior should be identical if U(p)=p^2, so for simplicity I’ll ignore the C.
Now, suppose Joe currently believes he is not attractive. A surgery has a .5 chance of making him attractive and a .5 chance of doing nothing. This surgery is worth U(.5)-U(0)=.25 utilons to Joe; he’ll pay up to that amount for it.
Suppose instead the surgeon promises to try again, once, if the first surgery fails. Then Joe’s overall chance of becoming attractive is .75, so he’ll pay U(.75)-U(0)=.75^2=0.5625 for the deal.
Suppose Joe has taken the first deal, and the surgeon offers to upgrade it to the second. Joe is willing to pay up to the difference in prices for the upgrade, so he’ll pay .5625-.25=.3125 for the upgrade.
Joe buys the upgrade. The surgeon performs the first surgery. Joe wakes up and learns that the surgery failed. Joe is entitled to a second surgery, thanks to that .3125-utility purchase of the upgrade. But the second surgery is now worth only .25 utility to him! The surgeon offers to buy that second surgery back from him at a cost of .26 utility. Joe accepts. Joe has spent a net of .0525 utility on an upgrade that gave him no benefit.
As a sanity check, let’s look at how it would go if Joe’s U(p)=p. The single surgery is worth .5. The double surgery is worth .75. Joe will pay up to .25 utility for the upgrade. After the first surgery fails, the upgrade is worth .5 utility. Joe does not regret his purchase.
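(A minimal sketch of the arithmetic above, assuming the two belief-utility functions under discussion, U(p)=p^2 and U(p)=p, and the .26 buy-back price from the story. As the reply below points out, these prices implicitly treat the surgery as if Joe never learns whether it worked.)

```python
# Sketch of the Dutch-book arithmetic above, for U(p) = p^2 versus U(p) = p.
# Prices are in utilons; 0.26 is the surgeon's buy-back offer from the story.

def dutch_book(U, buy_back=0.26):
    single = U(0.5) - U(0.0)       # one surgery, outcome never revealed to Joe
    double = U(0.75) - U(0.0)      # the try-again deal
    upgrade = double - single      # the most Joe will pay to upgrade
    # The first surgery is performed and Joe learns it failed; the already-paid-for
    # second surgery is now worth U(0.5) - U(0.0) to him again.
    remaining = U(0.5) - U(0.0)
    sells = remaining < buy_back   # Joe sells the second surgery back iff it is worth less
    net_paid = upgrade - (buy_back if sells else remaining)
    return single, double, upgrade, remaining, sells, net_paid

print(dutch_book(lambda p: p ** 2))  # upgrade costs 0.3125, sold back for 0.26: net loss ~0.0525
print(dutch_book(lambda p: p))       # upgrade costs 0.25, second surgery still worth 0.5: no regret
```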
You’re missing the fact that how much Joe values the surgery depends on whether or not he expects to be told whether it worked afterward. If Joe expects to have the surgery but to never find out whether or not it worked, then its value is U(0.5)-U(0)=0.25. On the other hand, if he expects to be told whether it worked or not, then he ends up with a belief-score of either 0 or 1, not 0.5, so its value is (0.5*U(1.0) + 0.5*U(0)) - U(0) = 0.5.
Suppose Joe is uncertain whether he’s attractive or not—he assigns it a probability of 1⁄3. Someone offers to tell him the true answer. If Joe’s utility-of-belief function is U(p)=p^2, then being told the answer is worth ((1/3)*U(1) + (2/3)*U(0)) - U(1/3) = ((1/3)*1 + (2/3)*0) - (1/9) = 2⁄9, so he takes the offer. If on the other hand his utility-of-belief function were U(p)=sqrt(p), then being told the information would be worth ((1/3)*sqrt(1) + (2/3)*sqrt(0)) - sqrt(1/3) = −0.244, so he plugs his ears.
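(A quick check of those two numbers, assuming only the 1⁄3 prior and the two candidate functions above; the helper name is mine.)

```python
import math

# Value (in utilons) of being told whether P is true, starting from p(P) = 1/3.

def value_of_being_told(U, p=1/3):
    told = p * U(1.0) + (1 - p) * U(0.0)  # expected utility-of-belief after hearing the answer
    return told - U(p)                    # minus the utility of staying at the prior

print(value_of_being_told(lambda x: x ** 2))  # ~ +0.222 (2/9): Joe takes the offer
print(value_of_being_told(math.sqrt))         # ~ -0.244: Joe plugs his ears
```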
Okay, here we go. I’ve possibly reinvented the wheel here, but maybe I’ve come up with a simple, original result. That’d be cool. Or I’m interestingly wrong.
We wish to show that superlinear utility-of-belief functions, or equivalently ones that would cause an agent to prefer ignorance, lead to inconsistency.
Suppose Joe equally wants to believe each of two propositions, P and Q, to be true, with U(x) > x*U(1) for all probabilities 0 < x < 1, and U(x) strictly increasing with x. Without loss of generality, we set U(0) to 0 and U(1) to 1. Both propositions concern events that will invisibly occur at some known future time.
Joe anticipates that he will eventually be given the following choice, which will completely determine P and Q:
Option 1: P xor Q. Joe won’t know which one is true, so he believes each of them is true with probability 1⁄2. So he has U(1/2)+U(1/2)=2*U(1/2) utility. By assumption this is greater than 1. So let 2*U(1/2) − 1 = k.
Option 2: One proposition will become definitely true. The other will become true with probability p, where p is chosen to be greater than 0 but less than U-inverse(k). Joe will know which proposition is which. Joe’s utility would be less than U(1) + U(U-inverse(k)), or less than 1 + 2*U(1/2) − 1, or less than 2*U(1/2).
Joe prefers Option 1. Therefore he anticipates that he will choose Option 1. Therefore, his current utility is 2*U(1/2). But what if he anticipated that he would choose Option 2? Then his current utility would be 2*U(1/2+p/2). So he wishes his k were smaller than U-inverse(k), meaning he wishes his U(x) were closer to x*U(1). If he were to modify his utility function such that U’(x) = x*U(1) for all x, the new Joe would not regret this decision since it strictly increases his expected utility under the new function.
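(For concreteness, a numeric sketch of the quantities in this argument with one particular superlinear choice, U(x)=sqrt(x), which is my example rather than anything the argument requires; the 2*U(1/2+p/2) term is the average-of-1-and-p reading defended further down.)

```python
import math

U = math.sqrt                        # one superlinear example: U(x) > x*U(1) for 0 < x < 1
U_inv = lambda y: y ** 2             # inverse of sqrt on [0, 1]

k = 2 * U(0.5) - 1                   # 2*U(1/2) - 1                        ~ 0.414
p = 0.1                              # any p with 0 < p < U_inv(k)         ~ 0.172

option1 = 2 * U(0.5)                 # P xor Q, each believed at 1/2           ~ 1.414
option2_known = U(1.0) + U(p)        # Option 2 after learning which is which  ~ 1.316
option2_before = 2 * U(0.5 + p / 2)  # Option 2 before learning which is which ~ 1.483

# option1 > option2_known, so Joe will pick Option 1 when the choice arrives;
# but option2_before > option1, so anticipating Option 2 would have given him
# higher current utility -- the tension the argument turns on.
```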
Thus we can say that all superlinear utility functions are inherently unstable, in that an agent with U(x) > x*U(1) for all probabilities 0 < x < 1, and U(x) strictly increasing with x, may increase its expected U by modifying to U’(x) = x*U(1) for all x.
The strongest possible constraint we can give for inherent stability of a utility-of-belief function is that, with utility-of-belief function U, an agent can never improve its U-utility by switching to any other utility function, except in cases where it anticipates being modeled by an outside entity. If we removed this exception, no non-degenerate utility-of-belief function could be called stable, because we could always posit an outside entity that punishes agents modeled to have specific utility functions. The linear utility-of-belief function satisfies this condition: since it behaves identically whether it is maximizing the probability of P or maximizing U(p(P)), it always anticipates itself maximizing its own utility function. We have just shown that no superlinear function satisfies this constraint.
But by conservation of expected evidence, no agent with a linear or sublinear utility-of-belief function can increase its expected utility-of-belief by hiding evidence from itself.
Therefore, a rational agent with a stable utility function cannot make itself happier by hiding evidence from itself, unless it is being modeled by an outside entity.
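(A small illustrative check of that last step. The evidence model below, a 1⁄3 prior with likelihoods 0.8 and 0.3, is made up for the illustration: the expected posterior equals the prior, so a linear U is indifferent to looking, a sublinear U can only gain from it, and only a superlinear U would rather not look.)

```python
# One made-up evidence model (numbers are illustrative, not from the text):
# prior p(P) = 1/3; a test says "yes" with probability 0.8 if P, 0.3 if not-P.

prior = 1 / 3
p_yes = prior * 0.8 + (1 - prior) * 0.3       # total probability of hearing "yes"
post_yes = prior * 0.8 / p_yes                # posterior after "yes"
post_no = prior * 0.2 / (1 - p_yes)           # posterior after "no"

# Conservation of expected evidence: the expected posterior equals the prior.
assert abs(p_yes * post_yes + (1 - p_yes) * post_no - prior) < 1e-12

for name, U in [("linear", lambda x: x),
                ("sublinear, p^2", lambda x: x ** 2),
                ("superlinear, sqrt", lambda x: x ** 0.5)]:
    looked = p_yes * U(post_yes) + (1 - p_yes) * U(post_no)
    print(name, looked - U(prior))
# linear:       0.0    -> indifferent to looking, so hiding evidence gains nothing
# sublinear:   +0.05   -> looking helps; hiding evidence can only hurt
# superlinear: -0.036  -> only here does hiding evidence look attractive
```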
Thanks for taking the time to try puzzling this out, but I suspect it’s just interestingly wrong. The magic seems to be happening in this paragraph:
Joe prefers Option 1. Therefore he anticipates that he will choose Option 1. Therefore, his current utility is 2*U(1/2). But what if he anticipated that he would choose Option 2? Then his current utility would be 2*U(1/2+p/2). So he wishes his k were smaller than U-inverse(k), meaning he wishes his U(x) were closer to x*U(1). If he were to modify his utility function such that U’(x) = x*U(1) for all x, the new Joe would not regret this decision since it strictly increases his expected utility under the new function.
I don’t see where U(1/2+p/2) comes from; should that be U(1)+U(p)? I’m also not sure it’s possible for the agent to anticipate choosing option 2, given the information it has. Finally, what does it matter whether a change increases expected utility under the new function? It’s only utility under the old function that matters—changing utility function to almost anything maximizes the new function, including degenerate utility functions like number of paperclips.
Joe doesn’t know yet which proposition would get 1 and which would get p, so he assigns the average to both. He anticipates learning which is which, at which point it would change to 1 and p.
I’m also not sure it’s possible for the agent to anticipate choosing option 2, given the information it has.
Not sure what you mean here.
Finally, what does it matter whether a change increases expected utility under the new function?
It just shows the asymmetry. Joe can maximize U by changing into Joe-with-U’, but Joe-with-U’ can’t maximize U’ by changing back to U.
That’s interesting. The one problem I have is that it’s rather unclear when a belief is evaluated for the purposes of utility. Which is to say, does Joe care about his belief at time t=now, or t=now+delta, or over all time? It seems obvious that most utility functions that care only about the present moment would have to be dynamically inconsistent, whether or not they mention belief.
Thanks, that’s a good point. In fact, it’s possible we can reduce the whole thing to the observation that it matters when the utility-of-belief function is evaluated if and only if it’s nonlinear.
Apologies; I realize this is both not very clearly written, and full of holes when considered as a formal proof. I have a decent excuse in that I had to rush out the door to go to the HPMOR meetup right after writing it. Rereading it now, it still looks like a sketch of a compelling proof, so if neither jimrandomh nor any lurkers see any obvious problems, I’ll write it up as a longer paper, with more rigorous math and better explanations.
You’re missing the fact that how much Joe values the surgery depends on whether or not he expects to be told whether it worked afterward.
Good point.
If on the other hand his utility-of-belief function were U(p)=sqrt(p), then being told the information would be worth ((1/3)*sqrt(1) + (2/3)*sqrt(0)) - sqrt(1/3) = −0.244, so he plugs his ears.
I agree here.
But I still suspect that if your U(p) is anything other than linear in p, you can get Dutch-booked. I’ll try to come back with a proof, or at least an argument.
Did you ever end up writing it up? I think I’d follow more easily if you went a little slower and gave some concrete examples.