Wei, I understand the paper probably less well than you do, but I wanted to comment that p~, which you call r, is not what Robin calls a pre-prior. He uses the term pre-prior for what he calls q. p~ is simply a prior over an expanded state space created by taking into consideration all possible prior assignments. Now equation 2, the rationality condition, says that q must equal p~ (at least for some calculations), so maybe it all comes out to the same thing.
Equation 1 defines p~ in terms of the conventional prior p. Suppressing the index i, since we have only one agent in this example, it says that p~(E|p) = p(E). The only relevant event E is A=heads, and p stands for the prior assignment. So we get two instances of this definition for p~:
p~(A=heads | p=O) = O(A=heads)
p~(A=heads | p=P) = P(A=heads)
The first equals 0.6 and the second equals 0.4.
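If it helps, here is a small Python sketch of that expanded state space, using the 0.6 and 0.4 from the example and an arbitrary 50/50 weight on the two prior assignments (that weight is my own choice for illustration; the conditionals below don't depend on it):

```python
# Expanded state space: (prior assignment, coin outcome).
# Numbers from the example: O(heads) = 0.6, P(heads) = 0.4.
# The 0.5/0.5 weight on the prior assignments is an arbitrary
# illustrative choice; the conditionals do not depend on it.
O = {"heads": 0.6, "tails": 0.4}
P = {"heads": 0.4, "tails": 0.6}
weight = {"O": 0.5, "P": 0.5}

# Equation 1 (index i suppressed): p~(E | p) = p(E), so the joint is
# p~(p, A) = weight(p) * p(A).
p_tilde = {(name, a): weight[name] * prior[a]
           for name, prior in (("O", O), ("P", P))
           for a in ("heads", "tails")}

def cond_heads(name):
    """p~(A=heads | p=name), computed from the joint."""
    marginal = sum(v for (n, _), v in p_tilde.items() if n == name)
    return p_tilde[(name, "heads")] / marginal

print(cond_heads("O"))  # 0.6 = O(A=heads)
print(cond_heads("P"))  # 0.4 = P(A=heads)
```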
Then the rationality condition, equation 2, says
q(E | p) = p~(E | p)
and from this, your equations follow, with r substituted for q:
q(A=heads | p=O) = p~(A=heads | p=O) = O(A=heads)
q(A=heads | p=P) = p~(A=heads | p=P) = P(A=heads)
As you conclude, there is no way to satisfy these equations under the assumption you have made on q, namely that the A event and the p-assigning events are independent: independence forces q(A=heads | p=O) = q(A=heads | p=P) = q(A=heads), a single value, but the right-hand sides are 0.6 and 0.4 respectively.
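To make the conflict explicit, here is a minimal check, assuming (as above) that independence collapses both conditionals to one marginal q(A=heads):

```python
# Under independence, both conditionals equal the single marginal
# q(A=heads), so the rationality condition would need one number x
# with x = 0.6 and x = 0.4 at the same time.
targets = {"p=O": 0.6, "p=P": 0.4}  # right-hand sides from equation 2

def satisfiable(targets):
    """True iff a single marginal q(A=heads) can match every conditional."""
    return len(set(targets.values())) == 1

print(satisfiable(targets))  # False: 0.6 != 0.4, so no consistent q exists
```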
I think you’re right that the descriptive (as opposed to prescriptive) result in this case demonstrates that the programmer was irrational. Indeed it doesn’t make sense to program his AI that way, not if he wants it to “track truth”.
Yes, it looks like I got a bit confused about the notation. Thanks for the correction, and for showing how the mathematical formalism works in detail.