Yes, you seem to understand the idea, at least as far as what you’ve written above. Remember that pretty much all of the ways we express what it is to be rational are via constraints, e.g., probabilities sum to one, beliefs update over time via conditionalization, and so on. If you once satisfied the constraints but find you no longer do, well, then the obvious plan is to move your beliefs back to what they would have been had you not made whatever errors led you to violate the constraints. In this case, if your pre-prior is consistent and reasonable, but doesn’t satisfy the pre-rationality condition relative to your prior, the obvious plan is to update your prior to be whatever the pre-rationality condition says it should be (treating the prior “P” as just a label).
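To be a bit more concrete, here is a rough sketch of what I have in mind (only a sketch; apart from your r for the pre-prior, the notation is ad hoc): write r for the pre-prior, P for the prior the agent actually has, and a for a possible assignment of priors. As I understand it, the pre-rationality condition asks for something like

    P(A | assignment = a) = r(A | assignment = a)   for every event A and every assignment a with r(assignment = a) > 0,

and the repair I’m suggesting is just to replace the prior you find yourself with by the pre-prior conditioned on how your priors actually came about:

    P_new(A) := r(A | the assignment that actually occurred).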
That doesn’t seem to work in the specific example I gave. If the “optimistic” AI updates its prior to be whatever the pre-rationality condition says it should be, it will just get back the same prior O, because according to its pre-prior (denoted r in my example), its actual prior O is just fine; the reason it’s not pre-rational is that in the counterfactual case where the B coin landed tails, it would have been assigned the prior P.
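Spelling that out in the notation of my example (and reading the pre-rationality condition, as I understand it, as requiring the prior and the pre-prior to agree conditional on every possible assignment, not just the actual one): the B coin landed heads and the AI was assigned O, and by construction its pre-prior endorses that, roughly

    O(A | B = heads) = r(A | B = heads)   for every event A,

so conditioning the pre-prior on the assignment that actually occurred just hands back (in effect) O. What fails is the other branch, where the assignment would have been P instead, i.e., something like

    O(A | B = tails) ≠ r(A | B = tails),

and a repair that only conditions on the actual assignment never touches that branch.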
Or am I misinterpreting your proposed solution? (ETA: Can you make your solution formal?)