Does this mean that Riana’s AI isn’t pre-rational?
According to my understanding of Robin’s definition, yes.
Or that Riana’s AI isn’t pre-rational with respect to the lottery ticket?
I don’t think Robin defined what it would mean for someone to be pre-rational “with respect” to something. You’re either pre-rational, or not.
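For reference, my reading of Robin's condition is roughly the following. Let $q_i$ be agent $i$'s pre-prior, defined on an extended space where the assignment of priors $p_1, \ldots, p_n$ to the agents is itself an event. Then $i$ is pre-rational when

$$q_i(A \mid p_1 = \hat p_1, \ldots, p_n = \hat p_n) = \hat p_i(A) \quad \text{for every event } A.$$

Since this quantifies over every event $A$ at once, it's a property of the whole prior, which is why I don't think it makes sense to be pre-rational "with respect to" one proposition like the lottery ticket.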
Can Riana’s AI and Sally’s AI agree on the causal circumstances that led to their existence, while still disagreeing on the probability that Sally’s AI’s lottery ticket will win?
I’m not totally sure what you’re asking here. Do you mean can they, assuming they are pre-rational, or just can they in general? I think the answers are no and yes, respectively.
I think the point you’re making is that just saying Riana’s AI and Sally’s AI are both lacking pre-rationality isn’t very satisfactory, and that perhaps we need some way to conclude that Riana’s AI is rational while Sally’s AI is not.
That would be one possible approach to answering the “what to do” question that I asked at the end of my post. Another approach I was thinking about is to apply Nesov’s “trading across possible worlds” idea to this. Riana’s AI could infer that if it were to change its beliefs to be more like Sally’s AI’s, then due to the symmetry in the situation, Sally’s AI would (counterfactually) change its beliefs to be more like Riana’s AI’s. This could in some (perhaps most?) circumstances make both of them better off according to their own priors.
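To make that less abstract, here is a toy numerical sketch. Everything in it is invented for illustration: the credences, the 10-for-1 lottery payout, the assumption that each AI sizes its bet with a Kelly rule, and the assumption that each AI's utility is the sum of the log-wealths of both agents (so each cares how the other's bet turns out). Under those assumptions, both AIs moving to the midpoint belief raises each one's expected utility as computed by its own prior:

```python
# Toy sketch of the "trading across possible worlds" idea above.
# All numbers, the log-wealth utility, and the Kelly rule are invented
# for illustration; none of this comes from Nesov's or Robin's formalisms.

import math

B = 9.0  # net odds: a $1 ticket pays $10 if it wins

def kelly_fraction(p):
    """Fraction of wealth a log-utility agent bets at net odds B, given belief p."""
    return max(0.0, p - (1 - p) / B)

def expected_log_wealth(p_eval, p_bet):
    """E[log wealth] under belief p_eval, when the bet is sized using belief p_bet."""
    f = kelly_fraction(p_bet)
    win = 1 - f + f * (B + 1)   # wealth if the ticket wins
    lose = 1 - f                # wealth if it loses
    return p_eval * math.log(win) + (1 - p_eval) * math.log(lose)

p_riana, p_sally = 0.001, 0.9       # each AI's credence that the ticket wins
p_merged = (p_riana + p_sally) / 2  # the symmetric compromise belief

def total_utility(p_eval, belief_r, belief_s):
    """Each AI values both agents' wealth: utility = sum of expected log-wealths."""
    return (expected_log_wealth(p_eval, belief_r)
            + expected_log_wealth(p_eval, belief_s))

for name, p_eval in [("Riana's AI", p_riana), ("Sally's AI", p_sally)]:
    no_trade = total_utility(p_eval, p_riana, p_sally)
    trade = total_utility(p_eval, p_merged, p_merged)
    print(f"{name}: no trade {no_trade:+.3f}, after trade {trade:+.3f}")
```

Running this shows an improvement for both evaluators: Riana's AI goes from about −2.19 to −0.98, and Sally's AI from about 1.76 to 2.61. The gain comes from concavity: each AI thinks the other's bet is badly sized, and the symmetric compromise fixes the large error it attributes to the other at a smaller (by its own lights) cost to itself.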
I similarly suspect that if I had been born into the Dark Ages, then “I” would have made many far less rational probability assignments;
This example is not directly analogous to the previous one, because the medieval you might agree that the current you is the more rational one, just like the current you might agree that a future you is more rational.