As an example, I think it should be possible to learn to use a source of randomness in rock-paper-scissors against someone who can perfectly predict your decision procedure, but not the extra randomness.
In order to do that, though, you first have to think of doing it. (Seeing the move as 'randomness' might be hard; the framing 'I have information I don't think they have, and I don't think they can read minds, so they can't predict this' is more intuitive.) In practice, I don't think people conduct exploration like this.
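To make the rock-paper-scissors claim concrete, here is a minimal simulation sketch (my own illustration, not anything from the discussion above). The predictor is modeled as having a perfect copy of the agent's policy but no access to the agent's private random number generator: it re-runs the policy with its own independent randomness. Against a deterministic policy it therefore predicts perfectly and wins every round; against a policy that consults the private coin, its prediction is independent of the actual move and its edge vanishes.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
# COUNTER[m] is the move that beats m.
COUNTER = {loser: winner for winner, loser in BEATS.items()}

def score(a, b):
    """+1 if a beats b, -1 if b beats a, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def run(policy, rounds=30000):
    agent_rng = random.Random(1)      # private to the agent
    predictor_rng = random.Random(2)  # the predictor's own, independent randomness
    total = 0
    for t in range(rounds):
        move = policy(t, agent_rng)
        # Perfect model of the *policy*, but not of the agent's coin:
        predicted = policy(t, predictor_rng)
        total += score(move, COUNTER[predicted])
    return total / rounds

deterministic = lambda t, rng: MOVES[t % 3]   # predictable cycle
randomized = lambda t, rng: rng.choice(MOVES) # uses the private coin

print(run(deterministic))  # predictor counters every move: -1.0
print(run(randomized))     # predictor's guess is independent: close to 0
```

This is only a toy model of "perfect prediction", of course; the point is just that the exploitable gap between the two policies shows up even in a sketch this small.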
PCDT, faced with the possibility of encountering a Newcomblike problem at some point,
Similarly, I think a lot of agents only start considering a possibility after they have encountered it at least once. This might help with the cost of simulation/computation.
Radical Probabilism and InfraBayes are plausibly two orthogonal dimensions of generalization for rationality. Ultimately we want to generalize in both directions.
I’m glad this was highlighted.