I don’t know if you understand pre-rationality correctly because I can’t parse Robin’s paper either, but your setup looks like another of those unstoppable-force-meets-immovable-object mindfucks. If an agent’s beliefs about the source of their prior don’t satisfy the pre-rationality condition together with the prior, you’ve got an agent with inconsistent beliefs about the world, plain and simple. It can be Dutch booked, etc. Which way to resolve the inconsistency is entirely up to programmer ingenuity: you can privilege the prior over the pre-prior, or the other way round, or use some other neat trick. Of course this doesn’t apply to humans at all.
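
To make the “inconsistent beliefs, Dutch book” point concrete, here’s a toy sketch — my reading of the condition (prior = pre-prior conditioned on how the prior actually got assigned), with made-up numbers and hypothetical names, not anything lifted from Robin’s paper. If the two books disagree, a bookie who trades against both of them at once pockets the gap no matter what happens.

```python
# Toy sketch: hypothetical numbers, and only my reading of the pre-rationality
# condition (the prior should equal the pre-prior conditioned on the event
# describing how the priors were assigned).

# World: one binary event A, and two ways nature could have assigned the
# agent's prior ("origin1", "origin2"). The pre-prior q is a joint
# distribution over (A, origin).
q = {
    ("A", "origin1"): 0.30, ("not-A", "origin1"): 0.20,
    ("A", "origin2"): 0.10, ("not-A", "origin2"): 0.40,
}

def q_of_A_given_origin(origin):
    """Pre-prior probability of A, conditioned on the prior's actual origin."""
    p_a = q[("A", origin)]
    p_not_a = q[("not-A", origin)]
    return p_a / (p_a + p_not_a)

# Suppose the agent knows its prior came from origin1, but its prior says:
p_A = 0.45                                  # prior probability of A
pre_A = q_of_A_given_origin("origin1")      # = 0.60, what pre-rationality demands

if p_A != pre_A:
    # Dutch book: the bookie buys a $1 bet on A from the agent at the agent's
    # prior price (0.45, fair by the prior) and sells the agent a $1 bet on A
    # at the pre-prior-given-origin price (0.60, fair by the pre-prior).
    # The $1 payoffs on A cancel, so the bookie keeps the price gap either way.
    guaranteed_profit = abs(pre_A - p_A)
    print(f"sure profit per $1 staked: {guaranteed_profit:.2f}")
```

Whichever book you privilege when you patch the agent, the point is just that it has to be patched somewhere — the two distributions can’t both stand as written.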