If I found myself in this kind of scenario, it would imply that I was very wrong about how I reason about anthropics in an ensemble universe (as with Pascal's mugging, or any situation where an agent has enough computing power to control so much of my measure that I find myself in a contrived philosophical experiment).
I see some reasons for this perspective but I’m not sure.
On the one hand, I don't know much about the distribution of agent preferences in an ensemble universe. On the other hand, there may be enough long towers of nested simulations of agents like us to compensate for this.