We don’t live in a universe that’s nice or just all the time, so perhaps there are nightmare scenarios in our future. Not all traps have an escape. However, I think this one does, for two reasons.
(1) all the reasons that Robin Hanson mentioned;
(2) we seem really confused about how consciousness works, which suggests there are large ‘unknown unknowns’ in play. It seems very likely that if we extrapolate our confused models of consciousness into extreme scenarios such as this, we’ll get even more confused results.
I think the elephant in the room is the purpose of the simulation.
Bostrom takes it as a given that future intelligences will be interested in running ancestor simulations. Why is that? If some future posthuman civilization truly masters physics, consciousness, and technology, I don’t see them using that mastery to play SimUniverse. That’s what we would do with limitless power; it’s taking our unextrapolated, 2017 volition and asking what we’d do if we were gods. But that’s like asking a 5-year-old what he wants to do when he grows up, then taking the answer seriously.
Ancestor simulations sound cool to us (heck, they sound amazingly interesting to me), but I strongly suspect posthumans would find better uses for their resources.
Instead, I think we should try to reason about the purpose of a simulation from first principles.
Here’s an excerpt from Principia Qualia, Appendix F: