How do Bostrom-type simulation arguments normally handle nested simulations? If our world spins off simulations A and B, and B spins off C and D, then how do we assign the probabilities of finding ourselves in each of those? Also troubling to me is what happens if you have a world that simulates itself, or simulations A and B that simulate each other. Is there a good way to think about this?
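To make the probability part of my question concrete, here's a toy sketch of the bookkeeping I have in mind. It assumes something like Bostrom's indifference principle, weighting each world by its observer count; the tree shape and the equal weights are made up for illustration.

```python
# Toy model: worlds form a tree, and P(finding yourself in world w) is
# w's observer count divided by the total across the whole tree. The
# structure and weights here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class World:
    name: str
    observers: float                       # observer-count weight
    children: list["World"] = field(default_factory=list)

def all_worlds(root: World) -> list[World]:
    """Flatten the tree: a world plus every simulation it spins off."""
    return [root] + [w for c in root.children for w in all_worlds(c)]

# Our world spins off A and B; B spins off C and D.
ours = World("ours", 1.0, [
    World("A", 1.0),
    World("B", 1.0, [World("C", 1.0), World("D", 1.0)]),
])

worlds = all_worlds(ours)
total = sum(w.observers for w in worlds)
for w in worlds:
    print(f"P(in {w.name}) = {w.observers / total:.2f}")  # 0.20 each
```

Under pure observer-counting every world here comes out at 0.20: nesting depth doesn't matter by itself, only how many observers each world contains. But that's exactly what seems to break for self-simulating or mutually simulating worlds, since the structure stops being a tree.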
A world simulating itself would be highly unlikely. It’s theoretically possible for a universe to simulate itself, since the simulation can do things like read from records instead of recomputing the computer itself, but it’s not really feasible. In practice you have to simulate a smaller or more coarse-grained universe than the one you’re in.
Possibly a stupid question, but wouldn’t a simulation of N human minds be feasible even if a simulation of a universe with N human minds is not?
I’m not sure where you’re going with this. We clearly have a universe, although it’s possible that it’s being simulated at lower detail than it appears. If you had a universe simulating itself, you’d have to simulate N minds, the computer, and the rest of the universe. The simulated computer in turn simulates N minds, the computer, and the rest of the universe, so for the simulation to work correctly, the computer needs to be simulated at the same level of detail as the N minds, the computer, and the rest of the universe combined. It’s one thing to simulate the computer at the same detail as everything else combined; the problem is that “everything else” includes the computer itself, so each level contains another full copy of the whole simulation. You’d be simulating N minds an infinite number of times.
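Here's a toy cost model of that regress (my own illustration, not anything rigorous): suppose simulating a universe of size U at a fraction f of full detail costs f·(U + work), because the simulated universe contains the simulating computer, which contains its own simulation, and so on.

```python
# Toy model: work = f * (U + work), since the simulated universe includes
# the simulated computer's own workload. Solving gives
#   work = f * U / (1 - f),
# which stays finite only when f < 1 and blows up as f -> 1 (full detail).

def self_simulation_work(universe_size: float, fidelity: float) -> float:
    """Total nested work for a universe simulating itself at given fidelity."""
    if not 0.0 <= fidelity < 1.0:
        raise ValueError("full-fidelity (f >= 1) self-simulation diverges")
    return fidelity * universe_size / (1.0 - fidelity)

for f in (0.5, 0.9, 0.99, 0.999):
    print(f"fidelity {f}: nested work = {self_simulation_work(1.0, f):.0f}x the universe")
```

The geometric series converges only if each level is strictly coarser than the one above it; at f = 1 the sum diverges, which is the "simulating N minds an infinite number of times" problem.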
I thought that you meant “more coarse-grained” according to the experience of the conscious entities in the simulation, not “more coarse-grained” in the sense of including less total stuff (conscious and everything else) than an exact copy of the universe.
So a universe with a lone scientist and ample computational resources could afford to simulate the exact experience of the scientist, but couldn’t afford to simulate everything else at the same time. The confusing bit is that the scientist being simulated wouldn’t be able to tell if the simulation they were watching tick away actually corresponds to another conscious entity, or if the experience of observing a simulation tick away is just sensory data being piped in from the parent universe, in which case the scientist is...well, watching what exactly? Themselves?