It does appear to depend on ancestor simulations being of the world’s history as it actually happened, on the basis that if we end up making simulations of our own history, then we are probably in such a simulation run by someone in an outer future version of our own world.
You could argue for the same conclusion without requiring that, but it seems to me that it would end up being a completely different argument; at the very least, you’d have to figure out the general probability of some advanced civilization creating a simulation containing you, which is a lot harder when you aren’t assuming that the civilization running the simulation used to actually contain you (and can somehow extrapolate backwards far enough to recover the information in your mind).
OK I buy it. To be fair, Bostrom’s conclusion is either we’re in a simulation, we’re going to go extinct, or “(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).” You’re saying that (2) is so plausible that the other alternatives are not interesting.
Sort of. I was really only intending to ask what the claimed justification is for believing in the possibility of ancestor simulations, not to argue that they are not possible; Bostrom is a careful enough philosopher that I would be surprised if he didn’t explicitly justify this somewhere. But absent any particular argument against my prior judgment of the feasibility of ancestor simulations (i.e. they’d require us to be able to extrapolate backwards in much greater detail than seems possible), then yes, I’d argue that (2) is the most likely of the three if we do eventually reach posthumanity.
Maybe they are simulating me by mistake. Back in the “real world” I never existed. It is still the case that they are simulating me.
Edit: Actually, this response wasn’t particularly responsive. Consider it withdrawn unless it contains virtues I don’t currently see.