Have advocates of the simulation argument actually argued for the possibility of ancestor simulations? It is a very counterintuitive idea, yet it seems to be invoked as though it is obviously possible. Aside from whatever probability we want to assign to the possibility that the future human race will discover strange previously-unknown laws of physics that make it more feasible, doesn’t the idea of an ancestor simulation (a simulation of “the entire mental history of humankind”) depend on having access to a huge amount of information that has presumably been permanently lost to entropy? Where is the future civilization expected to get all the mental structures needed to simulate the entire mental history of humankind (or a model of the early Earth implausibly precise enough that simulating it causes things to play out exactly as they really did)?
If things don’t play out exactly as they really did, does the simulation argument lose any force?
Second the question. It’s been a long time since I read Tipler, but as I recall, he claimed Omega would simulate all possible humans, not just all historically real ones.
Is Tipler / the Omega Point relevant to the simulation argument? I haven’t seen him invoked in discussions thereof, and that idea (whatever its probability) seems to have a whole different set of implications, more along the lines of the confusing anthropic problems we have with Very Big Worlds and Boltzmann brains.
Relevant only to the extent that large-scale simulation of the hypothetical past of the human species is a large enough (and/or pointless enough) task that it would require an Omega Point quantity of resources.
It does appear to depend on ancestor simulations being of the world’s history as it actually happened, on the basis that if we end up making simulations of our own history, then we are probably in such a simulation, run by someone in an outer, future version of our own world.
You could argue for the same conclusion without requiring that, but it seems to me that it would end up being a completely different argument; at the very least, you’d have to figure out the general probability of some advanced civilization creating a simulation containing you, which is a lot harder when you aren’t assuming that the civilization running the simulation used to actually contain you (and can somehow extrapolate backwards far enough to recover the information in your mind).
Maybe they are simulating me by mistake. Back in the “real world” I never existed. It is still the case that they are simulating me.
Edit: Actually, this response wasn’t particularly responsive. Consider it withdrawn unless it contains virtues I don’t currently see.
OK I buy it. To be fair, Bostrom’s conclusion is either we’re in a simulation, we’re going to go extinct, or “(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).” You’re saying that (2) is so plausible that the other alternatives are not interesting.
Sort of. I was really only intending to ask what the claimed justification is for believing in the possibility of ancestor simulations, not to argue that they are not possible; Bostrom is a careful enough philosopher that I would be surprised if he didn’t explicitly justify this somewhere. But in the absence of any particular argument against my prior judgment of the feasibility of ancestor simulations (i.e. they’d require us to be able to extrapolate backwards in much greater detail than seems possible), then yes, I’d argue that (2) is the most likely if we do eventually reach posthumanity.
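The bookkeeping behind the trilemma quoted above can be sketched numerically. This is a hedged illustration, not Bostrom’s exact formulation: `f_p` stands for the fraction of civilizations that reach a posthuman stage, and `n_sim` for the average number of ancestor simulations such a civilization runs, with each simulation assumed to contain roughly as many observers as one real history.

```python
def fraction_simulated(f_p: float, n_sim: float) -> float:
    """Rough fraction of human-like observers who live inside simulations,
    given the fraction of civilizations reaching posthumanity (f_p) and the
    average number of ancestor simulations each one runs (n_sim)."""
    return (f_p * n_sim) / (f_p * n_sim + 1)

# Unless f_p is tiny (option 1: extinction) or n_sim is tiny (option 2:
# posthumans don't run such simulations), the fraction approaches 1
# (option 3: we are probably simulated).
print(fraction_simulated(0.01, 1_000_000))   # close to 1: option (3)
print(fraction_simulated(1e-12, 1_000_000))  # tiny: option (1) dominates
print(fraction_simulated(0.01, 0.0001))      # tiny: option (2) dominates
```

The point of the thread, on this framing, is that doubts about the feasibility of ancestor simulations are doubts about whether `n_sim` can be large at all, which is just an argument that we land in branch (2).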