OK I buy it. To be fair, Bostrom’s conclusion is either we’re in a simulation, we’re going to go extinct, or “(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).” You’re saying that (2) is so plausible that the other alternatives are not interesting.
Sort of. I was really only intending to ask what the claimed justification is for believing in the possibility of ancestor simulations, not to argue that they are impossible; Bostrom is a careful enough philosopher that I would be surprised if he didn’t explicitly justify this somewhere. But absent any particular argument against my prior judgment about the feasibility of ancestor simulations (i.e. that they’d require us to extrapolate backwards in far greater detail than seems possible), then yes, I’d argue that (2) is the most likely outcome if we do eventually reach posthumanity.