Now, if we knew that the only two sorts of creatures that experience what we experience are either in simulations or on the actual, original, non-simulated Earth, then I can see why the argument would be reasonable. However, I don’t know how we could know this.
For example, consider zoos: Perhaps advanced aliens create “zoos” featuring humans in an Earth-like world, for their own entertainment or other purposes.
This falls under either #1 or #2, since you don’t say what human capabilities are in the zoo or explain how exactly this zoo situation matters to running simulations; do we go extinct at some time long in the future when our zookeepers stop keeping us alive (and “go extinct before reaching a ‘posthuman’ stage”), having never become powerful zookeeper-level civs ourselves, or are we not permitted to (“extremely unlikely to run a significant number of simulations”)?
Similarly, consider games: Perhaps aliens create games or something like them set in Earth-like worlds that aren’t actually intended to be simulations of any particular world.
This is just fork #3: “we are in a simulation”. At no point does fork #3 require it to be an exact, perfect-fidelity simulation of an actual past, and he is explicit that the minds in the simulation may be only tenuously related to ‘real’/historical minds; if aliens would be likely to create Earth-like worlds, for any reason, that’s fine, because that’s what’s necessary: we observe an Earth-like world (see the indifference principle section).
he is explicit that the minds in the simulation may be only tenuously related to ‘real’/historical minds;
Oh, I guess I missed this. Do you know where Bostrom said the “simulations” can be only tenuously related to real minds? I was rereading the paper but didn’t see mention of this. I’m just surprised, because normally I don’t think zoo-like things would be considered simulations.
This falls under either #1 or #2, since you don’t say what human capabilities are in the zoo or explain how exactly this zoo situation matters to running simulations; do we go extinct at some time long in the future when our zookeepers stop keeping us alive (and “go extinct before reaching a ‘posthuman’ stage”), having never become powerful zookeeper-level civs ourselves, or are we not permitted to (“extremely unlikely to run a significant number of simulations”)?
In case I didn’t make it clear, I’m saying that even if a significant proportion of civilizations reach a post-human stage and a significant proportion of these run simulations, there would still potentially be a non-small chance of actually not being in a simulation and instead being in a game or zoo. For example, suppose each post-human civilization makes 100 proper simulations and 100 zoos. Then even if parts 1 and 2 of the simulation argument are true, you still have a 50% chance of ending up in a zoo.
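For concreteness, here is a minimal back-of-the-envelope sketch of that example; the counts are the hypothetical 100/100 split above, not figures from Bostrom’s paper, and base-reality observers are assumed to be negligible next to the artificial ones:

```python
# Hypothetical counts from the example above -- not figures from Bostrom's paper.
simulations_per_civ = 100  # "proper" ancestor-simulations run by each post-human civilization
zoos_per_civ = 100         # zoo/game-style Earth-like worlds run by each post-human civilization

# Assuming base-reality observers are negligible next to the artificial populations,
# the chance of being in a zoo rather than a proper simulation is just the ratio of counts:
p_zoo = zoos_per_civ / (simulations_per_civ + zoos_per_civ)
print(p_zoo)  # 0.5 -- a 50% chance of being in a zoo even if parts 1 and 2 of the argument hold
```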
“If the real Chantiel is so correlated with you that they will do what you will do, then you should believe you’re real so that the real Chantiel will believe they are real, too. This holds even if you aren’t real.”
By “real”, do you mean non-simulated? Are you saying that even if 99% of Chantiels in the universe are in simulations, I should still believe I’m not in one? I don’t know how I could convince myself of being “real” if 99% of Chantiels aren’t.
Do you perhaps mean I should act as if I were non-simulated, rather than literally being non-simulated?
It doesn’t matter how many fake versions of you hold the wrong conclusion about their own ontological status, since those fake beliefs exist in fake versions of you. The moral harm caused by a single real Chantiel thinking they’re not real is infinitely greater than that caused by infinitely many non-real Chantiels thinking they are real.
Interesting. When you say “fake” versions of myself, do you mean simulations? If so, I’m having a hard time seeing how that could be true. Specifically, what’s wrong with me thinking I might not be “real”? I mean, if I thought I was in a simulation, I think I’d do pretty much the same things I would do if I thought I wasn’t in a simulation. So I’m not sure what the moral harm is.
Do you have any links to previous discussions about this?
I think you should reread the paper.