Our present civilization is likely to reach the point where it can simulate a universe reasonably soon.
I don’t know about that; it seems unlikely to me. A future civilization simulating us would require a) an enormous amount of information about us, which is likely to be irreversibly lost in the meantime, and b) enough computing power to simulate at a sufficiently fine level of detail (a crude approximation would diverge from what actually happened pretty fast). Either of those alone looks like it makes simulating present-day Earth infeasible.
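A toy illustration of that divergence point, not anything from the thread: in a chaotic system, two trajectories that start a tiny distance apart become completely uncorrelated within a few dozen steps, so a coarse reconstruction stops tracking the real history almost immediately. The logistic map below is just a standard textbook example, with the parameter picked arbitrarily from its chaotic regime.

```python
# Toy illustration (not from the thread): sensitive dependence on initial conditions.
# Two logistic-map trajectories starting 1e-10 apart become totally uncorrelated
# within a few dozen steps, which is the sense in which a "crude approximation"
# of a chaotic system stops tracking the real history almost immediately.

R = 3.9  # arbitrary parameter in the chaotic regime of the logistic map

def logistic_trajectory(x0: float, steps: int) -> list[float]:
    """Iterate x_{n+1} = R * x_n * (1 - x_n) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(R * xs[-1] * (1.0 - xs[-1]))
    return xs

exact = logistic_trajectory(0.5, 60)            # "what actually happened"
approx = logistic_trajectory(0.5 + 1e-10, 60)   # crude reconstruction: tiny initial error

for n in (0, 20, 40, 60):
    print(f"step {n:2d}: |difference| = {abs(exact[n] - approx[n]):.3e}")
```

Running this, the difference grows from 1e-10 to order one within about sixty steps; the point is only that tiny reconstruction errors compound exponentially, not that the Earth is a logistic map.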
But my main reaction to the simulation argument (even assuming it’s possible) is “so what?”. Are there any decisions I would change if I knew I might be being simulated?
A future civilization simulating its own ancestors would indeed require a lot of information about them, possibly an impossibly-hard-to-obtain amount. You’re right about that.
So what? They could still simulate some arbitrary, fictional pre-singularity civ. If we’re part of a simulation, there is no guarantee whatsoever that we were ever anything else.
But my main reaction to the simulation argument (even assuming it’s possible) is “so what?”. Are there any decisions I would change if I knew I might be being simulated?
Possible ethical position: I care about the continued survival of humanity in some form. I also care about human happiness in a way that avoids the repugnant conclusion (that is, I’m willing to sacrifice some proportion of unhappy lives in exchange for making the rest much happier). I am offered the option of releasing an AI that we believe, with 99% probability, to be Friendly; this is expected to greatly increase human happiness, but carries a small risk of eliminating humanity in this universe. If I believe I am not simulated, I do not release it, because the small risk of eliminating all humanity in existence is not worth taking. If I believe I am simulated, I release it, because destroying the humans in this simulated world almost surely cannot eliminate all humanity in existence (the simulators’ base reality continues regardless), and the expected happiness gain is worth it.
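A back-of-the-envelope version of that decision, just to make the asymmetry explicit. All utility numbers below are hypothetical placeholders (only the 99% figure comes from the comment): the 1% failure case costs astronomically more if this is base reality, because it wipes out humanity everywhere, whereas in a simulation the worst case is bounded, so the same expected-value calculation flips sign.

```python
# Toy expected-utility sketch of the "release the 99%-probably-Friendly AI" decision.
# All numbers are hypothetical placeholders chosen only to illustrate the asymmetry.

P_FRIENDLY = 0.99           # stated probability the AI is Friendly
HAPPINESS_GAIN = 100        # utility of the Friendly outcome (hypothetical)
LOSS_IF_BASE_REALITY = 1e9  # utility lost if all humanity, everywhere, is wiped out (hypothetical)
LOSS_IF_SIMULATED = 200     # bounded loss: only this simulated world ends (hypothetical)

def expected_value_of_release(loss_if_unfriendly: float) -> float:
    """Expected utility of releasing the AI, relative to doing nothing (utility 0)."""
    return P_FRIENDLY * HAPPINESS_GAIN - (1 - P_FRIENDLY) * loss_if_unfriendly

for label, loss in [("I am NOT simulated", LOSS_IF_BASE_REALITY),
                    ("I AM simulated", LOSS_IF_SIMULATED)]:
    ev = expected_value_of_release(loss)
    decision = "release" if ev > 0 else "do not release"
    print(f"{label}: EV = {ev:,.0f} -> {decision}")
```

With these placeholder numbers the non-simulated case gives an expected value of about -10,000,000 (don’t release) and the simulated case about +97 (release). Note that merely suspecting you might be simulated isn’t enough: averaging the two cases, the decision only flips once the probability of being in a simulation exceeds roughly 0.99999, because the base-reality downside dominates.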