But my main reaction to the simulation argument (even assuming simulation is possible) is "so what?". Are there any decisions I would change if I knew I might be in a simulation?
Possible ethical position: I care about the continued survival of humanity in some form. I also care about human happiness in a way that avoids the repugnant conclusion (that is, I'm willing to sacrifice some proportion of unhappy lives in exchange for making the rest much happier). I am offered the option of releasing an AI that we believe, with 99% probability, to be Friendly; in expectation this greatly increases human happiness, but it carries a small risk of eliminating humanity in this universe. If I believe I am not simulated, I do not release it, because the small risk of eliminating all humanity in existence is not worth taking. If I believe I am simulated, I release it, because it is almost surely impossible for this to eliminate all humanity in existence (humanity presumably still exists outside the simulation), and the expected happiness gain is worth it.
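To make the flip in the decision concrete, here is a minimal toy sketch of the expected-utility comparison. All of the numbers (the 100-point happiness gain, the downside utilities) are hypothetical assumptions chosen for illustration; only the 99% figure comes from the scenario above.

```python
# Toy expected-utility comparison for the AI-release decision above.
# All utility values are illustrative assumptions, not claims from the argument.

P_FRIENDLY = 0.99          # believed probability the AI is Friendly
HAPPINESS_GAIN = 100.0     # utility if a Friendly AI greatly boosts happiness
STATUS_QUO = 0.0           # utility of not releasing the AI

# If I am NOT simulated, an unFriendly AI ends all humanity in existence:
# model that as a loss large enough to swamp any finite happiness gain.
LOSS_IF_BASEMENT = -1e9

# If I AM simulated, an unFriendly AI only ends this simulated branch, while
# humanity outside the simulation survives: a large but finite loss.
LOSS_IF_SIMULATED = -500.0

def expected_utility_of_release(loss_if_unfriendly: float) -> float:
    """Expected utility of releasing the AI, given the downside utility."""
    return P_FRIENDLY * HAPPINESS_GAIN + (1 - P_FRIENDLY) * loss_if_unfriendly

for label, loss in [("not simulated", LOSS_IF_BASEMENT),
                    ("simulated", LOSS_IF_SIMULATED)]:
    eu = expected_utility_of_release(loss)
    decision = "release" if eu > STATUS_QUO else "do not release"
    print(f"If {label}: EU(release) = {eu:.1f} -> {decision}")
```

Under these made-up numbers the expected utility of releasing is hugely negative if I am not simulated and positive if I am, so the same ethical position recommends opposite actions depending on the belief.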