This is a formal version of a real-life problem I’ve been thinking about lately.
Should we commit to creating ancestor-simulations in the future, where those ancestor-simulations will be granted a pleasant afterlife upon what appears to their neighbors to be death? If we do, then arguably we increase the likelihood that we ourselves have a pleasant afterlife to look forward to.
I’m pretty sure there’s something wrong with this argument, but I can’t seem to put my finger on what it is. It reminds me of this post, in a way.
This is pretty much like one guy destroying evidence about global warming so that everyone else will assign a lower probability to a catastrophe. It fails for the same reasons.
It seems to me that you should only do this if everyone has utility functions that are completely anthropically selfish (i.e. they only care about their own subjective experience). Otherwise, wouldn’t it be cruel to intentionally simulate a world with so many unpleasant characteristics that we could remove, if we weren’t focused on making the simulation subjectively indistinguishable from our own world?
As such, I don’t think we should commit to any such thing.
The point you raise is by far the strongest argument I know of against the idea.
However, it is a moral objection rather than a decision-theory objection. It sounds like you agree with me on the decision-theory component of the idea: that if we were anthropically selfish, it would be rational for us to commit to making ancestor-simulations with afterlives. That’s an interesting result in itself, isn’t it? Let’s go tell Ayn Rand.
When it comes to the morality of the idea, I might end up agreeing with you. We’ll see. I think there are several minor considerations in favor of the proposal, and then this one massive consideration against it. Perhaps I’ll make a post on it soon.