It seems to me that you should only do this if everyone has utility functions that are completely anthropically selfish (i.e. they only care about their own subjective experience). Otherwise, wouldn’t it be cruel to intentionally simulate a world with so many unpleasant characteristics that we could otherwise remove if we weren’t focused on making the simulation subjectively indistinguishable from our own world?
As such, I don’t think we should commit to any such thing.
The point you raise is by far the strongest argument I know of against the idea.
However, it is a moral objection rather than a decision-theory objection. It sounds like you agree with me on the decision-theory component of the idea: that if we were anthropically selfish, it would be rational for us to commit to making ancestor-simulations with afterlives. That’s an interesting result in itself, isn’t it? Let’s go tell Ayn Rand.
When it comes to the morality of the idea, I might end up agreeing with you. We’ll see. I think there are several minor considerations in favor of the proposal, and then this one massive consideration against it. Perhaps I’ll make a post on it soon.