I just don’t think it’s a good decision to make, regardless of the math. If I’m nearing the end of the universe, I’d prefer to spend all my compute maximising fun or searching for a way out. Running simulations just so I no longer know whether I’m about to die seems like a dumb use of compute. I can bear the thought of dying, dude; there are better uses of that compute. You’re not saving yourself, you’re just intentionally making yourself confused because you’re uncomfortable with the thought of death.
Well, that wasn’t the scenario I had in mind. The scenario I had in mind was: people in the year 2030 pass a law requiring future governments to make ancestor simulations with happy afterlives, because that way it’s probable that they themselves will be in such a simulation. (It’s like cryonics, but cheaper!) Then, hundreds or even billions of years later, the future government carries out the plan, as required by law.
I’m not saying this is what we should do, just that it’s a decision I could sympathize with, and I imagine it’s a decision some fraction of people would make if they thought it was an option.
Thinking about it more, I think there are good arguments for taking actions that, as a by-product, induce anthropic uncertainty; this is the standard Hansonian situation where you build lots of ems of yourself to do bits of work and then turn them off.
But I still don’t agree with the people in the situation you describe, because they’re optimising over their own epistemic state, and I think they’re morally wrong to do that. I’m totally fine with a law requiring future governments to rebuild you, or an em of you, and give you a nice life (perhaps as a trade for working harder today to ensure that the future world exists). But that’s conceptually analogous to extending your life, and doesn’t require causing you to believe false things. You know you’ll be turned off and that later a copy of you will be turned on; there’s no anthropic uncertainty, you’re just going to get lots of valuable stuff.