Your first point sounds like it is saying we are probably in a simulation, but not the sort that should influence our decisions, because it is lawful. I think this is pretty much exactly what Bostrom’s Simulation Hypothesis is, so I think your first point is not an argument for the second disjunct of the simulation argument but rather for the third.
As for the second point, well, there are many ways for a simulation to be unlawful, and only some of them are undesirable. For example, a civilization might actually want to induce anthropic uncertainty in itself, if the uncertainty is about whether it is in a simulation that contains a pleasant afterlife for everyone who dies.
I don’t buy that it makes sense to induce anthropic uncertainty. It makes sense to spend all of your compute to run emulations that are having awesome lives, but it doesn’t make sense to cause yourself to believe false things.
I’m not sure it makes sense either, but I don’t think it is accurately described as “cause yourself to believe false things.” I think whether or not it makes sense comes down to decision theory. If you use evidential decision theory, it makes sense; if you use causal decision theory, it doesn’t. If you use functional decision theory, or updateless decision theory, I’m not sure, I’d have to think more about it. (My guess is that updateless decision theory would do it insofar as you care more about yourself than others, and functional decision theory wouldn’t do it even then.)
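(As a toy illustration of where the decision theories come apart, and nothing more: here is a minimal Python sketch with entirely made-up numbers, assuming only that my decision is correlated with the decisions of civilizations like mine, including any that might be simulating me. EDT counts that correlation as evidence about my situation; CDT only counts causal consequences, so it just sees the compute cost.)

```python
# Minimal sketch, not a real model: all numbers and names below are made up.
# The one assumption doing the work is that my decision correlates with the
# decisions of any civilizations like mine, including any that might be
# simulating me. EDT treats my choice as evidence about that; CDT does not.

N_SIMS = 1000          # hypothetical: simulated copies run per original civilization
U_AFTERLIFE = 100.0    # hypothetical utility of waking up in a happy afterlife
U_NO_AFTERLIFE = 0.0   # hypothetical utility of simply ending
COST = 1.0             # hypothetical opportunity cost of the compute spent on sims

def p_simulated_given_choice(run_sims: bool) -> float:
    """EDT-style credence that I am one of the simulated copies, conditional
    on my own choice (my choice is evidence about what beings like me do)."""
    if not run_sims:
        return 0.0                  # beings like me don't run sims, so I'm an original
    return N_SIMS / (N_SIMS + 1)    # N copies per original, and I could be any of them

def edt_value(run_sims: bool) -> float:
    p = p_simulated_given_choice(run_sims)
    return p * U_AFTERLIFE + (1 - p) * U_NO_AFTERLIFE - (COST if run_sims else 0.0)

def cdt_value(run_sims: bool, p_already_simulated: float = 0.0) -> float:
    # Causally, my choice can't change whether I'm already in a simulation,
    # so only the compute cost depends on it.
    return (p_already_simulated * U_AFTERLIFE
            + (1 - p_already_simulated) * U_NO_AFTERLIFE
            - (COST if run_sims else 0.0))

print("EDT:", edt_value(True), "vs", edt_value(False))   # running sims comes out ahead
print("CDT:", cdt_value(True), "vs", cdt_value(False))   # running sims just burns compute
```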
I just don’t think it’s a good decision to make, regardless of the math. If I’m nearing the end of the universe, I’d prefer to spend all my compute on maximising fun / searching for a way out instead. Trying to run simulations so that I no longer know whether I’m about to die seems like a dumb use of compute. I can bear the thought of dying, dude; there are better uses of that compute. You’re not saving yourself, you’re just intentionally making yourself confused because you’re uncomfortable with the thought of death.
Well, that wasn’t the scenario I had in mind. The scenario I had in mind was: People in the year 2030 pass a law requiring future governments to make ancestor simulations with happy afterlives, because that way it’s probable that they themselves will be in such a simulation. (It’s like cryonics, but cheaper!) Then, hundreds or even billions of years later, the future government carries out the plan, as required by law.
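(Rough illustrative arithmetic, with made-up numbers: if for every original 2030 person the future government runs, say, 1,000 ancestor-simulated copies with happy afterlives, then 1,000 out of every 1,001 people with your experiences are simulated, so on a simple self-locating count you’d put roughly a 99.9% chance on being one of the simulated ones.)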
Not saying this is what we should do, just saying it’s a decision I could sympathize with, and I imagine it’s a decision some fraction of people would make, if they thought it was an option.
Thinking more, I think there are good arguments for taking actions that, as a by-product, induce anthropic uncertainty; the standard Hansonian situation where you build lots of ems of yourself to do bits of work and then turn them off is an example.
But I still don’t agree with the people in the situation you describe, because they’re optimising over their own epistemic state; I think they’re morally wrong to do that. I’m totally fine with a law requiring future governments to rebuild you / an em of you and give you a nice life (perhaps as a trade for working harder today to ensure that the future world exists), but that’s conceptually analogous to extending your life, and it doesn’t require causing you to believe false things. You know you’ll be turned off and that later a copy of you will be turned on; there’s no anthropic uncertainty, you’re just going to get lots of valuable stuff.