I’m not talking about the DA only; I’m talking about the assumption that our experiences should be more-or-less ordinary. And this is designed to escape the DA; it’s the only reason to think you are simulated in the first place.
Really, I got the whole idea from HPMOR: fulfilling a scary prophecy on your own terms.
> the assumption that our experiences should be more-or-less ordinary
How do you know what to call “ordinary”? If you think you’re being simulated, then you need to predict what kinds and amounts of simulations exist besides the one you’re in, as well as how extensive and precise your own simulation is in past time and space, not just in its future.
> And this is designed to escape the DA; it’s the only reason to think you are simulated in the first place.
There are lots of reasons other than the DA to think we’re being simulated: e.g. Bostrom’s Simulation Argument (posthumans are likely to run ancestor simulations). The DA is a very weak argument for simulation: it is equally consistent with there being an extinction event in our future.
> If you think you’re being simulated, then you need to predict what kinds and amounts of simulations exist besides the one you’re in, as well as how extensive and precise your own simulation is in past time and space, not just in its future.
I don’t see why simulated observers would ever outnumber physical observers. It would require an incredibly inefficient allocation of resources.
> There are lots of reasons other than the DA to think we’re being simulated: e.g. Bostrom’s Simulation Argument (posthumans are likely to run ancestor simulations).
Avoiding the DA gives them a much clearer motive. It’s the only reason I can think of that I would want to do it. Surely it’s at least worth considering?
> I don’t see why simulated observers would ever outnumber physical observers. It would require an incredibly inefficient allocation of resources.
The question isn’t how many simulated observers exist in total (although that’s also unknown), but how many of them are like you in some relevant sense, i.e. what to consider “typical”.
> Avoiding the DA gives them a much clearer motive. It’s the only reason I can think of that I would want to do it. Surely it’s at least worth considering?
Many people do think they would have other reasons to run ancestor simulations.
But in any case, I don’t think your original idea works. Running a simulation of your ancestors causes your simulated ancestors to be wrong about the DA, but it doesn’t cause you yourself to be wrong about it.
To steelman this, what you’d need is to run simulations of people successfully launching a friendly self-modifying AI. Suppose out of every N civs that run an AI, on average one succeeds and all the others go extinct. If each of them precommits to simulating N civs, and the simulations are arranged so that running an AI always works inside a simulation, then in the end there are still N civs that successfully ran an AI.
This implies a certain measure on future outcomes: it’s counting “distinct” existences while ignoring the actual measure of future probability. This is structurally similar to quantum suicide or quantum roulette.
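To make the counting explicit (a rough sketch under exactly those assumptions, sampling uniformly over every civ-instance, physical or simulated, that launches an AI): there are 2N launch-instances per round, N + 1 of which end in success, so

\[
P(\text{success}\mid\text{launch}) = \frac{N+1}{2N} \approx \frac{1}{2}, \qquad
P(\text{simulated}\mid\text{success}) = \frac{N}{N+1}, \qquad
P(\text{physical survival}) = \frac{1}{N}.
\]

The number of successful civs roughly matches the number of attempts, but the physical odds of surviving a launch are still 1/N; only the bookkeeping over observers has changed.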
> The question isn’t how many simulated observers exist in total (although that’s also unknown), but how many of them are like you in some relevant sense, i.e. what to consider “typical”.
I also find it hard to believe that humans of any sort would be of special interest to a superintelligence. Do I really have the burden of proof there?
> But in any case, I don’t think your original idea works. Running a simulation of your ancestors causes your simulated ancestors to be wrong about the DA, but it doesn’t cause you yourself to be wrong about it.
The whole point is that the simulators want to find themselves in a simulation, and would only discover the truth after disaster has been avoided. It’s a way of ensuring that superintelligence does not fulfill the DA.
> I also find it hard to believe that humans of any sort would be of special interest to a superintelligence. Do I really have the burden of proof there?
It’s plausible, to me, that a superintelligence built by humans and intended by them to care about humans would in fact care about humans, even if it didn’t have the precise goals they intended it to have.
This is overly complex. Now we assume that AI goes wrong? These people want to be in a simulation; they need a Schelling point with other humanities. Why wouldn’t they just give clear instructions to the AI to simulate other Earths?