What’s bad about running simulations with sentient beings?
Considering the goal of avoiding inadvertently running simulations of sentient beings and then killing them because we’re done, I suppose you are right that it doesn’t necessarily have to be a bad thing. But now how about this question:
If one believes there is a high probability that we are living in a simulated reality, must it mean that those running our simulation do not care about Nonperson Predicates, since there is clearly suffering and we are sentient? If so, that is slightly disturbing.
Why? I don’t feel like I have a good grasp of the space of hypotheses about why other people might want to simulate us, and I see no particular reason to promote hypotheses involving those people being negligent rather than otherwise without much more additional information.
...and I see no particular reason to promote hypotheses involving those people being negligent rather than otherwise without much more additional information.
It seems that our simulators are at the very least indifferent, if not negligent, with respect to our values; some 100 billion people have lived before us, and some of them lived truly cruel and tortured lives. If one is concerned about Nonperson Predicates, in which an AI models a sentient copy of you trillions of times over just to kill those copies when it is done, wouldn’t one also be concerned about simulations that model universes of sentient people who suffer and die?
I suppose we can’t do much about it anyway, but it’s still an interesting thought: if one holds values that reflect either ygert’s comments or Nonperson Predicates, and one wishes to always want to want those values, then the people running our simulation are indifferent to our values.
Interestingly, all of this has shifted my credence ever so slightly towards the second of Nick Bostrom’s three possibilities regarding the simulation argument, that is:
… (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;…
In this video Bostrom cites ethical concerns as a possible reason why a human-level civilization would not carry out ancestor-simulations. These are the same kinds of concerns as those raised by Nonperson Predicates and ygert’s comments.
If we are, in fact, running in a simulation, there’s little reason to think this is true: our own existence would mean that at least one civilization was not, in the end, deterred from running ancestor-simulations.
I think you need to differentiate between “physical” simulations and “VR” simulations. In a physical simulation, the only way of arriving at a universe state is to compute all the states that precede it.
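To make that point concrete, here is a minimal sketch in Python, assuming a toy transition function `step` that stands in for the laws of physics (all names here are hypothetical illustrations, not anyone’s actual proposal): in a “physical” simulation there is no shortcut to state n except computing the n states before it.

```python
# Toy "physical" simulation: the next universe state is an arbitrary
# function of the current one, so state n is only reachable by
# computing every earlier state first. `step` stands in for physics.

def step(state: int) -> int:
    """Advance the toy universe by one tick (a simple mixing map)."""
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def state_at(n: int, initial_state: int = 42) -> int:
    """Reach state n the only way possible: step through all n ticks."""
    state = initial_state
    for _ in range(n):
        state = step(state)
    return state

print(state_at(1_000_000))  # no jumping ahead: one million calls to step()
```

A “VR” simulation, presumably, would not face this constraint: it could render states on demand for its observers rather than computing every intervening world state.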