...and I see no particular reason to promote hypotheses involving those people being negligent rather than otherwise without much more information.
It seems that our simulators are at the very least indifferent, if not negligent, with respect to our values: roughly 100 billion people have lived before us, and some have lived truly cruel and tortured lives. If one is concerned about Nonperson Predicates, in which an AI models a sentient you trillions of times over just to kill you when it is done, wouldn’t one also be concerned about simulations that model universes of sentient people who suffer and die?
I suppose we can’t do much about it anyway, but it’s still an interesting thought: if one holds values that reflect either ygert’s comments or Nonperson Predicates, and wishes to always want to want those values, then the people running our simulation are indifferent to our values.
Interestingly, all of this has shifted my credence ever so slightly towards the second of Nick Bostrom’s three possibilities regarding the simulation argument, that is:
… (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;…
In this video, Bostrom cites ethical concerns as a possible reason why a human-level civilization would not carry out simulations. These are the same kinds of concerns raised by Nonperson Predicates and ygert’s comments.
I think you need to differentiate between “physical” simulations and “VR” simulations. In a physical simulation, the only way of arriving at a universe state is to compute all the states that precede it.
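To make the distinction concrete, here is a minimal sketch (my own toy illustration, not from the thread; the choice of Rule 110 as the “physics” and all function names are assumptions made for the example). A “physical” simulation must grind through every intermediate state to reach time t, while a “VR” simulation can fabricate the state at time t directly, with no computed history behind it.

```python
# Toy contrast between "physical" and "VR" simulation (illustrative only).
# The "physics" here is Wolfram's Rule 110, a simple 1D cellular automaton
# standing in for a universe's update rule.

RULE = 110  # each new cell value is a function of its 3-cell neighborhood

def step(state):
    """Advance the universe by one tick, applying the rule to every cell."""
    n = len(state)
    return tuple(
        (RULE >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def physical_simulation(initial_state, t):
    """'Physical' simulation: the ONLY way to reach the state at time t
    is to compute every intermediate state 0, 1, ..., t-1 in order."""
    state = initial_state
    for _ in range(t):
        state = step(state)
    return state

def vr_simulation(t):
    """'VR' simulation: the simulator just presents whatever state it likes
    at time t, with no obligation to have computed the history behind it."""
    return tuple(1 if i == t % 16 else 0 for i in range(16))  # arbitrary scenery

if __name__ == "__main__":
    start = tuple(1 if i == 8 else 0 for i in range(16))
    print(physical_simulation(start, 100))  # had to grind through 100 ticks
    print(vr_simulation(100))               # fabricated directly, no history
```

On this toy picture, the ethical worry above only bites for the physical case, where every intermediate state (and everyone in it) actually gets computed.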
If we are, in fact, running in a simulation, there’s little reason to think possibility (2) is true.