Just a few questions for some of you:
Running simulations with sentient beings is generally accepted as bad here at LW: yes or no?
If you assign a high probability of reality being simulated, does it follow that most people with our experiences are simulated sentient beings?
I don’t have an opinion yet, but I find the combination of answering yes to both questions extremely unsettling. It’s as if the whole universe conspires against your values. Surprisingly, each idea encountered by itself doesn’t seem too bad; it’s holding both simultaneously, being against simulating sentient beings while believing that most sentient beings are probably simulated, that really makes it disturbing.
What’s bad about running simulations with sentient beings? (Nonperson Predicates is about inadvertently running simulations with sentient beings and then killing them because you’re done with the simulation.)
There’s nothing inherently wrong with simulating intelligent beings, so long as you don’t make them suffer. If you simulate an intelligent being and give it a life significantly worse than you could have, that’s ethically questionable. If we had the power to simulate someone, and we chose to simulate him in a world much like our own, including all the strife, trouble, and pain of this world, when we could just as easily have simulated him in a strictly better world, then I think it would be reasonable to say that we, the simulators, are morally responsible for all that additional suffering.
Agree, but I’d like to point out that “just as easily” hides some subtlety in this claim.
As long as we avoid inadvertently running simulations and then killing the beings in them because we’re done, I suppose you’re right that it doesn’t necessarily have to be a bad thing. But now how about this question:
If one believes there is a high probability that we are living in a simulated reality, must it follow that those running our simulation do not care about Nonperson Predicates, since there is clearly suffering and we are sentient? If so, that is slightly disturbing.
Why? I don’t feel like I have a good grasp of the space of hypotheses about why other people might want to simulate us, and I see no particular reason to promote hypotheses involving those people being negligent rather than otherwise without much additional information.
It seems that our simulators are at the very least indifferent, if not negligent, in terms of our values: roughly 100 billion people have lived before us, and some have lived truly cruel and tortured lives. If one is concerned about Nonperson Predicates, in which an AI models a sentient you trillions of times over just to kill you when it is done, wouldn’t one also be concerned about simulations that model universes of sentient people who suffer and die?
I suppose we can’t do much about it anyway, but it’s still an interesting thought: if one holds values that reflect either ygert’s comments or Nonperson Predicates, and wishes to always want to want those values, then the people running our simulation are indifferent to one’s values.
Interestingly, all this thought has shifted my credence ever so slightly toward the second of Nick Bostrom’s three possibilities regarding the simulation argument, that is:

… (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; …
In this video Bostrom cites ethical concerns as a possible reason why a human-level civilization would not run ancestor-simulations. These are the same kinds of concerns as those raised by Nonperson Predicates and ygert’s comments.
If we are, in fact, running in a simulation, there’s little reason to think this is true.
I think you need to differentiate between “physical” simulations and “VR” simulations. In a physical simulation, the only way of arriving at a universe state is to compute all the states that precede it.
1 - Depends what you mean by simulation. Maintaining ems who think they’re in meat bodies? That’s dishonest at the very least, though I could see certain special cases being a net good. Creating a digital pocket universe? That’s inefficient, but the inefficiency could end up being irrelevant. Any way you come at it, the same usual ethics regarding creating people apply, and those generally boil down to “it’s a big responsibility” (cf. pregnancy).
2 - I don’t, but if you think so, then obviously yes. I mean, unless you think reality contains even more copies of us than the simulation does. That seems a bit of a stretch.