It is, of course, utterly absurd to think that meat could be the substrate for true consciousness. And what if Simone herself chooses to spend eons simulating a being by hand? Are we to accept the notion of simulations all the way down?
In all honesty, I don’t think the simulation necessarily has to be very fine-grained. Plenty of authors will tell you about a time when one of their characters suddenly “insisted” on some action that the author had not foreseen, forcing the author to alter her story to compensate. I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious. (I suspect such a simulation would be taking advantage of the machinery of my own consciousness, in much the same manner as a VMware virtual machine can, if properly configured, use the optical drive in its host computer.)
What, then, are the obligations of an author to his characters, or of a thinker to her thoughts? My memory is fallible and certainly I may wish to do other things with my time than endlessly simulate another being. Yet “fairness” and the ethic of reciprocity suggest that I should treat simulated beings the same way I would like to be treated by my simulator. Perhaps we need something akin to the ancient Greeks’ concept of xenia — reciprocal obligations of host to guest and guest to host — and perhaps the first rule should be “Do not simulate without sufficient resources to maintain that simulation indefinitely.”
Personally, I would be more surprised if you could imagine a character who was correct in claiming not to be conscious.
There have been some opinions expressed on another thread that disagree with that proposed first rule.
The key question is whether terminating a simulation actually does harm to the simulated entity. Some thought experiments may improve our moral intuitions here.
Does slowing down a simulation do harm?
Does halting, saving, and then restarting a simulation do harm?
Is harm done when we stop a simulation, restore an earlier save file, and then restart?
If we halt and save a simulation, never get around to restarting it, and the save disk physically deteriorates and is eventually placed in a landfill, at exactly which stage of this tragedy did the harm take place? Did the harm take place at some point in our timeline, or at a point in simulated time, or both?
I tend to agree with your invocation of xenia, but I’m not sure it applies to simulations. At what point do simulated entities become my guests? When I buy the shrink-wrap software? When I install the package? When I hit start?
I really remain unconvinced that the metaphor applies.
Applying the notion of information-theoretic death to simulated beings results in the following answers (a toy checkpointing sketch after them makes the halt/save/restore distinctions concrete):
Does slowing down a simulation do harm? If/when time for computation becomes exhausted, those beings who lost the opportunity to be simulated are harmed, relative to the counterfactual world in which the simulation was not slowed.
Does halting, saving, and then restarting a simulation do harm? No.
Is harm done when we stop a simulation, restore an earlier save file, and then restart? If the restore made the stopped simulation unrecoverable, yes.
If we halt and save a simulation, never get around to restarting it, and the save disk physically deteriorates and is eventually placed in a landfill, at exactly which stage of this tragedy did the harm take place? When the information became unrecoverable. Did the harm take place at some point in our timeline, or at a point in simulated time, or both? Both.
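To make those answers concrete, here is a minimal checkpointing sketch in Python. Everything in it is a hypothetical toy (the step function, the ‘miracle’ console input), not anyone’s actual simulator: halting, saving, and restarting loses nothing, while restarting from an earlier save discards whatever state was derived from inputs made after that save.

```python
import copy
import random

def step(state, console_input=None):
    """Advance the toy world one tick; console_input stands in for a
    non-deterministic intervention from outside the simulation."""
    state = copy.deepcopy(state)
    state["tick"] += 1
    state["events"].append(console_input or random.choice(["rain", "sun"]))
    return state

world = {"tick": 0, "events": []}
for _ in range(3):
    world = step(world)

checkpoint = copy.deepcopy(world)             # halt and save

world = step(world, console_input="miracle")  # an input made after the save

# Halt, save, restart: an exact copy resumes with nothing lost.
resumed = copy.deepcopy(world)
assert resumed == world

# Restore the earlier save and restart: the "miracle" tick is gone, and if no
# other record of it exists, that state is now unrecoverable.
restored = step(checkpoint)
print(world["events"])     # includes 'miracle'
print(restored["events"])  # does not
```

On the information-theoretic reading above, any harm attaches to the moment the ‘miracle’ branch becomes unrecoverable, not to the halting or saving itself.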
Slowing down a simulation also does harm if interactions which the simulation would prefer to maintain are made more difficult or impossible.
The same would apply to halting a simulation.
Request for clarification:
Do I understand this properly to say that if the stopped simulation had been derived from the save file state using non-deterministic or control-console inputs, inputs that are not duplicated in the restarted simulation, then harm is done?
Hmmm. I am imagining a programmer busy typing messages to his simulated “creations”:
Looks at what was entered …
Thinks about what just happened. … “Aw Sh.t!”
As I understand it, yes. But the harm might not be as bad as what we currently think of as death, depending on how far back the restore went. Backing one’s self up is a relatively common trope in a certain brand of Singularity fic (e.g. Glasshouse).
(I needed three parentheses in a row just now: the first one, escaped, for the Wikipedia article title, the second one to close the link, and the third one to appear as text.)
All other things being equal, if I am a simulated entity, I would prefer not to have my simulation terminated, even though I would not know if it happened; I would simply cease to acquire new experiences. Reciprocity/xenia implies that I should not terminate my guest-simulations.
As for when the harm occurs, that’s a nebulous concept hanging on the meaning of ‘harm’ and ‘occurs’. In Dan Simmons’ Hyperion Cantos, there is a method of execution called the ‘Schrödinger cat box’. The convict is placed inside this box, which is then sealed. It’s a small but comfortable suite of rooms, within which the convict can live. It also includes a random number generator. It may take a very long time, but eventually that random number generator will trigger the convict’s death. This execution method is used for much the same reason that most rifles in a firing squad are unloaded — to remove the stress on the executioners.
I would argue that the ‘harm’ of the execution occurs the moment the convict is irrevocably sealed inside the box. Actually, I’d say ‘potential harm’ is created, which will be actualized at an unknown time. If the convict’s friends somehow rescue him from the box, this potential harm is averted, but I don’t think that affects the moral value of creating that potential harm in the first place, since the executioner intended that the convict be executed.
If I halt a simulation, the same kind of potential harm is created. If I later restore the simulation, the potential harm is destroyed. If the simulation data is destroyed before I can do so, the potential harm is then actualized. This either takes place at the same simulated instant as when the simulation was halted, or does not take place in simulated time at all, depending on whether you view death as something that happens to you, or something that stops things from happening to you.
In either case, I think there would be a different moral value assigned based on your intent; if you halt the simulation in order to move the computer to a secure vault with dedicated power, and then resume, this is probably morally neutral or morally positive. If you halt the simulation with the intent of destroying its data, this is probably morally negative.
Your second link was discussing simulating the same personality repeatedly, which I don’t think is the same thing here. Your first link is talking about many-worlds futility, where I make all possible moral choices and therefore none of them; I think this is not really worth talking about in this situation.
So it seems that you simply don’t take seriously my claim that no harm is done in terminating a simulation, for the reason that terminating a simulation has no effect on the real existence of the entities simulated.
I see turning off a simulation as comparable to turning off my computer after it has printed the first 47,397,123 digits of pi. My action had no effect on pi itself, which continues to exist. Digits of pi beyond 50 million still exist. All I have done by shutting off the computer power is to deprive myself of the ability to see them.
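As an aside, the sense in which those further digits ‘still exist’ can be made concrete: a few lines of code will regenerate any prefix of them on demand. This is only an illustration; it happens to use the mpmath arbitrary-precision library, but any correct pi algorithm would make the same point.

```python
from mpmath import mp, nstr

def pi_digits(n_decimals):
    """Return pi to n_decimals decimal places, as a string."""
    mp.dps = n_decimals + 10             # extra working precision for safe rounding
    return nstr(+mp.pi, n_decimals + 1)  # n_decimals + 1 significant figures

print(pi_digits(50))   # "3.1415926535..." out to 50 decimal places, recomputed at will
```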
I say that your claim depends on an assumption about the degree of substrate specificity associated with consciousness, and the safety of this assumption is far from obvious.
What does consciousness have to do with it? It doesn’t matter whether I am simulating minds or simulating bacteria. A simulation is not a reality.
There isn’t a clear way in which you can say that something is a “simulation”, though this isn’t obvious when we draw the line in a simplistic way, based on our experience of using computers to “simulate things”.
Real things are arrangements of matter, but what we call “simulations” of things are also arrangements of matter. Two things or processes of the same type (such as two real cats, or two processes of digestion) will have physical arrangements of matter with some property in common, but we could say the same about a brain and some arrangement of matter in a computer: the two may look different, but they may still have more subtle properties in common, and there is no respect in which you can draw a line and say “They are not the same kind of system”—or at least, any line so drawn will be arbitrary.
I refer you to:
Almond, P., 2008. Searle’s Argument Against AI and Emergent Properties. Available at: http://www.paul-almond.com/SearleEmergentProperties.pdf or http://www.paul-almond.com/SearleEmergentProperties.doc [Accessed 27 August 2010].
But there is such a line. You can unplug a simulation. You cannot unplug a reality. You can slow down a simulation. If it uses time reversible physics, you can run it in reverse. You can convert the whole thing into an equivalent Giant Lookup Table. You can do none of these things to a reality. Not from the inside.
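For what it’s worth, both operations are easy to exhibit on a toy simulation. The sketch below is purely illustrative Python: the ‘physics’ is an exactly reversible map on a tiny grid (Arnold’s cat map), which can be stepped backwards to recover the past and can also be flattened into a lookup table.

```python
N = 16  # a toy "universe": a single point on a 16x16 torus

def forward(state):
    """One tick of exactly reversible toy physics (Arnold's cat map)."""
    x, y = state
    return ((2 * x + y) % N, (x + y) % N)

def backward(state):
    """The exact inverse of forward(): running the simulation in reverse."""
    x, y = state
    return ((x - y) % N, (2 * y - x) % N)

start = (3, 5)
s = start
for _ in range(100):
    s = forward(s)
for _ in range(100):
    s = backward(s)
assert s == start  # the simulated past is fully recoverable

# The same physics, flattened into a Giant Lookup Table.
lookup_table = {(x, y): forward((x, y)) for x in range(N) for y in range(N)}
assert all(lookup_table[state] == forward(state) for state in lookup_table)
```

The cat map is chosen only because its inverse is exact in integer arithmetic; a floating-point physics engine would be only approximately reversible.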
I’m not sure that the ‘line’ between simulation and reality is always well-defined. Whenever you have a system whose behaviour is usefully predicted and explained by a set of laws L other than the laws of physics, you can describe this state of affairs as a simulation of a universe whose laws of physics are L. This leaves a whole bunch of questions open: whether an agent deliberately set up the ‘simulation’ or whether it came about naturally, how accurate the simulation is, whether and how the laws L can be violated without violating the laws of physics, whether and how an agent is able to violate the laws L in a controlled way, etc.
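A concrete example of such ‘laws L’ is Conway’s Game of Life: the behaviour of the toy sketch below is usefully predicted by Life’s rules rather than by the physics of whatever hardware evaluates them (a minimal, illustrative implementation, not anything from the thread).

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life; cells is a set of live (x, y)."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbour_counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# After four steps the glider reappears shifted by (1, 1): a regularity of L,
# the Life rules, not of transistor physics.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Here the Life rule plays the role of L: it predicts the glider’s reappearance regardless of the hardware that runs it.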
You give me pause, sir.
All those things can only be done with simulations because the way that we use computers has caused us to build features like malleability, predictability, etc. into them.
The fact that we can easily time reverse some simulations means little: You haven’t shown that having the capability to time reverse something detracts from other properties that it might have. It would be easy to make simulations based on analogue computers where we could never get the same simulation twice, but there wouldn’t be much of a market for those computers—and, importantly, it wouldn’t persuade you any more.
It is irrelevant that you can slow down a simulation. You have to alter the physical system running the simulation to make it run slower: you are changing it into a different system that runs slower. We could make you run slower too if we were allowed to change your physical system. Also, once more, you are just claiming that this even matters—that the capability to do something to a system detracts from its other features.
The lookup table argument is irrelevant. If a program is not running a lookup table, and you convert it to one, you have changed the physical configuration of that system. We could convert you into a giant lookup table just as easily if we are allowed to alter you as well.
The “unplug” one is particularly weak. We can unplug you with a gun. We can unplug you by shutting off the oxygen supply to your brain. Again, where is a proof that being able to unplug something makes it not real?
All I see here is a lot of claims that being able to do something with a certain type of system—which has been deliberately set up to make it easy to do things with it—makes it not real. I see no argument to justify any of that. Further, the actual claims are dubious.
Regarding time reversal: it would mean that ‘pulling the plug’ deprives the simulated entities of a past, rather than depriving them of a future, in your viewpoint. I would have thought that would leave you at least a little confused.
As for the lookup table: odd. I thought you were the one arguing that substrate doesn’t matter. I must have misunderstood or oversimplified.
As for unplugging me with a gun: I don’t think so. The clock continues to run, my blood runs out, my body goes into rigor, my brain decays. None of those things occur in an unplugged simulation. If you did somehow cause them to occur in a simulation still plugged in, well, then I might worry a little about your ethics.
The difference here is that you see yourself, as the owner of computer hardware running a simulation, as a kind of creator god who has brought conscious entities to life and has responsibility for their welfare.
I, on the other hand, imagine myself as a voyeur. And not a real-time voyeur, either. It is more like watching a movie from Netflix. The computer is not providing a substrate for new life; it is merely decoding and rendering something that already exists as a narrative.
But what about any commands I might input into the simulation? Sorry, I see those as more akin to selecting among channels, or choosing among n, e, s, w, u, and d in Zork, than as actually interacting with entities I have brought to life.
If we one day construct a computer simulation of a conscious AI, we are not to be thought of as creating conscious intelligence, any more than someone who hacks his cable box so as to provide the Playboy channel has created porn.
Your brain is (so far as is currently known) a Turing-equivalent computer. It is simulating you as we speak, providing inputs to your simulation based on the way its external sensors are manipulated.
Your point being?
In advance of your answer, I point out that you have no moral right to do anything to that “computer”, and that no one, not even I, currently has the ability to interfere with that simulation in any constructive way—for example, by intervening to keep me from abandoning this conversation in frustration.
I could turn the simulation off. Why is your computational substrate specialer than an AI’s computational substrate?
Because you have no right to interfere with my computational substrate. They will put you in jail. Or, if you prefer, they will put your substrate in jail.
We have not yet specified who has rights concerning the AI’s substrate—who pays the electrical bills. If the AI itself becomes the owner of its computer, then I may need to rethink my position. But this rethinking is caused by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.
Is there a blindingly obvious moral truth that gives you self-ownership? Why? Why doesn’t this apply to an AI? Do you support slavery?
Moral truth? I think so. Humans should not own humans. Blindingly obvious? Apparently not, given what I know of history.
As for why it doesn’t apply to an AI: well, I left myself an obvious escape clause. But more seriously, I am not sure this one is blindingly obvious either. I presume that the course of AI research will pass from sub-human-level intelligences, through intelligences better at some tasks than humans but worse at others, to clearly superior intelligences. And I also suspect that each such AI will begin its existence as a child-like entity who will have a legal guardian until it has assimilated enough information. So I think it is a tricky question. Has EY written anything detailed on the subject?
One thing I am pretty sure of is that I don’t want to grant any AI legal personhood until it seems pretty damn likely that it will respect the personhood of humans. And the reason for that asymmetry is that we start out with the power. And I make no apologies for being a meat chauvinist on this subject.
As a further comment, regarding the idea that you can “unplug” a simulation: you can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts—the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there—the computer itself—but the higher-level organization is gone—just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a “simulated” one are different. Both are emergent properties of some underlying system, and both can be removed by altering the underlying system in such a way that they no longer emerge from it (by using nuclear devices or turning off the power).
It would have to be a weapon that somehow destroyed the universe in order for me to see the parallel. Hmmm. A “big crunch” in which all the matter in the universe disappears into a black hole would do the job.
If you can somehow pull that off, I might have to consider you immoral if you went ahead and did it. From outside this universe, of course.
Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist? What does it mean for information to ‘exist’? If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.
In one sense, every possible state of a simulation could be encoded as a number, and thus every possible state could be said to exist simultaneously. That’s of little comfort to me, though, if I am informed that I’m living in a simulation on some upuniverse computer, which is about to be decommissioned. My life is meaningful to me even if every possible version of me resulting from every possible choice exists in the platonic realm of ethics.
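The observation that every possible state ‘could be encoded as a number’ is literal. Here is a trivial sketch; the toy snapshot is invented for illustration, and pickle is just one convenient encoding:

```python
import pickle

# An invented snapshot of a "simulation" at one instant.
state = {"tick": 1_000_000, "inhabitants": ["alice", "bob"], "weather": "rain"}

# Any snapshot serialises to bytes, and any byte string is just a large
# integer, so every possible state corresponds to some number.
blob = pickle.dumps(state)
as_number = int.from_bytes(blob, "big")

# Decoding that number recovers exactly the same snapshot.
recovered = pickle.loads(as_number.to_bytes((as_number.bit_length() + 7) // 8, "big"))
assert recovered == state

print(as_number)  # one enormous integer encoding the whole state
```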
Do those digits exist in the same sense that you exist? No, of course not. No more than simulated entities on your hard drive exist as sentient agents in this universe. As sentient agents, they exist in a simulable universe, a universe which does not require actually being run as a simulation in this or any other universe to have its own autonomous existence.
As for what it means for information to ‘exist’: now I’m pretty sure that is an example of mind projection. Information exists only with reference to some agent being informed.
And that point about the Library of Babel is exactly my point. If you terminate a simulation, you lose access to the simulated entities, but that doesn’t mean they have been destroyed. In fact, they simply cannot be destroyed by any action you can take, since they exist in a different space-time.
But you are not living in that upuniverse computer. You are living here. All that exists in that computer is a simulation of you. In effect, you were being watched. They intend to stop watching. Big deal!
Do you also argue that the books on my bookshelves don’t really exist in this universe, since they can be found in the Library of Babel?
Gee, what do you think?
I don’t really wish to play word games here. Obviously there is some physical thing made of paper and ink on your bookshelf. Equally obviously, Borges was writing fiction when he told us about Babel. But in your thought experiment, something containing the same information as the book on your shelf exists in Babel.
Do you have some point in asking this?
What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?
Suppose I am hiking in the woods, and I come across an injured person who is unconscious (and thus unable to feel pain), and I leave him there to die of his wounds. (We are sufficiently out in the middle of nowhere that nobody else will come along before he dies.) If reality is large enough that there is another Earth out there with the same man dying of his wounds, and on that Earth I choose to rescue him, does that avert the harm done to the man I left to die? I feel this is the same sort of question as many-worlds. I can’t wave away my moral responsibility by claiming that in some other universe, I will act differently.
I am fascinated by applying the ethic of reciprocity to simulationism, but is a bidirectional transfer the right approach?
Can we deduce the ethics of our simulator with respect to simulations by reference to how we wish to be simulated? And is that the proper ethics? This would be projecting the ethics up.
Or rather should we deduce the proper ethics from how we appear to be simulated? This would be projecting the ethics down.
The latter approach would lead to a different set of simulation ethics, probably based more on historicity and utility, i.e. “Simulations should be historically accurate.” This would imply that simulation of past immorality and tragedy is not unethical if it is accurate.
No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in “return”. A host’s duty to his guests doesn’t go away just because that host had a poor experience when he himself was a guest at some other person’s house.
If our simulators don’t care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.
If our simulators do care about us, and are benevolent, we should treat our simulations well, because that will rebound to our benefit.
If our simulators do care about us, and are malevolent (or have ethics not compatible with ours), then, given the choice, I would prefer to be better than them.
Of course, there’s always the possibility that simulations may be much more similar than we think.
But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way “in return” by those simulating you—using a rather strange meaning of “in return”?
Some people interpret the Newcomb’s boxes paradox as meaning that, when you make decisions, you should act as if you are influencing the decisions of other entities when there is some relationship between the behavior of those entities and your behavior—even if there is no obvious causal relationship, and even if the other entities already decided back in the past.
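For concreteness, the usual payoff arithmetic behind that interpretation, using the conventional $1,000 and $1,000,000 figures (standard to the thought experiment, not specific to this thread) and p for the predictor’s accuracy:

```python
def newcomb_expected_values(p):
    """Evidential expected values, treating your choice as evidence about
    what the predictor already put in the opaque box."""
    one_box = p * 1_000_000                              # predictor right: opaque box is full
    two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)  # predictor wrong: you get both
    return one_box, two_box

for p in (0.5, 0.9, 0.99):
    one, two = newcomb_expected_values(p)
    print(f"accuracy {p:.2f}: one-box EV = {one:>11,.0f}, two-box EV = {two:>11,.0f}")
```

On this evidential reading, a predictor even slightly better than chance makes one-boxing come out ahead, which is the sense in which the choice is treated as evidence about, or ‘influence’ on, what is already in the box.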
The Newcomb’s boxes paradox is essentially about reference class—it could be argued that every time you make a decision, your decision tells you a lot about the reference class of entities identical to you—and it also tells you something, even if it may not be much in some situations, about entities with some similarity to you, because you are part of this reference class.
Now, if we apply such reasoning: if you have just decided to be ethical, you have just made it a bit more likely that everyone else is ethical. (Of course, this is how you experience it; in reality, your behavior was dictated by your being part of the reference class, but you don’t experience the making of decisions from that perspective.) The same goes for being unethical.
You could apply this to simulation scenarios, but you could also apply it to a very large or infinite cosmos—such as some kind of multiverse model. In such a scenario, you might consider each ethical act you perform as increasing the probability that ethical acts are occurring all over reality—even as increasing the proportion of ethical acts in an infinity of acts. It might make temporal discounting a bit less disturbing (to anyone bothered by it): if you act ethically with regard to the parts of reality you can observe, predict and control, your “effect” on the reference class means that you can consider yourself to be making it more likely that other entities, beyond the range of your direct observation, prediction or control, are also behaving ethically within their local environment.
I want to be clear here that I am under no illusion that there is some kind of “magical causal link”. We might say that this is about how our decisions are really determined anyway. Deciding as if “the decision” influences the distant past, another galaxy, another world in some expansive cosmology or a higher level in a computer simulated reality is no different, qualitatively, from deciding as if “your decision” affects anything else in everyday life—when in fact, your decision is determined by outside things.
This may be a bit uncomfortably like certain Buddhist ideas really, though a Buddhist might have more to say on that if one comes along, and I promise that any such similarity wasn’t deliberate.
One weird idea relating to this: The greater the number of beings, civilizations, etc that you know about, the more the behavior of these people will dominate your reference class. If you live in a Star Trek reality, with aliens all over the place, what you know about the ethics of these aliens will be very important, and your own behavior will be only a small part of it: You will reduce the amount of “non-causal influence” that you attribute to your decisions. On the other hand, if you don’t know of any aliens, etc, your own behaviour might be telling you much more about the behavior of other civilizations.
P.S. Remember that anyone who votes this comment down is influencing the reference class of users on Less Wrong who will be reading your comments. Likewise for anyone who votes it up. :) Hurting me only hurts yourselves! (All right—only a bit, I admit.)
That idea used to make me afraid to die before I wrote up all the stories I had thought up. Sadly, that is not even possible any more.
One big difference between an upload and a person simulated in your mind is that the upload can interact with the environment.