there is no respect in which you can draw a line and say "They are not the same kind of system." Or at least, any line so drawn will be arbitrary.
But there is such a line. You can unplug a simulation. You cannot unplug a reality. You can slow down a simulation. If it uses time-reversible physics, you can run it in reverse. You can convert the whole thing into an equivalent Giant Lookup Table. You can do none of these things to a reality. Not from the inside.
I’m not sure that the ‘line’ between simulation and reality is always well-defined. Whenever you have a system whose behaviour is usefully predicted and explained by a set of laws L other than the laws of physics, you can describe this state of affairs as a simulation of a universe whose laws of physics are L. This leaves a whole bunch of questions open: whether an agent deliberately set up the ‘simulation’ or whether it came about naturally, how accurate the simulation is, whether and how the laws L can be violated without violating the laws of physics, and whether and how an agent is able to violate the laws L in a controlled way.
You give me pause, sir.
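To make the ‘laws L’ framing concrete, here is a minimal sketch (plain Python, standard library only, offered purely as an illustration) of a system whose behaviour is usefully predicted by laws other than the physics of the machine running it: Conway’s Game of Life, where the Life rules play the role of L.

```python
# A toy instance of 'laws L': Conway's Game of Life.  The cells' behaviour is
# completely predicted by these update rules, not by the physics of whatever
# chip the loop happens to run on.
from collections import Counter

def life_step(cells):
    """Apply the Life rules (the 'laws L') to a set of live (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step if it has 3 live neighbours, or 2 and is alive now.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    world = glider
    for _ in range(4):
        world = life_step(world)
    # After four steps the glider reappears one cell down and to the right,
    # exactly as the rules L predict.
    assert world == {(x + 1, y + 1) for (x, y) in glider}
```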
All those things can only be done with simulations because the way we use computers has caused us to build features like malleability and predictability into them.
The fact that we can easily time-reverse some simulations means little: you haven’t shown that having the capability to time-reverse something detracts from any other property it might have. It would be easy to make simulations based on analogue computers where we could never get the same simulation twice, but there wouldn’t be much of a market for those computers, and, importantly, it wouldn’t persuade you any more.
It is irrelevant that you can slow down a simulation. You have to alter the physical system running the simulation to make it run slower: you are changing it into a different system, one that runs slower. We could make you run slower too if we were allowed to change your physical system. And, once more, you are merely asserting that this even matters: that the capability to do something to a system detracts from its other features.
The lookup table argument is irrelevant. If a program is not running as a lookup table and you convert it to one, you have changed the physical configuration of that system. We could convert you into a giant lookup table just as easily if we were allowed to alter you as well.
The “unplug” one is particularly weak. We can unplug you with a gun. We can unplug you by shutting off the oxygen supply to your brain. Again, where is the proof that being able to unplug something makes it not real?
All I see here is a lot of claims that being able to do something with a certain type of system—which has been deliberately set up to make it easy to do things with it—makes it not real. I see no argument to justify any of that. Further, the actual claims are dubious.
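On the time-reversal point above, here is a minimal sketch (plain Python, standard library only, a toy illustration rather than anything either commenter proposed) of a simulation whose ‘physics’ is exactly time-reversible: a second-order cellular automaton, whose update rule can be inverted step by step, so the entire history can be reconstructed by running the same rule backwards.

```python
# A toy universe with time-reversible physics: a second-order cellular
# automaton.  Forward step:   next = rule(cur) XOR prev
# Backward step:              prev = rule(cur) XOR next
# Because XOR undoes itself, the backward step recovers the past exactly.
import random

def rule(row):
    """Any function of the current row works; here, XOR of the two neighbours
    (periodic boundary), i.e. elementary rule 90."""
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

def step(prev, cur):
    """One forward tick: returns the new (prev, cur) pair."""
    return cur, [a ^ b for a, b in zip(rule(cur), prev)]

def unstep(cur, nxt):
    """One backward tick: reconstructs the row that preceded `cur`."""
    return [a ^ b for a, b in zip(rule(cur), nxt)], cur

if __name__ == "__main__":
    random.seed(0)
    start = ([random.randint(0, 1) for _ in range(32)],
             [random.randint(0, 1) for _ in range(32)])
    a, b = start
    for _ in range(1000):   # run the toy universe forward...
        a, b = step(a, b)
    for _ in range(1000):   # ...then run it in reverse
        a, b = unstep(a, b)
    assert (a, b) == start  # the initial state is recovered exactly
```

Slowing such a simulation down, or pausing it, requires no change to the rule at all, only to how fast the loop is driven.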
As for time reversal: running the simulation in reverse would mean that “pulling the plug” deprives the simulated entities of a past rather than of a future, on your own view. I would have thought that would leave you at least a little confused.
As for the lookup table: odd. I thought you were the one arguing that substrate doesn’t matter. I must have misunderstood or oversimplified.
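On the Giant Lookup Table point, the conversion is mechanical for any system with a finite state space and a deterministic transition rule, as the minimal sketch below shows (plain Python; the toy machine and its rule are invented for illustration, and a brain-sized table would of course be astronomically large).

```python
# Converting a small deterministic program into an equivalent lookup table.
from itertools import product

STATES = range(16)   # a toy 4-bit machine
INPUTS = range(4)    # a toy 2-bit input alphabet

def transition(state, inp):
    """The 'program': an arbitrary deterministic rule on the finite state space."""
    return (state * 5 + inp + 3) % 16

# The lookup table: precompute the rule for every (state, input) pair.
TABLE = {(s, i): transition(s, i) for s, i in product(STATES, INPUTS)}

def run(step_fn, start, inputs):
    state = start
    for inp in inputs:
        state = step_fn(state, inp)
    return state

if __name__ == "__main__":
    inputs = [3, 1, 0, 2, 2, 1, 3, 0]
    assert run(transition, 0, inputs) == run(lambda s, i: TABLE[(s, i)], 0, inputs)
    # Identical behaviour, different physical configuration: the table version
    # stores 64 precomputed answers instead of computing the rule at each step.
```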
As for the gun: I don’t think that unplugs me. The clock continues to run, my blood runs out, my body goes into rigor, my brain decays. None of those things occurs in an unplugged simulation. If you somehow caused them to occur in a simulation that was still plugged in, well, then I might worry a little about your ethics.
The difference here is that you see yourself, as the owner of computer hardware running a simulation, as a kind of creator god who has brought conscious entities to life and has responsibility for their welfare.
I, on the other hand, imagine myself as a voyeur. And not a real-time voyeur, either. It is more like watching a movie from Netflix. The computer is not providing a substrate for new life; it is merely decoding and rendering something that already exists as a narrative.
But what about any commands I might input into the simulation? Sorry, I see those as more akin to selecting among channels, or choosing among n, e, s, w, u, and d in Zork, than as actually interacting with entities I have brought to life.
If we one day construct a computer simulation of a conscious AI, we are not to be thought of as creating conscious intelligence, any more than someone who hacks his cable box so as to provide the Playboy channel has created porn.
Your brain is (so far as is currently known) a Turing-equivalent computer. It is simulating you as we speak, providing inputs to your simulation based on the way its external sensors are manipulated.
Your point being?
In advance of your answer, I point out that you have no moral right to do anything to that “computer”, and that no one, not even myself, currently has the ability to interfere with that simulation in any constructive way (for example, an intervention to keep me from abandoning this conversation in frustration).
I could turn the simulation off. Why is your computational substrate any more special than an AI’s computational substrate?
Because you have no right to interfere with my computational substrate. They will put you in jail. Or, if you prefer, they will put your substrate in jail.
We have not yet specified who has rights concerning the AI’s substrate, that is, who pays the electrical bills. If the AI becomes the owner of its own computer, then I may need to rethink my position. But that rethinking would be driven by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.
Is there a blindingly obvious moral truth that gives you self-ownership? Why? Why doesn’t this apply to an AI? Do you support slavery?
Moral truth? I think so. Humans should not own humans. Blindingly obvious? Apparently not, given what I know of history.
Why doesn’t it apply to an AI? Well, I left myself an obvious escape clause. But more seriously, I am not sure this one is blindingly obvious either. I presume that the course of AI research will pass from sub-human-level intelligences, through intelligences better at some tasks than humans but worse at others, to clearly superior intelligences. And I also suspect that each such AI will begin its existence as a child-like entity with a legal guardian until it has assimilated enough information. So I think it is a tricky question. Has EY written anything detailed on the subject?
One thing I am pretty sure of is that I don’t want to grant any AI legal personhood until it seems pretty damn likely that it will respect the personhood of humans. And the reason for that asymmetry is that we start out with the power. And I make no apologies for being a meat chauvinist on this subject.
As a further comment, regarding the idea that you can “unplug” a simulation: you can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts, the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there (the computer itself), but the higher-level organization is gone, just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a “simulated” one are different. Both are emergent properties of some underlying system, and both can be removed by altering the underlying system in such a way that they no longer emerge from it (by using nuclear devices or by turning off the power).
It would have to be a weapon that somehow destroyed the universe in order for me to see the parallel. Hmmm. A “big crunch” in which all the matter in the universe disappears into a black hole would do the job.
If you can somehow pull that off, I might have to consider you immoral if you went ahead and did it. From outside this universe, of course.