The fact that we can easily time reverse some simulations means little: You haven’t shown that having the capability to time reverse something detracts from other properties that it might have.
Well, on your view it would mean that “pulling the plug” deprives the simulated entities of a past rather than of a future. I would have thought that would leave you at least a little confused.
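For what it is worth, the “easily time reverse” claim is simple to make concrete. The sketch below is purely illustrative (a toy reversible update rule chosen only for illustration, not anything specified in this exchange): a simulation whose step function is exactly invertible, so running it backward recovers every earlier state bit for bit.

```python
# Toy simulation with an exactly invertible update rule, so the entire
# history can be reconstructed by running the same dynamics backward.

def step(prev, curr):
    """One forward step of a second-order reversible rule on a ring of bits."""
    n = len(curr)
    nxt = [(curr[(i - 1) % n] ^ curr[(i + 1) % n]) ^ prev[i] for i in range(n)]
    return curr, nxt


def unstep(curr, nxt):
    """Invert a step: solve the same rule for the previous state."""
    n = len(curr)
    prev = [(curr[(i - 1) % n] ^ curr[(i + 1) % n]) ^ nxt[i] for i in range(n)]
    return prev, curr


if __name__ == "__main__":
    import random

    random.seed(0)
    prev0 = [random.randint(0, 1) for _ in range(16)]
    curr0 = [random.randint(0, 1) for _ in range(16)]

    prev, curr = prev0, curr0
    for _ in range(1000):   # run the simulation forward
        prev, curr = step(prev, curr)
    for _ in range(1000):   # then run it backward
        prev, curr = unstep(prev, curr)

    assert (prev, curr) == (prev0, curr0)  # the past is fully recoverable
```

Whether that recoverability carries any moral weight is, of course, exactly what is in dispute here.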
The lookup table argument is irrelevant. If a program is not running a lookup table, and you convert it to one, you have changed the physical configuration of that system. We could convert you into a giant lookup table just as easily if we are allowed to alter you as well.
Odd. I thought you were the one arguing that substrate doesn’t matter. I must have misunderstood or oversimplified.
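As an aside on what “convert it to one” could involve, here is a minimal sketch, with a made-up stand-in function, of replacing a program over a small finite input domain with a precomputed lookup table. The input/output behaviour is unchanged, but the physical configuration doing the work clearly is not, which is the point being made above.

```python
# Replacing a computation with a giant lookup table: identical input/output
# behaviour, very different physical configuration (all the "work" happens
# once, at table-build time).

def compute(x: int) -> int:
    """A stand-in computation over a small, finite input domain."""
    return (x * x + 7) % 256


DOMAIN = range(256)

# The conversion: tabulate every possible input exactly once.
LOOKUP = {x: compute(x) for x in DOMAIN}


def compute_via_table(x: int) -> int:
    """At runtime nothing is computed beyond a single table read."""
    return LOOKUP[x]


# Behaviourally indistinguishable over the whole domain.
assert all(compute(x) == compute_via_table(x) for x in DOMAIN)
```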
The “unplug” one is particularly weak. We can unplug you with a gun.
I don’t think so. The clock continues to run, my blood runs out, my body goes into rigor, my brain decays. None of those things occur in an unplugged simulation. If you did somehow cause them to occur in a simulation still plugged in, well, then I might worry a little about your ethics.
The difference here is that you see yourself, as the owner of computer hardware running a simulation, as a kind of creator god who has brought conscious entities to life and has responsibility for their welfare.
I, on the other hand, imagine myself as a voyeur. And not a real-time voyeur, either. It is more like watching a movie from Netflix. The computer is not providing a substrate for new life; it is merely decoding and rendering something that already exists as a narrative.
But what about any commands I might input into the simulation? Sorry, I see those as more akin to selecting among channels, or choosing among n, e, s, w, u, and d in Zork, than as actually interacting with entities I have brought to life.
If we one day construct a computer simulation of a conscious AI, we are not to be thought of as creating conscious intelligence, any more than someone who hacks his cable box so as to provide the Playboy channel has created porn.
Your brain is (so far as is currently known) a Turing-equivalent computer. It is simulating you as we speak, providing inputs to your simulation based on the way its external sensors are manipulated.
Your point being?

In advance of your answer, I point out that you have no moral rights to do anything to that “computer”, and that no one, even myself, currently has the ability to interfere with that simulation in any constructive way—for example, an intervention to keep me from abandoning this conversation in frustration.
I could turn the simulation off. Why is your computational substrate specialer than an AI’s computational substrate?

Because you have no right to interfere with my computational substrate. They will put you in jail. Or, if you prefer, they will put your substrate in jail.
We have not yet specified who has rights concerning the AI’s substrate—who pays the electrical bills. If the owner of the AI’s computer becomes the AI, then I may need to rethink my position. But this rethinking is caused by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.
Is there a blindingly obvious moral truth that gives you self-ownership? Why? Why doesn’t this apply to an AI? Do you support slavery?
Moral truth? I think so. Humans should not own humans. Blindingly obvious? Apparently not, given what I know of history.
Why doesn’t this apply to an AI? Well, I left myself an obvious escape clause. But more seriously, I am not sure this one is blindingly obvious either. I presume that the course of AI research will pass from sub-human-level intelligences, through intelligences better at some tasks than humans but worse at others, to clearly superior intelligences. And I also suspect that each such AI will begin its existence as a child-like entity who will have a legal guardian until it has assimilated enough information. So I think it is a tricky question. Has EY written anything detailed on the subject?
One thing I am pretty sure of is that I don’t want to grant any AI legal personhood until it seems pretty damn likely that it will respect the personhood of humans. And the reason for that asymmetry is that we start out with the power. And I make no apologies for being a meat chauvinist on this subject.