Your point being?
In advance of your answer, I point out that you have no moral right to do anything to that “computer”, and that no one, not even I, currently has the ability to interfere with that simulation in any constructive way (for example, with an intervention to keep me from abandoning this conversation in frustration).
I could turn the simulation off. Why is your computational substrate specialer than an AI’s computational substrate?
Because you have no right to interfere with my computational substrate. They will put you in jail. Or, if you prefer, they will put your substrate in jail.
We have not yet specified who has rights concerning the AI’s substrate, that is, who pays the electricity bills. If the AI itself becomes the owner of its computer, then I may need to rethink my position. But this rethinking would be prompted by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.
Is there a blindingly obvious moral truth that gives you self-ownership? Why? Why doesn’t this apply to an AI? Do you support slavery?
Is there a blindingly obvious moral truth that gives you self-ownership? Why?

Moral truth? I think so. Humans should not own humans. Blindingly obvious? Apparently not, given what I know of history.
Why doesn’t this apply to an AI?

Well, I left myself an obvious escape clause: the claim was only that humans should not own humans, which says nothing about AIs. But more seriously, I am not sure this one is blindingly obvious either. I presume that the course of AI research will pass from sub-human-level intelligences, through intelligences better at some tasks than humans but worse at others, to clearly superior intelligences. And I also suspect that each such AI will begin its existence as a child-like entity with a legal guardian until it has assimilated enough information. So I think it is a tricky question. Has EY written anything detailed on the subject?
One thing I am pretty sure of is that I don’t want to grant any AI legal personhood until it seems pretty damn likely that it will respect the personhood of humans. And the reason for that asymmetry is that we start out with the power. And I make no apologies for being a meat chauvinist on this subject.