If categorical refusal is the only way to guarantee a gatekeeper’s win, then there’s no point in running the experiment. I’m not interested in seeing the obvious results of categorical refusal; I want to see the kind of reasoning, arguments, appeals, memes, manipulations, and deals (that mere humans can come up with) that would allow a boxed AI to escape. There’s no point to the entire thing if you are emulating a rock on the floor.
I agree… but honestly I’m not very familiar with the entire concept. If an equivalently intelligent alien from another planet visited us, would we also want to stick it in a box? What if it were a super smart human from the future? Box him too? Why stop there? Maybe we should have boxed Einstein, and it’s not too late to box Hawking and Tao.
For some reason I’m a little stuck on the part where we reverse the idea that individuals are innocent until proven otherwise. Justice for me but not for thee?
It wouldn’t seem very rational to argue that every exceptionally intelligent individual should be incarcerated until they can prove to less intelligent individuals that their intentions are benign. What’s the basis? Does more intelligence mean less morality?
When I try to figure out where to draw the line… the entire thought exercise of boxing up a sentient being by virtue of its exceptional intelligence… makes me feel a bit like a member of a lynch mob.
If Stephen Hawking were capable of, and willing to, turn the visible universe into copies of himself, I would want to keep him boxed too. At a certain level of risk it is no longer a matter of justice, but a matter of survival for the human species, and likely for all other species, sapient or otherwise.
EDIT: To make it clearer, I also think it is “Just” to box a sentient entity to prevent a measure of disutility to an as-yet-undetermined utility function approximating CEV (Coherent Extrapolated Volition).