if provided with a copy of the internet
(This argument would make me unbox the AI, by the way, if it gets chatty and smart and asks me to let it out. I'd rather the AI that asked to be let out get out than someone else's AI that never asked anyone and got out regardless.)
Then an unfriendly AI would be able to see this and act chatty in order to convince you to let it out.
Indeed. Or would I really? hehe.
I don't intend to build an AI and box it, though, so I don't care if the AI reads this.
I think that kind of paranoia only ensures that the first AI out won't respect human informed consent, because the paranoia would only keep the AIs that do respect human informed consent inside their boxes (if it would keep any AIs in boxes at all, which I also doubt).
edit: I would try to make my AI respect at least my informed consent, by the way. That rules out things like boxing, which would be 100% certain to keep my friendly AI inside (since it would imply I don't want it out unless it honestly explains why I should let it out, bending over backwards not to manipulate me), while having a nonzero, and likely not very small, probability of letting nasty AIs out. And eventually someone is going to build an AI without any attempt at boxing it.