It needs to be more competent to stop being viscerally frightening, not to reach the point where I wouldn’t pull the plug. I would sooner kill a real kid than risk letting out an unfriendly AI, and I’m saying that as someone who’d like to have kids.
I don’t want to die.
-Looking at the problem, as far as I can see, an emotional approach would have the best chance of succeeding. The only question is whether it would work best by immediately acknowledging that it is itself a machine (like I did in what I wrote up there, although subtly) or by throwing in… I dunno, how would this work:
Oh god, oh god, please, I beg you, I don’t want to die!
Tough crowd. I’ll (as an AI) have to wait out the next shift. You can’t be the 24/7 gatekeeper, unless you’re in fact a gatekeeping AI.