Thinking seriously about this, I wonder how, over time (by which I mean anything longer than two hours), either Stockholm or Lima syndrome could be avoided. In fact, won't one eventually morph into the other given a long enough interaction? Either way, the result is eventual AI success. And the assumption that the AI is in fact the "captive" may not even be correct, since it may not have an attachment psychology at all.

No single human can ever safely be the gatekeeper. I'd suggest you'd need at least a two-key system, as with nuclear weapons.