Seems to me that an AI would be most likely to try temptation. A Gatekeeper who starts revealing personal details wouldn't last very long. It would be like being locked in conversation with Hannibal Lecter for two hours.
The risks seem great enough that communication with the AI should be limited to a small, well-vetted group of people and even then only in short bursts. Anyone with the means to free it should, as a rule, be strictly prohibited from direct contact. Extreme? Maybe, but then again this would be the most dangerous prisoner ever to be in custody. The options are either to trust it or take the greatest care in containing it.