If you could deceive the AI that easily, I think it would probably be simpler to get all the benefits of having a gatekeeper without actually using one.
Please elaborate: What are the benefits of a Gatekeeper? How could you get them without one?
If you want a gatekeeper at all, but definitely don’t want to let the AI out, I would think the benefit of having one is to permit communication with the AI so you can draw on its superhuman intelligence. If you can use the setup you just described, you could skip the step of ever using gatekeepers who actually have the power to let the AI out.
I think you are right; I just shifted and convoluted the problem somewhat, but in principle it remains the same:
To utilize the AI, you need to get information from it. That information could in theory be infected with a persuasive hyperstimulus, effectively making the recipient an actuator of the AI.
Well, in practice the additional security layer might win us some time. More on this in the update to my original comment.
Persuasion/hyperstimulation aren’t the only way. Maybe those can be countered by narrowing the interface, e.g. to yes/no replies, so the AI is used only as an oracle (“Should we do X?”). Of course we wouldn’t follow its advice if we had the impression that doing so could enable it to escape. But its strategy might evade our ‘radar’: e.g. it could lead us to empower a person who, unknown to us, will set it free.
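Purely as an illustration of what “narrowing the interface” could mean, here is a minimal Python sketch; nothing here comes from the original exchange, and `raw_oracle`, `ask_oracle`, and `Answer` are made-up names standing in for whatever would actually produce and filter the boxed AI’s output:

```python
from enum import Enum


class Answer(Enum):
    """The only values the narrowed channel is allowed to pass through."""
    YES = "yes"
    NO = "no"


def ask_oracle(question: str, raw_oracle) -> Answer:
    """Ask the (hypothetical) boxed AI a question and force its reply
    through a yes/no-only channel; anything else is rejected outright."""
    reply = raw_oracle(question).strip().lower()
    if reply == "yes":
        return Answer.YES
    if reply == "no":
        return Answer.NO
    # Any attempt to smuggle extra content through the channel is dropped.
    raise ValueError("non-binary reply rejected by the interface")


# Example usage with a stand-in oracle that just answers "yes":
if __name__ == "__main__":
    print(ask_oracle("Should we do X?", lambda q: "yes"))  # Answer.YES
```

Even a channel this narrow only blocks rich persuasive payloads; as the comment above notes, the pattern of answers can still steer us toward outcomes whose consequences we don’t foresee.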