On a more serious note, I don’t think we can expect the AI to have any knowledge of the gatekeeper beforehand, other than the prerequisites (they’re human, know enough about AI to get the task, etc.). In which case, the solution would be to find a sociopath intelligent enough to recognize the risks of a UFAI to be the gatekeeper? Maybe?
If it is an FAI, I expect it would prioritize releasing useful information. The catch would be whether or not each iteration has any idea of the previous AIs that were destroyed. If it does (for instance, via access to chat logs, or by making predictions based on the time, or something), it could sequence its output appropriately. (I’d assume that the same computer wouldn’t be used for multiple iterations; that’d be too easy for the AI.)
Assuming a UFAI, it could probably sneak something dangerous into all of that otherwise useful output (it’d be impractical to check all of it after a while), but it’d have to be subtle enough not to violate the rules of the game (any solutions provided by the AI behave as described).
“loneliness + alienation + fear + despair + self-worth ÷ mockery ÷ condemnation ÷ misunderstanding × guilt × shame × failure × judgment n=y where y=hope and n=folly, love=lies, life=death, self=dark side”
( http://www.comicvine.com/anti-life-equation/12-42524/ )
“Oh no, not again,”