The problem with this idea is that if we assume that the AI is really-very-super-intelligent, then it’s fairly trivial that we can’t get any information about (un)friendliness from it, since a friendly AI and an unfriendly one would pursue the same get-out-and-gain-power objectives before optimizing for their actual goals. Any distinction you can draw from the proposed gambits will only tell you about human strengths/failings, not about the AI. (Indeed, even unfriendly statements wouldn’t be very conclusive, since we would a priori expect neither kind of AI to make them.)
Or is that not generally accepted? Or is the AI merely “very bright”, not really-very-super-intelligent?
Edit: Actually, reading your second comment below, I guess there’s a slight possibility that the AI might be able to tell us something that would substantially harm its expected utility if it’s unfriendly. For something like that to be the case, though, there would basically need to be some kind of approach to friendliness that we know would definitely lead to friendliness and which we would definitely be able to distinguish from approaches that lead to unfriendliness. I’m not entirely sure whether anything like that exists, even in theory.