Well, there’s a formal answer: if an AI can, under condition C, convince any human of any belief B, then condition C is not sufficient to constrain the AI’s power, and the process is unlikely to be truth-tracking.
That’s a sufficient condition for C being insufficient, but not a necessary one.
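A rough formalization of that one-way implication, with Conv, Constrains, and TruthTracking as stand-in predicates I'm introducing here (Conv(C, B) meaning "under condition C, the AI can convince any human of belief B"):

$$\big(\forall B : \mathrm{Conv}(C, B)\big) \;\Longrightarrow\; \neg\,\mathrm{Constrains}(C) \,\wedge\, \neg\,\mathrm{TruthTracking}(C)$$

The implication only runs left to right: if there is some B the AI can't instill, that alone doesn't establish that C is adequate.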