Ok. The obvious follow-up is: under what conditions is it a bad thing? Your college example is a good one — are you saying you want to prevent AIs from making changes similar to those a university makes to its students, but on a perhaps larger scale?
Well, there’s a formal answer: if an AI can, under condition C, convince any human of any belief B, then condition C is not sufficient to constrain the AI’s power, and the persuasion process is unlikely to be truth-tracking.
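One way to sketch the quantifier structure of that claim (the notation here is mine, not the speaker's, and "Constrains" is an informal predicate, not a defined term):

```latex
\[
\bigl(\forall B \;\exists\, \text{strategy } s \text{ available in } C:\
\text{the AI, using } s, \text{ induces belief } B \text{ in the human}\bigr)
\;\Longrightarrow\; \neg\,\mathrm{Constrains}(C)
\]
```

The key point is the order of quantifiers: the belief B is chosen first, arbitrarily, and the AI can still find a winning strategy within C, so the human's resulting belief carries no information about B's truth.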
That’s a sufficient condition for C being insufficient, but not a necessary one.