I agree that an AI with such amazing knowledge should be unusually good at communicating its justifications effectively (because it can anticipate responses, etc.). I'm of the opinion that this is one of the numerous minor reasons for being skeptical of traditional religions; their supposedly all-knowing gods seem surprisingly bad at conveying messages clearly to humans. But to return to VAuroch's point, in order for the scenario to be "wildly inconsistent," the AI would have to be perfect at communicating such justifications, not merely unusually good. Even such amazing predictive ability does not seem to me sufficient to guarantee perfection.
Albert doesn’t have to be perfect at communication. He doesn’t even have to be good at it. He just needs to have confidence that no action or decision will be made until both parties (human operators and Albert) are satisfied that they fully understand each other… which seems like a common sense rule to me.
Whether it’s common sense is irrelevant; it’s not realistically achievable even for humans, who have much smaller inferential distances between them than a human would have from an AI.