Albert doesn’t have to be perfect at communication. He doesn’t even have to be good at it. He just needs to have confidence that no action or decision will be made until both parties (human operators and Albert) are satisfied that they fully understand each other… which seems like a common sense rule to me.
Whether it’s common sense is irrelevant; it’s not realistically achievable even for humans, who have much smaller inferential distances between them than a human would have from an AI.