Albert is able to predict with absolute certainty that we would make a decision that we would regret, but is unable to communicate the justification for that certainty? That is wildly inconsistent.
If the results are communicated with perfect clarity, but the recipient is insufficiently moved by the evidence—for example because it cannot be presented in a form that feels real enough to emotionally justify an extreme response that is logically justified—then the AI must manipulate us to bring the emotional justification in line with the logical one. This isn’t actually extreme; things as simple as altering the format in which data is presented, while remaining perfectly truthful, still count as manipulation. Even presenting conclusions as a PowerPoint rather than plain text, if the AI determines there will be a different response (which there will be), necessarily qualifies.
In general, an agent that can reliably predict your actions based on its responses cannot help but manipulate you; the mere fact of providing you with information will influence your actions in a known way, and therefore is manipulation.
That’s an interesting “must”.
You’re misquoting me.
That’s an interesting “must”.
This is a commonly-used grammatical structure in which ‘must’ acts as a conditional. What’s your problem?
Conditional?
Your sentence structure is: if {condition} then {subject} MUST {verb} in order to {purpose}. Here “must” carries the meaning of necessity and lack of choice.
No, ‘must’ here is acting as a logical conditional; it could be rephrased as ‘if {condition} and {subject} does not {verb}, then {purpose} will not occur’ without changing the denotation or even connotation. This isn’t a rare structure, and is the usual interpretation of ‘must’ in sentences of this kind. Leaving off the {purpose} would change the dominant parsing to the imperative sense of ‘must’.
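To spell the claimed equivalence out (a rough formalization, only my own gloss, treating the slots as atomic propositions: C for the condition, V for ‘the AI manipulates us’, P for ‘the emotional justification ends up in line with the logical one’):
C → (¬V → ¬P)   ≡   C → (P → V)
The right-hand side says that, given C, manipulating is necessary for the purpose, which is what ‘must … in order to’ asserts; the left-hand side is the rephrasing above. The two are contrapositives of each other, so the truth conditions are identical.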
It’s curious that we parse your sentence differently. To me your original sentence unambiguously contains “the imperative sense of must” and your rephrasing is very different connotationally.
Let’s try it:
“If the results are communicated with perfect clarity, but the recipient is insufficiently moved by the evidence … and the AI does not manipulate us, then the emotional justification will not be in line with the logical one.”
Yep, sounds completely different to my ear and conveys a different meaning.
I agree that an AI with such amazing knowledge should be unusually good at communicating its justifications effectively (because it can anticipate responses, etc.). I’m of the opinion that this is one of the numerous minor reasons for being skeptical of traditional religions; their supposedly all-knowing gods seem surprisingly bad at conveying messages clearly to humans. But to return to VAuroch’s point, in order for the scenario to be “wildly inconsistent,” the AI would have to be perfect at communicating such justifications, not merely unusually good. Even such amazing predictive ability does not seem to me sufficient to guarantee perfection.
Albert doesn’t have to be perfect at communication. He doesn’t even have to be good at it. He just needs to have confidence that no action or decision will be made until both parties (human operators and Albert) are satisfied that they fully understand each other… which seems like a common sense rule to me.
Whether it’s common sense is irrelevant; it’s not realistically achievable even for humans, who have much smaller inferential distances between them than a human would have from an AI.