I think it's something that doesn't match either. I think both 1 and 2 are possible, but what we should probably aim for is 3.
Suppose a superintelligent AI whose goal involves giving humans as much actual control as possible.
One might object that such an AI would be able to manipulate humans so well that any "choice" humanity faces would be predetermined.
But all our choices are technically predetermined anyway, because the universe is deterministic (modulo unimportant quantum details).
The AI could manipulate humans, but is programmed not to want to.
This isn't an impairment scheme like boxing or an oracle AI. This is the genie that listens to your wish without manipulating you, and then carries it out in the spirit you asked. If the human(s?) tell the AI to become a paperclip maximizer, the AI will. (Maybe with a little pop-up box, "It looks like you are about to destroy the universe. Are you sure you want to do that?", to prevent mistakes.) And the humans are making that decision using brains that haven't been deliberately tampered with by the AI.