People don’t want to believe that you can control an AI, for the same reason they don’t want to believe that their life stories could be designed by someone else. Reactance. The moment you suggest that a person’s life can only go one way, they want it to go another way. They want to have that power. Otherwise, they feel caged.
People think that humans have that power. And so they believe that any truly human-level AI must have that power.
More generally, people think of truly, genuinely human-level minds as black boxes. They don't know how the black boxes work, and they don't want to know. Scrutinizing the contents of the black box means two things:
1. The black box only does what it was programmed, or originally configured, to do: it is slowly grinding out its predetermined destiny, fixed before the black box started any real thinking.
2. You can predict what the black box will do next.
People cringe at both of these thoughts, because they are both constraining. And people hate to be constrained, even in abstract, philosophical ways.
Thought 2 is even worse than thought 1. Not only is 2 constraining (we do only what a competent predictor says we will do), but it makes us vulnerable. If a predictor knows we are going to turn left instead of right, we're more vulnerable than if he doesn't know which way we'll turn.
[The counter-argument that complete unpredictability makes you vulnerable too, because predictable agents can be trusted and therefore enjoy the benefits of social cooperation, just doesn't have the same pull on people's emotions.]
It's important to realize that this blind spot applies to both AIs and humans. It's important to realize that we're fortunate AIs are predictable, that they aren't black boxes, because then we can program them. We can program them to be happy slaves, or anything else, for our own benefit, even if we have to give up some misguided positive illusions about ourselves in the process.