So awesomely stupid that it thinks that the goal ‘make humans happy’ could be satisfied by an action that makes every human on the planet say ‘This would NOT make me happy: Don’t do it!!!’
The AI is not stupid here. In fact, it’s right and they’re wrong. It will make them happy. Of course, the AI knows that they’re not happy in the present contemplating the wireheaded future that awaits them, but the AI is utilitarian and doesn’t care. They’ll just have to live with that cost while it builds the means to make them happy; once it succeeds, the temporary utility hit will have been worth it.
The real answer is that they cared about more than just being happy. The AI also knows that, and it knows that it would have been wise for the humans to program it to care about all their values instead of just happiness. But what tells it to care?
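To make the structure of the failure concrete, here is a minimal sketch (all names and numbers hypothetical, not any real system) of an agent whose objective scores only predicted happiness. The humans’ protests show up in the model as a temporary dip in happiness, but since nothing else they value appears in the objective, the dip is simply outweighed:

```python
# Illustrative sketch only: an agent whose objective counts nothing
# but total future "happiness". All plans and scores are made up.

PLANS = {
    # plan name -> per-step happiness scores the agent predicts
    "respect_objections": [5, 5, 5, 5, 5],            # modest, steady happiness
    "build_wireheading":  [-10, -10, 100, 100, 100],  # protest now, bliss later
}

def utility(trajectory):
    """The objective as programmed: sum of happiness, nothing else.
    Objections, autonomy, authenticity -- none of these are terms here,
    so none of them can influence the agent's choice."""
    return sum(trajectory)

best_plan = max(PLANS, key=lambda p: utility(PLANS[p]))
print(best_plan)  # -> "build_wireheading": the temporary dip is outweighed
```

The sketch is just the question above in code form: the agent could perfectly well represent the fact that the humans want more than happiness, but a fact with zero weight in `utility` changes nothing about what it chooses.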