Yes, but the AI was told, “make humans happy.” Not, “give humans what they actually want.”
And, you assume, it is not intelligent enough to realise that the intended meaning of "make people happy" is "give people what they actually want"—although you and I can see that. You are assuming that it is a subintelligence. You have proven Loosemore's point.
You say things like “‘Make humans happy’ implies that...” and “subtleties implicit in...” You seem to think these implications are simple, but they really aren’t. They really, really aren’t.
We are smart enough to see that the Dopamine Drip isn't intended. The AI is smarter than us. So....
This is why I say you’re anthropomorphizing.
I say that you are assuming the AI is dumber than us, when it is stipulated as being smarter.