I have doubts that it is even true “in principle” unless the goals are hard-wired in and unmodifiable by the intelligence. Do you really think that someone would agree to be OCD or schizophrenic if they had a choice? For higher levels of intelligence, I would think they would be even more discriminating as to goal-states they would accept.
As for the argument by thelittledoctor, the evil genius dictator model is broken even for highly intelligent humans, let alone super-intelligences. Those "intelligent" demagogues are rarely, if ever, more than 2 standard deviations above average human intelligence, and that definitely doesn't count as "highly intelligent" as far as I'm concerned.
It seems irrelevant whether the AI is quote-unquote “highly intelligent” as long as it’s clever enough to take over a country and kill several million people.
The usual argument is that we are likely to be able to build machines that won’t want to modify their goals.
IMO, the more pressing issue with something like OCD is that it might interfere with intelligence tests, in which case you could argue that an OCD superintelligent machine is not really intelligent, since it is using its intelligence to screw itself.
This seems to be a corner case to me. The intended point is more that you could engineer an evil genius, or an autistic mind child.