No. There is no reason to suppose that any manufactured system will have any emotional stance towards us of any kind, friendly or unfriendly. In fact, even if the idea of “human-level” made sense, we could have a more-than-human-level super-intelligent machine, and still have it bear no emotional stance towards other entities whatsoever. Nor need it have any lust for power or political ambitions, unless we set out to construct such a thing (which, AFAIK, nobody is doing). Think of an unworldly boffin who just wants to be left alone to think, and does not care a whit for changing the world for better or for worse, and has no intentions or desires, but simply answers questions that are put to it and thinks about things that it is asked to think about. It has no ambition and in any case no means to achieve any far-reaching changes even if it “wanted” to do so. It seems to me that this is what a super-intelligent question-answering system would be like. I see no inherent, even slight, danger arising from the presence of such a device.
I can’t help but cringe reading that. But it really depends on what he meant by “inherent”.
It’s amazing, and a little scary, that he thinks “just want[ing] to be left alone to think” couldn’t lead to any harm. The more resources an agent gathers, the more thinking it can do...
...and the more forcefully it can ensure it is left alone.
Yeah, this was an odd thing for him to write. The danger doesn’t require any “emotional” qualities at all, just the wrong goal and sufficient power to achieve it.
Oracle AIs aren’t dangerous? Well, it’s possible.