It’s not only unlikely; what’s much worse is that it points to the wrong reasons. It suggests that we should fear AI trying to take over the world or eliminate all people, as if AI would have an incentive to do that. It stems from nothing more than anthropomorphisation of AI, imagining it as some evil genius.
This is very bad, because smart people can see that those arguments are flawed and get the impression that they are the only arguments against unbounded development of AGI. While reverse stupidity isn’t intelligence, it’s much harder to find good reasons why we should solve AI friendliness when there are lots of distracting strawmen.
That was me half a year ago. I used to think that anybody who fears AI may bring harm is a loony. All the reasons I heard from people were that AI wouldn’t know emotions, that AI would try to harmfully save people from themselves, that AI would want to take over the world, that AI would be infected by a virus or hacked, or that AI would be just outright evil. I can easily debunk all of the above.
And then I read about Paperclip Maximizer and radically changed my mind.
I might have got to that point much sooner, if not for all the strawman distractions.
I think you are reading too much into it. Skynet as an example of AI risk is fine, if cartoonish.
Of course, we are very far away from strong AI and therefore from existential AI risk.