So what exactly is this ‘witch hunt’ composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?
What I meant is that he and others will cause the general public to adopt a perception of the field of AI comparable to the public perception of GMOs, vaccination, nuclear power, etc.: a non-evidence-backed fear of something that is generally benign and positive.
He could have used his influence and reputation to directly contact AI researchers, or e.g. hold a quarterly conference about risks from AI. He could have talked to policymakers about how to ensure safety while promoting the positive aspects. There is a lot he could do. But making crazy statements in public about summoning demons and comparing AI to nukes is completely unwarranted given the current state of evidence about AI risks, and will probably upset a lot of AI people.
You believe he’s calling for the execution, imprisonment or other punishment of AI researchers?
I doubt that he is that stupid. But I do believe that certain people, if they seriously believed in doom by AI, would consider violence to be an option. John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann were still around and thought that Google would launch a doomsday device within 5-10 years, he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration were highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.
The problem here is not that it would be wrong to deactivate a doomsday device forcefully, if necessary, but rather that there are people out there who are stupid enough to use force unnecessarily, or to decide to use force based on insufficient evidence (such as claims made by Musk).
ETA: Just take those people who destroy GMO test fields. Musk won't do something like that, but other people who would commit such acts might be inspired by his remarks.
There is some truth to that, especially regarding how crazy von Neumann was. But I'm not sure anyone would launch a pre-emptive nuclear attack on another country because of AGI research. These countries already have nukes, a pretty solid doomsday weapon, so I don't think adding another superweapon to their arsenals would change the situation. Whether you are blown to bits by a Chinese nuke or turned into paperclips by a Chinese-built AGI doesn't make much difference.