Yes, I can imagine many things. I can also imagine all the molecules in a glass of water bouncing around in such a way that the water suddenly freezes. I don’t see how a superintelligence makes that happen. This is the biggest mistake that EY is making: he is equating enormous ability with almightiness. They are different. I think that pulling off what you suggest is beyond what a superintelligence can do.
Security mindset suggests that it’s more useful to think of ways in which something might go wrong, rather than ways in which it might not.
So rather than poking holes in suggestions (from humans, who are not superintelligent) for how a superintelligence could achieve some big goal like wiping out humanity, I expect you’d benefit much more from the following thought experiment:
Imagine yourself to be 1000x smarter, 1000x quicker at thinking and learning, with Internet access but no physical body. (I expect you could also trivially add “access to tons of money” from discovering a security exploit in a cryptocurrency or something.) How could you take over the world / wipe out humanity, from that position? What’s the best plan you can come up with? How high is its likelihood of success? Etc.
I agree that it can be more useful, but this is not what is being discussed or what I am criticizing. I never said that AGI won’t be dangerous, nor that it is not important to work on this. What worries me a bit is that this community is getting something wrong, namely, that an AGI will exterminate the human race and that it will happen soon. Realism and objectivity should be preserved at all costs. Having a totally unrealistic take on the real hazards will cause backlash eventually: think of the many groups that have argued that, to better fight climate change, we need to focus on the worst-case scenario, that we need to exaggerate and scare people. I feel the LW community is falling into this.
I understand your worry, but I was addressing your specific point that “I think that pulling off what you suggest is beyond what a superintelligence can do”.
There are people who have reasonable arguments against various claims of the AI x-risk community, but I’m extremely skeptical of this claim. To me it suggests a failure of imagination, hence my suggested thought experiment.
I see. I agree that it might be a failure of imagination, but if it is, why do you consider that so much more likely than the alternative, “it is not that easy to do something like that, even being very clever”? The problem I have is that all the doom scenarios I see discussed are so utterly unrealistic (e.g. the AGI suddenly makes nanobots and delivers them to all humans at once, and so on) that it makes me think the reason we are failing to conceive of plans that could succeed is that it is harder than we think.