I understand your worry, but I was addressing your specific point that “I think that pulling off what you suggest is beyond what a superintelligence can do”.
There are people who have reasonable arguments against various claims of the AI x-risk community, but I’m extremely skeptical of this claim. To me it suggests a failure of imagination, hence my suggested thought experiment.
I see. I agree that it might be a failure of imagination, but if it is, why do you consider that so much more likely than the alternative, namely that "it is not that easy to do something like that, even being very clever"? The problem I have is that all the doom scenarios I see discussed are so utterly unrealistic (e.g. the AGI suddenly makes nanobots and delivers them to all humans at once, and so on) that it makes me think our failure to conceive of plans that could succeed is because it might be harder than we think.