Yes, I can imagine many things. I can also imagine all the molecules in a glass of water bouncing around in such a way that the water suddenly freezes. I don’t see how a superintelligence makes that happen. This is the biggest mistake that EY is making: he is equating enormous ability with almightiness. They are different. I think that pulling off what you suggest is beyond what a superintelligence can do.
Security mindset suggests that it’s more useful to think of ways in which something might go wrong, rather than ways in which it might not.
So rather than poking holes in suggestions (made by humans, who are not superintelligent) for how a superintelligence could achieve some big goal like wiping out humanity, I expect you’d benefit much more from doing the following thought experiment:
Imagine yourself to be 1000x smarter, 1000x quicker at thinking and learning, with Internet access but no physical body. (I expect you could also trivially add “access to tons of money” from discovering a security exploit in a cryptocurrency or something.) How could you take over the world / wipe out humanity, from that position? What’s the best plan you can come up with? How high is its likelihood of success? Etc.
I agree that it can be more useful, but this is not what is being discussed or what I am criticizing. I never said that AGI won’t be dangerous, nor that it is not important to work on this. What worries me a bit is that this community is getting something wrong, namely, that an AGI will exterminate the human race and that it will happen soon. Realism and objectivity should be preserved at all costs. Having a totally unrealistic take on the real hazards will cause backlash eventually: think of the many groups that have argued that, to better fight climate change, we need to consider the worst-case scenario, that we need to exaggerate and scare people. I feel the LW community is falling into this.
I understand your worry, but I was addressing your specific point that “I think that pulling off what you suggest is beyond what a superintelligence can do”.
There are people who have reasonable arguments against various claims of the AI x-risk community, but I’m extremely skeptical of this claim. To me it suggests a failure of imagination, hence my suggested thought experiment.
I see. I agree that it might be a failure of imagination, but if it is, why do you consider that way more likely than the alternative, “it is not that easy to do something like that, even being very clever”? The problem I have is that all the doom scenarios I see discussed are so utterly unrealistic (e.g. the AGI suddenly makes nanobots and delivers them to all humans at once, and so on) that it makes me think our failure to conceive of plans that could succeed is because it might be harder than we think.
There would also be a fraction of human beings who would probably be immune. How does the superintelligence solve that? Can it also know the full diversity of human immune systems?
Untreated rabies has a survival rate of literally zero. It’s not inconceivable that another virus could be equally lethal.
(Edit: not literally zero, because not every exposure leads to symptoms, but surviving symptomatic rabies is incredibly rare.)
I agree with your broader point that a superintelligence could design incredibly lethal, highly communicable diseases. However, I’d note that it’s only symptomatic untreated rabies that has a survival rate of zero. It’s entirely possible (even likely) to be bitten by a rabid animal and not contract rabies.
Many factors influence your odds of developing symptomatic rabies, including bite location, bite depth, and the pathogen load of the biting animal. The effects of pathogen inoculations are actually quite dependent on initial conditions. Presumably, the inoculum in non-transmitting bites is greater than zero, so it is actually possible for the immune system to fight off a rabies infection. It’s just that, conditional on having failed to do so at the start of infection, the odds of doing so afterwards are tiny.
You’re actually right about rabies; I found sources saying that about 14% of dogs survive, and that there was a group of unvaccinated people who had rabies antibodies but never had symptoms.
How do you guarantee that all humans get exposed to a significant dose before they start reacting? How do you rule out that there are whole populations (maybe in places with large genetic diversity, like India or Africa) that happen to be immune?
Just want to preemptively flag that in the EA biosecurity community we follow a general norm against brainstorming novel ways to cause harm with biology. Basic reasoning is that succeeding in this task ≈ generating info hazards.
Abstractly postulating a hypothetical virus with high virulence and transmissibility and a long latent period can be useful for facilitating thinking, but brainstorming the specifics of how to actually accomplish this (as some folks in this and nearby comment threads are starting to do) poses risks that exceed the likely benefits.
Happy to discuss further if interested, feel free to DM me.
Thanks for the heads-up, it makes sense