There would also be a fraction of human beings who would probably be immune. How does the superintelligence solve that? Can it also know the full diversity of human immune systems?
Untreated rabies has a survival rate of literally zero. It’s not inconceivable that another virus could be equally lethal.
(Edit: not literally zero, because not every exposure leads to symptoms, but surviving symptomatic rabies is incredibly rare.)
I agree with your broader point that a superintelligence could design incredibly lethal, highly communicable diseases. However, I’d note that it’s only symptomatic untreated rabies that has a survival rate of zero. It’s entirely possible (even likely) to be bitten by a rabid animal and not contract rabies.
Many factors influence your odds of developing symptomatic rabies, including bite location, bite depth, and the pathogen load of the biting animal. The effects of pathogen inoculation are actually quite dependent on initial conditions. Presumably, the inoculum in non-transmitting bites is greater than zero, so it is actually possible for the immune system to fight off a rabies infection. It’s just that, conditional on having failed to do so at the start of infection, the odds of doing so afterwards are tiny.
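To make the conditional-probability point concrete, here is a minimal sketch in Python. The numbers are made-up illustrative assumptions, not real rabies statistics; the point is only that survival conditional on exposure can be substantial even when survival conditional on symptomatic infection is near zero.

```python
# Illustrative only: both probabilities below are assumptions for the sake of the example.
p_symptomatic_given_exposure = 0.3   # assumed chance a bite leads to symptomatic infection
p_survive_given_symptomatic = 0.001  # assumed chance of surviving once symptomatic

# Survival conditional on exposure includes everyone whose immune system
# (or a low inoculum) prevented symptomatic infection in the first place.
p_survive_given_exposure = (
    (1 - p_symptomatic_given_exposure) * 1.0
    + p_symptomatic_given_exposure * p_survive_given_symptomatic
)

print(f"P(survive | exposed)     = {p_survive_given_exposure:.3f}")    # ~0.700
print(f"P(survive | symptomatic) = {p_survive_given_symptomatic:.3f}")  # 0.001
```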
You’re actually right about rabies; I found sources saying that about 14% of infected dogs survive, and that there is a group of unvaccinated people who have rabies antibodies but never developed symptoms.
How do you guarantee that all humans get exposed to a significant dose before they start reacting? How do you guarantee that there aren’t entire populations (maybe in places with large genetic diversity, like India or Africa) that happen to be immune?
Just want to preemptively flag that in the EA biosecurity community we follow a general norm against brainstorming novel ways to cause harm with biology. Basic reasoning is that succeeding in this task ≈ generating info hazards.
Abstractly postulating a hypothetical virus with high virulence + transmissibility and a long latent period can be useful for facilitating thinking, but brainstorming the specifics of how to actually accomplish this (as some folks in these and nearby comments are starting to do) poses risks that exceed the likely benefits.
Happy to discuss further if interested, feel free to DM me.
Thanks for the heads-up; that makes sense.