Just want to preemptively flag that in the EA biosecurity community we follow a general norm against brainstorming novel ways to cause harm with biology. Basic reasoning is that succeeding in this task ≈ generating info hazards.
Abstractly postulating a hypothetical virus with high virulence + transmissibility and a long latent period can be useful for facilitating thinking, but brainstorming the specifics of how to actually accomplish this—as some folks in these and some nearby comments are trending in the direction of starting to do—poses risks that exceed the likely benefits.
Happy to discuss further if interested, feel free to DM me.
Thanks for the heads-up; that makes sense.