Hmm. Did you read the comment I linked? I don't place enough predicted risk weight on state actors for them to be the reason my top 2 threats are the top 2. The danger, to me, is that the high variability of individual human behavior, combined with the extremely low cost of launching a self-replicating weapon, means that all of humanity is currently endangered by any single bad actor (human or AGI).
I took a quick peek at it at first, but now I’ve read it more properly.
I think the main question is, why would state actors (which currently provide security by suppressing threats) allow this?
I don’t believe they currently possess the means to prevent it.
Creating a devastating bioweapon is currently technically challenging, but it is not resource-intensive and not easy for governments to detect. If government policy around biological materials and equipment does not shift dramatically in the coming three years, the technical difficulty will probably continue to drop with no corresponding increase in prevention.
I'm currently engaged in studying AI-related biorisk, so I know a lot of details about the current threat situation that I cannot disclose. I will share what I can.
https://securebio.org/ai/