My main thought on those two problems is that I agree they are an issue in a world where AGI is avoided for ideological reasons, but in a world where AGI gets fully developed, it seems like they could simply be prevented by having a good AGI monitor everything and nip such dangers in the bud.
Indeed, my main hope for humanity does route through developing a good AGI monitor to prevent these risks. And, conditional on that happening, the threat you describe would move into top place.
I don’t think the route to having a robust worldwide AGI monitor capable of preventing the harms I describe is a safe or smooth one though. That’s where I expect most of humanity’s risk currently lies.
One could maybe say that our current system is mainly a mixture of capitalism, which leads to the problem I describe, and democratically governed nation-states with militaries, which lead to the problem you describe. The question is how we transition our production versus how we transition our security.
Hmm. Did you read the comment I linked? I don’t place enough predicted risk weight on state actors for them to be the reason my top two threats are the top two. The danger, to me, is that the high variability of individual human behavior, combined with the extremely low cost of launching a self-replicating weapon, means that all of humanity is currently endangered by a single bad actor (human or AGI).
I took a quick peek at it at first, but now I’ve read it more properly.
I think the main question is, why would state actors (which currently provide security by suppressing threats) allow this?
I don’t believe they currently possess the means to prevent it.
Creating a devastating bioweapon is currently technically challenging, but it is not resource-intensive and not easy for governments to detect. If government policy around biological materials and equipment does not shift dramatically in the coming three years, the technical difficulty will probably continue to drop with no corresponding increase in prevention.
I’m currently engaged in studying AI-related biorisk, so I know a lot of details about the current threat situation that I cannot disclose. I will share what I can.
https://securebio.org/ai/