At this point, I wouldn’t agree that that is the main concern. I do think it’s a solid third place though.
My top two are:
Deliberately harmful use
Some human decides they want to kill some other humans, and an AI enables them to acquire more power to do so. This could scale really fast, and actually give a single human the capability to wipe out nearly all of humanity. I would not like to bet my loved ones’ lives on there not being some person in the world crazy enough to try this.
Rogue AGI
I think it’s possible that some crazy human might decide that AI becoming an independent species is a good thing actually, and deliberately create and release an AI system which is sufficiently capable and general that it manages to take off. There’s also the possibility that an AI will independently develop Omohundro Drives and escape of its own volition. That seems less likely to me, but the two probabilities sum together, so… pick your nightmare, I guess.
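To spell out the “probabilities sum” point (a minimal sketch; the labels A and B are just mine for illustration, and it assumes the two routes rarely coincide): let A be deliberate creation-and-release by a human and B be independent escape. Then

P(\text{rogue AGI}) = P(A \cup B) = P(A) + P(B) - P(A \cap B) \approx P(A) + P(B)

So even if B is the less likely route, the combined risk is at least \max(P(A), P(B)) and close to the sum whenever the overlap term is negligible.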
Both of these threats route through the possibility of AI enabling offense-dominant self-replicating weaponry (bioweapons and/or nanotech). Self-replicating weaponry is a particularly dangerous category, because it lets a small actor cheaply set off huge, irrevocable effects.
For more details, see this other comment I wrote today.
My main thought with those two problems is that I agree they are an issue in the world where AGI is avoided for ideological reasons, but it seems like in the world where AGI gets fully developed, they could simply be prevented by having a good AGI monitor everything and nip such dangers in the bud.
Indeed, my main hope for humanity does route through developing a good AGI monitor to prevent these risks. And, conditional on that happening, the threat you describe would move into top place.
I don’t think the route to having a robust worldwide AGI monitor capable of preventing the harms I describe is a safe or smooth one though. That’s where I expect most of humanity’s risk currently lies.
One could maybe say that our current system is mainly a mixture of capitalism, which leads to the problem I describe, and democratically-governed nation-states with militaries, which leads to the problem you describe. How do we transition our production, versus how do we transition our security?
Hmm. Did you read the comment I linked? I don’t place enough predicted risk weight on state actors for them to be the reason my top two threats are the top two. The danger, to me, is that the high variability of individual human behavior, combined with the extremely low cost of launching a self-replicating weapon, means that all of humanity is currently endangered by a single bad actor (human or AGI).
I took a quick peek at it at first, but now I’ve read it more properly.
I think the main question is, why would state actors (which currently provide security by suppressing threats) allow this?
I don’t believe they currently possess the means to prevent it.
Creating a devastating bioweapon is currently technically challenging, but it is not resource-intensive and not easy for governments to detect. If government policy around biological materials and equipment does not shift dramatically in the coming three years, the technical difficulty will probably continue to drop with no corresponding increase in prevention.
I’m currently engaged in studying AI-related biorisk, so I know a lot of details about the current threat situation that I cannot disclose. I will share what I can: https://securebio.org/ai/