I agree in broad strokes that what your plan outlines sounds ideal. My concern is with its workability, particularly the enforcement side: preventing misuse of dangerous technology (including AGI and recursive self-improvement loops) by bad actors, human or AI.
I fear that there’s great potential for defection from the detente, and that this potential will grow over time. My expectation is that even if large training runs and the construction of new large datacenters were halted worldwide today, algorithmic progress would continue. Within ten years I’d expect the cost of training and running an RSI-capable AGI to keep dropping as hardware and algorithms improved. At some point during that ten-year period, it would come within reach of small private datacenters, then personal home servers (e.g. bitcoin mining rigs), then ordinary personal computers.
If my view on this is correct, then during this ten-year period the governments of the world would not only need to coordinate to block RSI in large datacenters, but to expand their surveillance and control to ever smaller and more personal sources of compute. Eventually they’d need to confiscate personal computers beyond a certain power level, close all non-government-controlled datacenters, prevent the public sale of hardware components that could be assembled into compute clusters, monitor the web and block all encrypted traffic to prevent federated learning, and continually inspect the facilities of every other government (including all secret military facilities) to prevent any of them from defecting from the ban on AI progress.
I don’t think that will happen, no matter how scared the leadership of any particular company or government gets. There are just too many options for someone, somewhere, to defect, and the costs of control are too high.
Furthermore, I think the risks from AI helping with bioweapon design and creation already exist and are growing quickly. I don’t think training a dangerous bio-assistant AI requires anywhere near that much compute or algorithmic sophistication. I anticipate that the barriers to genetic engineering (costs, scarcity of equipment, and the difficulty of the wet-lab work) will continue to fall over the next decade. This will happen at the same time as compute gets cheaper and open-source AI models get more useful. I would absolutely be in favor of governments passing laws to put further barriers in the way of bad actors creating bioweapons, but I don’t think such laws have much hope of being enforced consistently for long. If a home computer and some crude lab equipment hacked together from hardware parts are all a bad actor needs, the surveillance required to prevent illicit bioengineering anywhere in the world would be extreme.
So, would it be wise and desirable to limit deployed AI systems to those proven safe? Certainly. But how?
Dear Max,
If you would like more confirmation of the immediacy and likely trajectory of the biorisk from AI, please have a private chat with Kevin Esvelt, who is also at MIT. I speak with such concern about biorisk from AI because I’ve been helping his new AI Biorisk Eval team at SecureBio for the past year. Things are looking pretty scary on that front.