I’m hoping for discussion on that post, and quite ready to change my draft comment, or not submit one, based on those arguments. After putting a bunch of thought into it, my planned comment will recommend forming a committee that can work in private to investigate the opportunities and risks of AI development, to inform future policy. I will note that this was Roosevelt’s response to Einstein’s letter on the potential of nuclear weaponry.
I hope that such a committee will conclude that yes, there are some big dangers in expectation. I will emphasize the disagreement among experts, and suggest that the sane thing to do is put real effort into sorting out the many conflicting claims and possibilities, while also pursuing our current best guesses. I think any request for a slowdown is wasted, given the request’s note about reducing regulatory barriers. But I will note that there are dangers both to our economy, from potentially rapid job loss, and to our security, from adversaries stealing or copying our AI, such that we may currently be building tools and weapons that will be used against us. I think I will not emphasize x-risk, and may not even include it. But I will probably mention that predictions of when we’ll reach human-level autonomous operation vary widely, so we’re not sure how far we are from creating what is effectively a new intelligent species. I’m hoping that triggers the right intuitions about danger.
Again, I’m highly uncertain and very open to changing my mind on what to say.
Original comment:
This raises the question: what should we say?
Fortunately, I’ve almost finished a post about this. It analyzes many aspects of the question “do we want governments to recognize the potential of AGI?”
Unfortunately, it doesn’t answer the question. There are strong points on both sides, and it needs more careful thought.
Nonetheless, I’ll probably get it out tomorrow since it’s almost finished anyway, and this request for public comments might make a good forcing function for all of us to put some thought into it.
I think that, on net, getting governments more AGI-pilled carries relatively few added risks compared to their continuing on their current course; governments are broadly AI-pilled even if not AGI/ASI-pilled, and are already doing most of the accelerating actions an AGI-accelerator would want.
I wasn’t able to finish that post in the few minutes I’ve got so far today, so here’s the super short version. I remain highly uncertain whether my comments will include any mention of AGI.
I think whether AGI-pilling governments is a good idea is quite complex. Pushing the government to become aware of AGI x-risks will probably decelerate progress, but it could instead accelerate it if the conclusion is “build it first; don’t worry, we’ll be super careful when we get close”.
Even if it does help with alignment, it’s not necessarily net good. If governments take control early enough to prevent proliferation of AGI, that helps a lot with the risks of misalignment and catastrophic misuse. The US could even cooperate with China to prevent proliferation to other countries and to nongovernmental groups, just as the US cooperated with Russia on nuclear nonproliferation.
But government control also raises the risks of power concentration. Intent-aligned AGI in untrustworthy hands could create a permanent dictatorship and an unbreakable police state. The current governments of both the US and China don’t seem like the best types to control the future. So it’s a matter of balancing fear of centralized power against fear of misaligned AGI.
This also needs to be balanced against the possibility of misuse of intent-aligned AGI if it does proliferate broadly; see If we solve alignment, do we die anyway?
If I had a firm estimate of how hard technical alignment is, I’d have a better answer. But I don’t, and I think the best objective conclusion, taking in all of the arguments made to date and the very wide variance in opinion even among those who’ve thought deeply about it, is that nobody has a very good estimate. (Edit: I mean estimates between very very hard and modestly tricky. I don’t know of anyone who’s addressed the hard parts and concluded that it happens by default.)
Neither do we have a good estimate of how likely individuals in power would be to use AGI well or poorly, in various circumstances (unchallenged hegemony vs. close race dynamics).
Edit: I finished that post on this topic: Whether governments will control AGI is important and neglected.