“That’s one lesson you could take away. Another might be: governments will be very willing to restrict the use of novel technologies, even at colossal expense, in the face of even a small risk of large harms.”
Governments cooperate a lot more than Eliezer seemed to be suggesting they do. One example is the ban on CFCs in response to the ozone hole. There was also significant cooperation between central banks in mitigating certain consequences of the 2008 financial crash.
However, I would tend to agree that there is virtually zero chance of governments banning dangerous AGI research because:
(i) The technology is incredibly militarily significant; and
(ii) Cheating is very easy.
(Parenthetically, this also has a number of other implications which make limiting AGI to friendly or aligned AGI highly unlikely, even if it is possible to do that in a timely fashion.)
In addition, as computing power increases, such research becomes easier to conduct, so the number of people and situations the ban has to cover grows over time. This means that an effective ban would require a degree of surveillance and control that is incompatible with how at least some societies are organized, and beyond the capacity of others.
(The above assumes that governments are focused and competent enough to understand the risks of AGI and react to them in a timely manner. I do not think this is likely.)