To take the pessimistic side on AI, I see some reasons why AI probably won’t be regulated in a way that matters:
I suspect the no-fire-alarm hypothesis is roughly correct: by and large, people won't react until it's too late. My biggest reason comes from the AI effect, where people downplay the intelligence of whatever AI has already achieved. This is dangerous because it means people don't treat systems like GPT-3 or AlphaFold 2 as warning shots, and it updates me toward thinking that people won't seriously call for regulation until AGI is actually here, which is far too late. We did get a fire alarm for nukes in Hiroshima, but that was a lucky one, arriving before many nukes or nuclear power plants had been built, and we can't rely on luck saving us again.
Politicization. The COVID-19 response worries me much more than it worries you: its positives outweighed its negatives only because there wasn't any X-risk involved. In particular, the strong initial response decayed pretty fast, and in our world virtually everything gets politicized into a culture war as soon as it actually impacts people's lives. A lot of the past competence in handling, say, nukes or genetic engineering came from the fact that politics didn't yet eat everything, so no one had much motivation to defect. If we had to deal with nukes or genetic engineering under today's politics, I'd guess at least 40% of the US population would support acquiring those technologies solely to destroy the other side.
Speaking of "far too late": most technologies that got successfully regulated either had everyone panicking (like radiation from nuclear reactors) or weren't very developed yet (like human genetic engineering and cloning).
Finally, no one can safely have AGI, since AGI itself is a threat thanks to inner-optimizer concerns. So the solution of having governments control it is unworkable: governments themselves have large incentives to get AGI, à la nukes, and little reason not to.
> Politicization. The COVID-19 response worries me much more than it worries you: its positives outweighed its negatives only because there wasn't any X-risk involved. In particular, the strong initial response decayed pretty fast, and in our world virtually everything gets politicized into a culture war as soon as it actually impacts people's lives.
Note that I’m simply pointing out that people will probably try to regulate AI, and that this could delay AI timelines. I’m not proposing that we should be optimistic about regulation. Indeed, I’m quite pessimistic about heavy-handed government regulation of AI, but for reasons I’m not going to go into here.
Separately, the rapid decay of the COVID-19 response likely had little to do with politicization, given that pandemic responses decayed in every nation in the world except China. My guess is that, historically, regulations on the manufacture of particular technologies have not decayed so quickly.