(Note, this post seems to have been originally published roughly a year ago, and so my reply here might be responding to an out-of-date argument.)
Slow down AI with stupid regulations
I guess I’m just much more optimistic than most people here that slowing down AI via regulations is the default, reasonable path that we should expect society to take in the absence of further interventions from longtermists. The reason for my belief has perhaps become more apparent in the last six months, but it’s worth repeating. My argument is built on a few simple premises:
Most people aren’t excited about the idea of transformative AI radically upending human life, and so will demand regulations if they think that transformative AI is imminent. I expect regulations to focus on mitigating harms from job losses, making sure the systems are reliable, and ensuring that powerful models can’t be exploited by terrorists.
AI will get gradually more powerful over the course of years, rather than remaining hidden in the background until a godlike AI suddenly appears and reshapes the world. During those years, the technology will be rolled out at large scale, leading most people (especially tech-focused young people) to recognize that these systems are becoming powerful.
AIs will be relatively easy to regulate in the short term, since AI progress is currently driven largely by scaling up compute budgets, and large AI supercomputers are easy to monitor. GPU production is highly centralized, making it almost trivial for governments to limit production, which would raise prices and delay AI progress.
Given that we’re probably going to get regulations by default, I care much more about ensuring that those regulations are thoughtful and well-targeted. I’m averse to pushing for just any regulation whatsoever on the grounds that regulation is “the best hope we have”. I don’t think stupid regulations are the best hope we have; moreover, locking in stupid regulations could make the situation even worse!