Worriers often invoke a Pascal’s wager sort of calculus, wherein any tiny risk of this nightmare scenario could justify large cuts in AI progress. But that seems to assume that it is relatively easy to assure the same total future progress, just spread out over a longer time period. I instead fear that overall economic growth and technical progress are more fragile than this assumes. Consider how regulations inspired by nuclear power nightmare scenarios have for seventy years prevented most of its potential from being realized. I have also seen progress on many other promising techs mostly stopped, not merely slowed, via regulation inspired by vague fears. In fact, progress seems to me to be slowing down worldwide due to excess fear-induced regulation.
This to me is the key paragraph. If people’s worries about AI x-risk drive them in a positive direction, such as doing safety research, there’s nothing wrong with that, even if they’re mistaken. But if the response is to strangle the technology in the crib via regulation, now you’re doing a lot of harm based on unproven philosophical speculation, likely more than you realize. (In fact, it’s quite easy to imagine ways that attempting to regulate AI to death could actually increase long-term AI x-risk, though that’s far from the only possible harm.)