Safe Stasis Fallacy
A critical failure mode in many discussions of technological risk is the assumption that maintaining the status quo for technology will also maintain the status quo for society. Lewis Anslow suggests that this “propensity to treat technological stagnation as safer than technological acceleration” is a fallacy. I agree that it is an important failure of reasoning among some EAs, and I want to call it out clearly.
One obvious example of this, flagged by Anslow, is the anti-nuclear movement. It was not an explicitly pro-coal position, but because the pressure for economic growth continued, the result of delaying nuclear technology wasn’t less power usage; it was more coal. To the extent that the movement succeeded in its narrow aims, it damaged the environment.
The risk from artificial intelligence systems today is arguably significant, but stopping future progress won’t reduce the impact of innovations that already exist. Halting where we are today would still leave us with mass disinformation assisted by generative AI, and we would still see continued progress toward automating huge parts of modern work as existing systems are deployed in new ways, even without more capable models. It also seems vanishingly unlikely that the pressures on middle-class jobs, artists, and writers would decrease even if we rolled back the last five years of progress in AI; we simply wouldn’t have the accompanying productivity gains that could be used to pay for UBI or other programs.
All of that said, the perils of stasis aren’t necessarily greater than those of progress. This is not a question of safety versus risk; it is a risk-risk tradeoff. And the balance of that tradeoff is debatable: it is entirely possible to agree that technological stasis carries significant risks, and even that AI would solve real problems, and still debate which strategy to promote, whether that is accelerating through the time of perils, pursuing differential technological development, or shutting it all down.