Slowing AI: Bad takes
This shortform was going to be a post in Slowing AI, but its tone is off.
This shortform is very non-exhaustive.
Bad take #1: China won’t slow, so the West shouldn’t either
There is a real consideration here. Reasonable variants of this take include:
- What matters for safety is not just slowing but also the practices of the organizations that build powerful AI. Insofar as the West is safer and China won’t slow, it’s worth sacrificing some Western slowing to preserve Western lead.
- What matters for safety is not just slowing but especially slowing near the end. Differentially slowing the West now would reduce its ability to slow later (or even cause it to speed later). So differentially slowing the West now is bad.
(Set aside the fact that slowing the West generally also slows China, both because their progress is correlated and because ideas pass from the West to China. Set aside, too, the question of whether China will try to slow and how correlated that is with the West slowing.)
In some cases slowing the West would be worth burning lead time. But slowing AI doesn’t just mean the West slowing itself down: some interventions would slow both spheres similarly, or even differentially slow China, most notably export controls, reducing the diffusion of ideas, and improved migration policy.
See West-China relation.
Bad take #2: slowing can create a compute overhang, so all slowing is bad
Taboo “overhang.”
Yes, there is a real consideration here: insofar as slowing now risks speeding later, we should notice that.
But in some cases slowing now would be worth a little speeding later. Moreover, some kinds of slowing don’t cause faster progress later at all: for example, reducing the diffusion of ideas, slowing hardware progress, and stable, enforceable policy regimes that slow AI.
See Quickly scaling up compute.
Bad take #3: powerful AI helps alignment research, so we shouldn’t slow it
(Set aside the question of how much powerful AI helps alignment research.) If powerful AI is important for alignment research, that means we should aim to increase the time we have with powerful AI, not to make powerful AI appear sooner. Speeding up its arrival doesn’t buy more time with it.
Bad take #4: it would be harder for unaligned AI to take over in a world with less compute available (for it to hijack), and failed takeover attempts would be good, so it’s better for unaligned AI to try to take over soon
No: running AI systems seems likely to be cheap, and there’s already plenty of compute for an unaligned AI to hijack.