I roughly support slowing AI progress (although the space of possibilities has way more dimensions than just slow vs fast). Some takes on “Reasons one might try to accelerate progress”:
Avoid/delay a race with China + Keep the good guys in the lead. Sure, if you think you can differentially accelerate better actors, that’s worth noticing. (And maybe long timelines mean more actors in general, which seems bad on net.) I feel pretty uncertain about the magnitude of these factors, though.
Smooth out takeoff. Sure, but be careful—this factor suggests faster progress is good insofar as it’s due to greater spending. This is consistent with trying to slow timelines by e.g. getting labs to publish less.
Another factor is non-AI x-risk: if human-level AI solves other risks, and greater exposure to other risks doesn’t help with AI, this is a force in favor of rolling the dice on AI sooner. (I roughly believe non-AI x-risk is much smaller than the increase in x-risk from shorter timelines, but I’m flagging this as cruxy; if I came to believe that e.g. biorisk was much bigger, I would support accelerating AI.)