I agree with 1 and think that race dynamics makes the situation considerably worse when we only have access to prosaic approaches. (Though I don’t think this is the biggest issue with these approaches.)
I think I expect a period substantially longer than several months by default due to slower takeoff than this. (More like 2 years than 2 months.)
Insofar as the hope was for governments to step in at some point, I think the best and easiest point for them to step in is actually when AIs are already becoming very powerful:
Prior to this point, we don’t get substantial value from pausing, especially if we’re not pausing/dismantling all semiconductor R&D globally.
Prior to this point, AI won’t be concerning enough for governments to take aggressive action.
At this point, additional time is extremely useful due to access to powerful AIs.
The main counterargument is that at this point more powerful AI will also look very attractive, so stopping will seem too expensive.
So, I don’t really see very compelling alternatives to push on at the margin as far as “metastrategy” (though I’m not sure I know exactly what you’re pointing at here). Pushing for bigger asks seems fine, but probably less leveraged.
I actually don’t think control is a great meme for the interests of labs that purely optimize for power, as it is a relatively legible ask which is potentially considerably more expensive than just “our model looks aligned because we red-teamed it”, which is more like the default IMO.
The same way “secure these model weights from China” isn’t a great meme for these interests IMO.