My guess is that you think heavy government involvement should occur before/during the creation of ASL-4 systems, since you’re pretty concerned about risks from ASL-4 systems being developed in non-SL5 contexts.
Yes, I think heavy government involvement should occur once AIs can substantially accelerate general-purpose R&D, and AI R&D in particular. I think this occurs at some point during ASL-4.
In practice, there might be a lag between when the government should get involved and when it actually does, so I think companies should be prepared to implement SL5 without heavy government assistance. I think SL5 will involve massive operating costs, particularly if implemented on short notice, but it should be possible for a competent actor to implement with a big effort.
(I’m also somewhat skeptical that the government will actually be that helpful in implementing SL5, relative to just hiring people with the relevant expertise, who will often have formerly worked for various governments. The difficulty of SL5 implementation also depends heavily on what costs you’re willing to accept: full airgapping is conceptually simple and should be workable, but it prevents serving a public API.)
In general, I’d be interested in seeing more about how you (and Buck) are thinking about policy stuff + government involvement.
I don’t think we should get into this here, but we are in fact thinking about these topics and will likely discuss this more in future posts.
And I suspect readers will be better able to evaluate the AI control plan if some of the assumptions/expectations around government involvement are spelled out more clearly.
Agreed, though I think that “do something like control” is more robust than “the AI control plan” (which we haven’t even really clearly spelled out publicly, though we do have something in mind).