I believe we should limit AI development to below 0.2 OOMs/year (orders of magnitude per year), which would amount to a slow, continuous takeoff.
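For concreteness, a minimal sketch of the arithmetic behind that cap, assuming "OOMs" here means orders of magnitude of training compute (the comment doesn't pin down the quantity being scaled): 0.2 OOMs/year is roughly a 1.58x growth factor per year, or 100x per decade.

```python
# Sketch of what a 0.2 OOMs/year cap implies, assuming "OOMs" means
# orders of magnitude of training compute (an assumption; the comment
# above does not specify the quantity being scaled).

OOMS_PER_YEAR = 0.2

# One OOM is a factor of 10, so the per-year growth multiplier is 10^0.2.
yearly_multiplier = 10 ** OOMS_PER_YEAR
print(f"Yearly growth factor: {yearly_multiplier:.2f}x")  # ~1.58x

# Cumulative scaling under the cap.
for years in (1, 5, 10):
    total = 10 ** (OOMS_PER_YEAR * years)
    print(f"After {years:>2} years: {total:.3g}x the starting compute")
```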
Something like that sounds like a sensible proposal to me.
I’m not sure I endorse it exactly as stated (I believe returns to intelligence are strongly sublinear, so a fixed slow rate of scaling may take too long to reach transformative AI for my taste), but I endorse the general idea of deliberately controlling AI takeoff at a pace we can handle, both for technical AI safety work and for governance/societal responses.
I was pushing back against the idea of starting an indefinite moratorium now while we harvest the gains from developments to date. That could create a hardware overhang: compute would keep getting cheaper and more plentiful during the pause, moving us from the current regime, where only a handful of companies can train strong AI systems, to one where hundreds or thousands of actors can do so.
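To put a rough number on the overhang worry, here is a minimal sketch; the two-year doubling time for compute price-performance is an illustrative assumption, not a figure from this thread:

```python
# Illustration of the hardware-overhang concern: if compute
# price-performance keeps improving during a moratorium, a fixed-size
# training run gets cheaper, putting it within reach of more actors.
# The 2-year doubling time is an illustrative assumption.

DOUBLING_TIME_YEARS = 2.0

def cost_fraction_after(pause_years: float) -> float:
    """Fraction of today's cost for the same training run after a pause."""
    return 0.5 ** (pause_years / DOUBLING_TIME_YEARS)

for pause in (2, 4, 6, 10):
    print(f"After a {pause}-year pause: same run at "
          f"{cost_fraction_after(pause):.1%} of today's cost")
```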