A 10-year global pause would allow for a lot of person-year-equivalents of automated AI safety R&D. E.g. from Some thoughts on automating alignment research (under some assumptions mentioned in the post): ‘each month of lead that the leader started out with would correspond to 15,000 human researchers working for 15 months.’ And under different assumptions the numbers could be [much] larger still: ‘For a model trained with 1000x the compute, over the course of 4 rather than 12 months, you could [run] 100x as many models in parallel.[9] You’d have 1.5 million researchers working for 15 months.’
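As a rough illustration of the arithmetic, here is a minimal sketch converting the quoted figures into person-years. The extrapolation to a 120-month pause is purely an assumption for illustration (the source post only gives per-month-of-lead equivalences), and `pause_months` is a hypothetical parameter:

```python
# Back-of-the-envelope conversion of the quoted figures into person-years.
# Assumption: the "per month of lead" equivalences are scaled linearly to a
# 10-year pause purely for illustration; the source post only states the
# per-month-of-lead figures.

def person_years(researchers: int, months: int) -> float:
    """Convert N researchers working for M months into person-years."""
    return researchers * months / 12

# Quoted scenario 1: each month of lead ~ 15,000 researchers for 15 months.
per_month_lead_small = person_years(15_000, 15)       # 18,750 person-years

# Quoted scenario 2 (1000x compute, 4-month run): 1.5 million researchers for 15 months.
per_month_lead_large = person_years(1_500_000, 15)    # 1,875,000 person-years

# Illustrative (assumed) linear extrapolation to a 10-year (120-month) pause.
pause_months = 120
print(per_month_lead_small * pause_months)   # ~2.25 million person-years
print(per_month_lead_large * pause_months)   # ~225 million person-years
```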
This would probably obsolete all previous AI safety R&D.
Of course, this assumes you’d be able to use automated AI safety R&D safely and productively. I’m relatively optimistic that a world willing to enforce a 10-year global pause would also invest enough in, e.g., a mix of control and superalignment to do this.