On my picture, a key variable is the length of time between when we understand the basic shape of the approach that will get us to AGI and when that approach reaches strong superintelligence.
I don’t understand why you think the sort of capabilities research done by alignment-conscious people contributes to lengthening this time. In particular, what reason do you have to think they’re not advancing the second time point as much as the first? Could you spell that out more explicitly?