Thanks for this post. I agree with you that AI macrostrategy is extremely important and relatively neglected.
However, I’m having some trouble understanding your specific world model. Most concretely: can you link to or explain what your definition of “AGI” is?
Overall, I expect alignment outcomes to be significantly if not primarily determined by the quality of the “last mile” work done by the first AGI developer and other actors in close cooperation with them in the ~2 years prior to the development of AGI.
This makes me think that in your world model, there is most likely “one AGI” and a “last mile,” rather than general continuous improvement. That reads to me as basically a claim about very fast takeoff speeds; otherwise, I would expect multiple groups with access to AGIs with different strengths and weaknesses, relatively slower and more continuous improvement in their capabilities, and so on.
My takeoff speeds are on the somewhat faster end: probably ~a year or two from “we basically don’t have crazy systems” to “AI (or whoever controls AI) controls the world.”
EDIT: After further reflection, I no longer endorse this. I would now put a 90% CI at 6 months to 15 years, with a median around 3.5 years. I still think fast takeoff is plausible, but I now think a pretty slow takeoff is also plausible and overall more likely.
Got it. To avoid derailing with this object-level question, I’ll just say that I think it’s helpful to be explicit about takeoff speeds in macrostrategy discussions, and ideally to specify how different strategies perform over a distribution of takeoff speeds.