Agreed on all points except a couple of the less consequential ones, and even there I don’t actively disagree.
Strongest agreement: we’re underestimating the importance of governments for alignment and use/misuse. We haven’t fully updated from the inattentive world hypothesis. Governments will notice the importance of AGI before it’s developed, and will seize control. They don’t need to nationalize the corporations; they just need to embed a few people at the company and demand, on threat of imprisonment, that they’re kept involved in all consequential decisions about its use. I doubt they’d even need new laws, because the national security implications are enormous. But if they need new laws, they’ll create them as rapidly as necessary. Hopping borders will be difficult, and would just put a different government in control.
Strongest disagreement: I think it’s likely that zero breakthroughs are needed to add long-term planning capabilities to LLM-based systems, so long-term planning agents (I like the terminology) will arrive very soon and improve as LLMs continue to improve. I have specific reasons for thinking this. I could easily be wrong, but I’m pretty sure the rational stance is “maybe”. If that maybe comes true, it advances the timelines dramatically.
Also strongly agree on AGI as a relatively discontinuous improvement. I worry that this is glossed over in modern “AI safety” discussions, causing people to mistake controlling LLMs for aligning the AGIs we’ll create on top of them. AGI alignment requires different conceptual work.