My impression is that Dario (somewhat intentionally?) plays the game of saying things he believes to be true about the 5-10 years after AGI, conditional on AI development not continuing past that point.
What happens after those 5-10 years, or if AI gets even vastly smarter? That seems out of scope for the article. I assume he’s doing that since he wants to influence a specific set of people, maybe politicians, to take a radical future more seriously than they currently do. Once a radical future is more viscerally clear in a few years, we will likely see even more radical essays.
It’s tricky to pin down from this what he believes at a gut level versus what he thinks it expedient to publish.
Consider this passage and its footnote:
Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute).[10] The key question is how fast it all happens and in what order.
[10] Another factor is of course that powerful AI itself can potentially be used to create even more powerful AI. My assumption is that this might (in fact, probably will) occur, but that its effect will be smaller than you might imagine, precisely because of the “decreasing marginal returns to intelligence” discussed here. In other words, AI will continue to get smarter quickly, but its effect will eventually be limited by non-intelligence factors, and analyzing those is what matters most to the speed of scientific progress outside AI.
The two implied assumptions I’d note as relevant to this:
AI will only get a bit smarter (2-3x) than the smartest human, not a lot smarter (100x).
Algorithmic advances won’t make it vastly cheaper to train AI.
Datacenters with oversight and compute governance, control of AGI by a small number of responsible parties, defense-dominant technology outcomes: this is an imagined future without radical changes in world governments, but with everything staying neat, tidy, and controlled.