More generally, Dario appears to assume that for 5-10 years after powerful AI we'll just have a million AIs which are a bit smarter than the smartest humans and perhaps 100x faster, rather than AIs which are radically smarter, faster, and more numerous than humans. I don't see any argument that AI progress will stop at the point of top humans rather than continuing much further.
Well, there’s footnote 10:
Another factor is of course that powerful AI itself can potentially be used to create even more powerful AI. My assumption is that this might (in fact, probably will) occur, but that its effect will be smaller than you might imagine, precisely because of the “decreasing marginal returns to intelligence” discussed here. In other words, AI will continue to get smarter quickly, but its effect will eventually be limited by non-intelligence factors, and analyzing those is what matters most to the speed of scientific progress outside AI.
So his view seems to be that even significantly smarter AIs just wouldn't be able to accomplish much more than what he's discussing here, such that they're not very relevant.
(I disagree. Maybe there are some hard limits here, but maybe there aren't. For most of the bottlenecks that Dario discusses, I don't know how you become confident that there are zero ways to speed them up or circumvent them. We're talking about putting in many times more intellectual labor than our whole civilization has spent on any topic to date.)