Outside the three major AGI labs, I’m reasonably confident no major organization is following a solid roadmap to AGI; no-one else woke up. A few LARPers, maybe, who’d utter “we’re working on AGI” because that’s trendy now. But nobody who has a gears-level model of the path there, and what its endpoint entails.
This seems pretty false. In terms of large players, there are also Meta and Inflection AI. There are also many smaller players who care about AGI, and no doubt many AGI-motivated workers at the three labs mentioned would start their own orgs if the org they’re currently working under shuts down.
Inflection’s claim to fame is having tons of compute and promising to “train models that are 10 times larger than the cutting edge GPT-4 and then 100 times larger than GPT-4”, plus the leader talking about “the containment problem” in a way that kind-of palpably misses the point. So far, they seem to be precisely the sort of “just scale LLMs” vision-less actor I’m not particularly concerned about. I could be proven wrong any day now, but so far they don’t really seem to be doing anything interesting.
As to Meta – what’s the last original invention they made? Last I checked, they couldn’t even match GPT-4, with all of Meta’s resources. Yann LeCun has thoughts on AGI, but it doesn’t look like he’s being allowed to freely and efficiently pursue them. That seems to be what a vision-less major corporation investing in AI looks like. Pretty unimpressive.
As for current AGI labs’ researchers metastasizing across the ecosystem and potentially founding new labs if the current ones are shut down – I agree that it may be a problem, but I don’t think they’d necessarily coalesce into more AGI labs by default. Some of them have research skills but no leadership/management skills, for example. So while they’d advance towards AGI when embedded in a company with this vision, they wouldn’t independently start one up if left to their own devices, nor embed themselves into a different project and hijack it towards AGI-pursuit. And whoever among them does manage that would be unlikely to coalesce into a single new organization, meaning the smattering of new orgs would still advance more slowly collectively, and each might have more trouble getting millions or billions in funding unless the leadership are also decent negotiators.
meaning the smattering of new orgs would still advance more slowly collectively, and each might have more trouble getting millions or billions in funding unless the leadership are also decent negotiators
This seems to contradict history. The break-up of Standard Oil, for example, led to innovations in oil drilling. Also, you are seriously overestimating how hard it is to get funding. Much stupider and more poorly run companies have gotten billions in funding. And in the worst case, these leaders can just hire negotiators.
Presumably these innovations were immediately profitable. I’m not sure that moves towards architectures closer to AGI (as opposed to myopic/greedy-search moves towards incrementally-more-capable models) are immediately profitable. That will become increasingly true as we inch closer to AGI, but it definitely wasn’t true back in the 2010s, and it may not be true yet.
So I’m sure some of them would intend to try innovations that’d inch closer to AGI, but I expect them not to be differentially more rewarded by the market. Meaning that unless one of these AGI-focused entrepreneurs is also really good at selling their pitch to investors (or has the right friends, or enough money and competence-recognition ability to get a co-founder skilled at making such pitches), they’d be about as well-positioned to rush to AGI as some of the minor AI labs today are. Which is to say, not all that well-positioned at all.
you are seriously overestimating how hard it is to get funding
You may not be taking into account the market situation immediately after the major AI labs’ hypothetical implosion. It’d be flooded with newly unemployed ML researchers trying to found new AI startups; investor appetite for that might well end up saturated (especially if the major labs’ shutdown cools the hype down somewhat). And then it’s a question of which ideas are differentially more likely to get funded; and, as per the above, I’m not sure it’s the AGI-focused ones.
Presumably these innovations were immediately profitable.
That’s not always the case. It can take time to scale up an innovation, but investors will still fund it if it’s plausibly profitable down the line. AGI is no longer a secret belief; several venture capitalists and rich people believe in it, and these people understand long-term profit horizons. Uber took over 10 years to become profitable. Many startups haven’t been profitable yet.
Also, a major lab shutting down for safety reasons is like broadcasting to all world governments that AGI is feasible and powerful/dangerous.