Agreed.
In addition: I expect one of the counter-arguments to this would be “if these labs shut down, more will spring up in their place, and nothing would change”.
Potentially-hot take: I think that's actually a much lesser concern than it might seem.
The current major AGI labs are led by believers. My understanding is that quite a few (all?) of them bought into the initial LW-style AGI Risk concerns, and founded these labs as a galaxy-brained plan to prevent extinction and solve alignment. Crucially, they aimed to do that well before talk of AGI became mainstream. They did it back in the days when "AGI" was a taboo topic, the AI field having experienced one too many AI winters.
They also did that in defiance of profit-maximization gradients. Back in the 2010s, "AGI research" may have sounded like a fringe but tolerable research topic; it certainly wasn't something that would have invited much investor or market hype.
And inasmuch as humanity is still speeding towards AGI, I think that push is currently spearheaded mostly by believers, not by raw financial incentives or geopolitical races. (Yes, yes, LLMs are now all the hype, and I'm sure the military loves to put CNNs on their warheads' targeting systems, or whatever it is they do. But LLMs are not AGI.)
Outside the three major AGI labs, I'm reasonably confident no other major organization is following a solid roadmap to AGI; no one else woke up. A few LARPers, maybe, who'd utter "we're working on AGI" because that's trendy now. But nobody who has a gears-level model of the path there, and of what its endpoint entails.
So what would happen if OpenAI, DeepMind, and Anthropic shut down just now? I'm not confident, but I'd put decent odds on the vision of AGI going the way great startup ideas go. There wouldn't necessarily be anyone who'd step in to replace them. There'd be companies centered around scaling LLMs in the most brute-force manner possible, but I'm reasonably sure that's mostly safe.
The business world, left to its own devices, would eventually meander its way to developing AGI, yes. But the path it'd take there might end up incremental and circuitous, potentially taking a few decades more. Nothing like the current determined push.
… Or so goes my current strong-view-weakly-held.
The truth should be rewarded. Even if it's obvious. Every day this post is more blatantly correct.
I think this post was and remains important and spot-on. Especially this part, which is proving more clearly true (but still contested):