AGI companies merging within next 2-3 years inevitable?
There are currently about a dozen major AI companies racing towards AGI, with many more minor ones. Given how the technology is shaking out, this looks like an unstable equilibrium.
It seems by now inevitable that we will see further mergers and joint ventures; within two years there might be only two or three major players left. Scale is all-dominant. There is no magic sauce, no moat. OpenAI doesn't have algorithms that its competitors can't copy within 6-12 months. It's all about leveraging compute. Whatever innovations smaller companies make can be easily stolen by tech giants.
e.g. we might have xAI-Meta, Anthropic-DeepMind-SSI-Google, OpenAI-Microsoft-Apple.
Actually, although this would be deeply unpopular in EA circles, it wouldn't be all that surprising if Anthropic and OpenAI teamed up.
And—of course—a few years later we might only have two competitors: USA, China.
EDIT: the obvious thing to happen is that Nvidia realizes it can just build AI itself. If Taiwan is Dune and GPUs are the spice, then Nvidia is House Atreides.
Whatever innovations smaller companies make can be easily stolen by tech giants.
And those innovations, or their basic components, are probably also published by academia, though the precise hyperparameters etc. might still matter and be non-trivial or costly to find.
I have a similar feeling, but there are some forces in the opposite direction:
Nvidia seems to limit how many GPUs a single competitor can acquire.
Training frontier models becomes cheaper over time. Thus, those who build competitive models some time after the absolute frontier need to invest far fewer resources.
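To make this concrete, here is a rough sketch of how the cost of replicating a fixed frontier capability might fall over time. The halving times for algorithmic efficiency and hardware price-performance are illustrative assumptions, not sourced figures:

```python
# Sketch: cost to replicate a fixed frontier capability some months after
# the original run. Assumptions (illustrative, not sourced):
#   - algorithmic efficiency halves the compute needed every ~16 months
#   - hardware price-performance halves the cost of compute every ~24 months
def replication_cost(initial_cost_usd, months_later,
                     algo_halving_months=16, hw_halving_months=24):
    algo_factor = 0.5 ** (months_later / algo_halving_months)  # less compute needed
    hw_factor = 0.5 ** (months_later / hw_halving_months)      # cheaper compute
    return initial_cost_usd * algo_factor * hw_factor

# Under these assumptions, a $1B frontier run replicated two years later:
print(f"${replication_cost(1e9, 24) / 1e6:.0f}M")  # roughly $177M
```

Under these (made-up but trend-shaped) numbers, a follower two years behind pays well under a fifth of the original cost, which is the force pushing against consolidation.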
In 2-3 years they would need to decide on training systems to be built in 3-5 years, and by 2027-2029 the scale might reach $200-1000 billion for an individual training system. (This assumes geographically distributed training is solved, since such systems would need 5-35 gigawatts.)
Getting a go-ahead on $200 billion systems might require a level of success that also makes $1 trillion plausible. So instead of merging, they might either temporarily give up on further scaling (if there isn't sufficient success in 2-3 years) or become capable of financing such training systems individually, without pooling efforts.
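The gigawatt range quoted above can be sanity-checked with a back-of-envelope calculation. Every parameter here (accelerator share of system cost, cost per accelerator, power per accelerator) is an assumption chosen for illustration:

```python
# Back-of-envelope: power draw of a $200B-$1000B training system.
# Assumed (not sourced) parameters:
#   accel_share    - fraction of total cost spent on accelerators
#   cost_per_accel - all-in dollars per H100-class accelerator
#   kw_per_accel   - kilowatts per accelerator incl. cooling/overhead
def system_power_gw(total_cost_usd, accel_share=0.4,
                    cost_per_accel=30_000, kw_per_accel=1.5):
    n_accels = total_cost_usd * accel_share / cost_per_accel
    return n_accels * kw_per_accel / 1e6  # kW -> GW

for cost in (200e9, 1000e9):
    print(f"${cost / 1e9:.0f}B system -> ~{system_power_gw(cost):.0f} GW")
```

With these assumptions the two endpoints come out at roughly 4 GW and 20 GW, in the same ballpark as the 5-35 gigawatt range in the comment.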
For similar reasons, I think it's going to be very hard to stop China from having AGI within a couple of years of the US (and the fact that most relevant AI chips are currently produced in Taiwan probably increases this probability further). So taking on a lot more x-risk to race hard against China doesn't seem like a good strategy from this point of view.
They’ve already started…