Without access to hardware, further scaling will be a problem. GPT-4 level models don’t need that much hardware, but that changes when you scale by another 30x or 1,000x. Fabs take a long time to get into production even when the tools they need are available. And even with whatever fabs there are, you still need competitive chip designs. A few years is forever in AI time.
I think Leopold addresses this, but the point is that only ~5% of our compute will be used to make a hypothetical AGI, while China can direct 100% of their compute. They can make up for lower quality with quantity, and they also happen to have far more energy than us, which is probably the more salient variable in the AGI equation.
Also, I’m of the opinion that the GPU bans are largely symbolic. There is little incentive to respect them, especially once China realizes the stakes are higher than they currently seem. In fact, they are largely symbolic already.
That shows the opposite. Purchases of 1 or 6 A100s, in an era where the SOTA is going to take 100,000+ B100s, 2 generations later, are totally irrelevant and downright symbolic.
I mean, are you sure Singapore’s sudden large increase in GPU purchases is organic? The GPU bans have very obviously not stopped Chinese AI progress, so I think we should build our conclusions starting from that observation rather than working backwards from the assumption that the bans are effective.
I also think US GPU superiority is short-lived. China can skip engineering milestones we’ve had to pass, exploit the fact that they have far more energy than us, skip the general-purpose computing/gaming tech debt that may exist in current GPUs, etc.
EDIT: This is selective rationalism. If you looked for any evidence on this issue, it would become extremely obvious that Singapore’s orders of H100s magically increased by orders of magnitude after H100 exports to China were banned.
Just want to register that I agree that, regardless of US GPU superiority right now, the US lead in AI is pretty small and shrinking. Yi-Large beats a bunch of GPT-4 versions on lmsys, even in English; it scores just above models like Gemini. Chinese open-source releases like DeepSeek-V2 look roughly Llama 3 70B level. And so on and so forth.
Maybe whatever OpenAI is training now will destroy whatever China has and establish OpenAI as firmly in the lead… or maybe not. Yi says they’re training their next model as well, so it isn’t like they’ve stopped doing things.
I think some chunk of the “China is so far behind” belief is fueled by the desire to be able to stop US labs without simply letting China catch up; but letting China catch up is exactly what stopping US labs would actually do.