The clustering at 4-level remains suspicious and worth pondering.
It takes about 10K H100s a few months to train a 4-level model, after some months of tinkering, and after the datacenter is built. A100s are worse: you need more of them and training takes longer, so OpenAI had the lead by being the only one who tried. The value of 4-level models only became legible in March 2023, so the stragglers have only just had the opportunity to catch up.
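As a rough sanity check on that claim (my own back-of-envelope numbers, not from the post): assuming something like 1e15 FLOP/s of dense BF16 per H100 and ~40% utilization, 10K H100s over three months land near 4-level compute.

```python
# Back-of-envelope sketch (assumed numbers: ~1e15 FLOP/s dense BF16 peak
# per H100, ~40% utilization): compute from 10K H100s over ~3 months.
h100_bf16_peak = 1e15            # FLOP/s per GPU, dense BF16 (assumption)
utilization = 0.40               # assumed model FLOPs utilization
gpus = 10_000
seconds = 90 * 24 * 3600         # ~3 months of training

total_flops = h100_bf16_peak * utilization * gpus * seconds
print(f"{total_flops:.1e}")      # ~3e25 FLOPs, roughly 4-level scale
```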
So even if hardware-in-datacenters were magically abundant, only OpenAI would've been ready to take advantage of it far enough in the past for a trained model at the next level of scale to already be here. Google and Anthropic would only now be training their 5-level models, but those models wouldn't be out yet. Meta and xAI would only just be starting training, or preparing for it.
In reality OpenAI might’ve been delayed with a 5-level model by lack of hardware, even if that just meant waiting for a datacenter to get built and there were no relevant shortages, while the rest might be training on schedule (Google and Anthropic well in progress, xAI and Meta finishing up final preparations). The scale for Anthropic/xAI/Meta might be lower for now, so they might need to train over more months to get new capabilities out of it. But OpenAI might have their 100K training H100s already, and Google has the TPUs and possibly the resolve to work on distributed training across multiple datacenter campuses.
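To put a number on what 100K training H100s could buy (same rough model, assumptions mine): over about four months, with FP8 roughly doubling throughput over BF16, such a cluster yields high-1e26s of FLOPs.

```python
# Same rough model, now for a 100K-H100 cluster (assumptions mine:
# ~4 months, ~40% utilization, FP8 giving ~2x throughput over BF16).
effective_per_gpu = 1e15 * 0.40      # FLOP/s after utilization (assumption)
gpus = 100_000
seconds = 120 * 24 * 3600            # ~4 months of training
fp8_speedup = 2                      # assumed gain from FP8 over BF16

total_flops = effective_per_gpu * gpus * seconds * fp8_speedup
print(f"{total_flops:.1e}")          # ~8e26 FLOPs, inside the 5e26-1e27 range below
```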
Thus I expect models at the next level of scale (5e26-1e27 FLOPs) to be out in early 2025, possibly late 2024, first from OpenAI and Google, possibly also Anthropic, and then xAI and Meta (mid-2025). Musk promises a Grok-3 in a few months, but I don’t think that much scale can get into it in time: it could get maybe 6x more FLOPs than Grok-2, if the latter was trained in BF16 and they transition to FP8 while training 3 times longer.
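For the 6x figure, the decomposition (under the stated assumptions) is just the product of the two factors:

```python
# 6x decomposition under the stated assumptions: Grok-2 trained in BF16,
# Grok-3 in FP8 (~2x throughput), trained ~3x longer on the same hardware.
fp8_vs_bf16 = 2
longer_training = 3
print(fp8_vs_bf16 * longer_training)   # 6x the FLOPs of Grok-2
```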