Use massive test-time compute to improve the test-time algorithm.
If test-time scaling is currently bound only by computation, then OpenAI could potentially invest billions, or even tens of billions, of dollars in test-time compute for one very specific purpose: solving the test-time algorithm problem itself, e.g. improving CoT search. In other words, while improving efficiency matters, the algorithm only needs to be efficient enough to run on a huge cluster while disregarding the test-time cost; that extremely expensive but very capable intelligence can then be used to improve its own efficiency and cost.
Assuming o3-high can be scaled up by another 4 OOM of compute to reach ASI-level intelligence, OpenAI would just need to acquire 4 OOM more GPUs, roughly 100 million, and point them at algorithm research. However, if they manage to improve algorithmic efficiency by 2 OOM, which seems reasonably achievable within 2 years, they would need only about 1 million GPUs, roughly 25 to 50 billion dollars' worth.
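To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. The baseline GPU count, the 4-OOM compute gap, the 2-OOM efficiency gain, and the per-GPU cost range are all assumptions lifted from the rough figures above, not known quantities.

```python
# Back-of-the-envelope sketch of the scaling argument above.
# All numbers are assumptions from the text, not measured figures.

BASELINE_GPUS = 10_000           # assumed GPUs behind today's o3-high-scale runs
COMPUTE_GAP_OOM = 4              # assumed orders of magnitude of extra compute needed
EFFICIENCY_GAIN_OOM = 2          # assumed algorithmic efficiency improvement
COST_PER_GPU_USD = (25_000, 50_000)  # assumed all-in cost range per GPU

def gpus_needed(oom_gap: int, oom_efficiency: int = 0) -> int:
    """GPUs required after closing part of the compute gap with algorithmic gains."""
    return BASELINE_GPUS * 10 ** (oom_gap - oom_efficiency)

brute_force = gpus_needed(COMPUTE_GAP_OOM)                              # ~100,000,000 GPUs
with_better_algos = gpus_needed(COMPUTE_GAP_OOM, EFFICIENCY_GAIN_OOM)   # ~1,000,000 GPUs

low, high = (with_better_algos * c for c in COST_PER_GPU_USD)
print(f"Brute force: {brute_force:,} GPUs")
print(f"With a 2-OOM efficiency gain: {with_better_algos:,} GPUs "
      f"(~${low/1e9:.0f}B to ${high/1e9:.0f}B)")
```

Under these assumptions the script reproduces the numbers in the paragraph: about 100 million GPUs by brute force, or about 1 million GPUs (25 to 50 billion dollars) after a 2-OOM algorithmic improvement.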
This perspective echoes recent pronouncements from figures like Dario Amodei, CEO of Anthropic, who has spoken about labs needing millions of GPUs in the coming years, with a 2-3 year timeline to reach that breakthrough.