Hi @Akash, here are my current best guesses on the arrival and nature of superintelligence:
Timeframe: probably around the end of the decade, 2028-2032, but possibly even earlier. In the unlikely but not impossible case that AGI has not been achieved by 2035, timelines lengthen again.
Players will be the usual suspects: OpenAI, Anthropic, DeepMind, xAI, Meta, Baidu, etc. Compute spend will be in the hundreds of billions, maybe more. The number of players will be exceedingly small; likely some or all of the labs will join forces.
Likely there will be soft nationalization of AGI under the US government, under pressure from AI safety concerns, national security, and the economics of giant GPU clusters.
China will have joined the race to AGI, and will likely build AGI systems not too long after the US (conditional on no Crazy Things happening in the interim).
There will likely be various international agreements that will be ignored to varying degrees.
AGI will be similar in some respects to current-day LLMs in having a pre-training phase on large internet data, but will importantly differ in being a continually 'open-loop' trained RL agent on top of an LLM chassis. The key will be doing efficient long-horizon RL on thoughts on top of current-day pretrained transformers.
An important difference between current-day LLMs and future superintelligent AGI is that reactions to prompts (like solve cancer, the Riemann hypothesis, self-replicating drones) can take many hours, weeks, or months. So directing the AGI in which questions to think about will plausibly be very important.
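To make the "long-horizon RL on thoughts on top of a pretrained transformer" point a bit more concrete, here is a deliberately minimal sketch of what such an outer loop could look like. Everything in it (the `reward` verifier, the success marker, the tiny stand-in base model) is my own illustrative assumption, not a description of what any lab actually does:

```python
# Toy sketch: an outer RL loop that samples long chains of thought from a
# pretrained LM and reinforces the ones that lead to a verifiable success.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # small stand-in for a frontier pretrained transformer (assumption)
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL)
opt = torch.optim.AdamW(lm.parameters(), lr=1e-6)

def reward(task: str, thoughts_and_answer: str) -> float:
    """Hypothetical verifier: 1.0 if the final answer checks out, else 0.0."""
    return float("ANSWER_OK" in thoughts_and_answer)  # placeholder success check

def rollout(task: str, max_new_tokens: int = 512):
    """Sample one long chain of thought plus answer for a task."""
    ids = tok(task, return_tensors="pt").input_ids
    out = lm.generate(ids, max_new_tokens=max_new_tokens, do_sample=True,
                      pad_token_id=tok.eos_token_id)
    return out, tok.decode(out[0][ids.shape[1]:])

def reinforce_step(task: str) -> float:
    """One REINFORCE-style update: upweight token sequences that earned reward."""
    out, text = rollout(task)
    r = reward(task, text)
    if r == 0.0:
        return r  # crudest possible baseline: only learn from successful rollouts
    logprobs = torch.log_softmax(lm(out).logits[:, :-1], dim=-1)
    token_lp = logprobs.gather(-1, out[:, 1:, None]).squeeze(-1)
    # (for brevity this also reinforces prompt tokens; a real loop would mask them)
    loss = -(r * token_lp.sum())
    opt.zero_grad(); loss.backward(); opt.step()
    return r

# The "continually trained" part would just keep looping over an incoming task stream:
# for task in task_stream(): reinforce_step(task)
```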
______________________
After AGI is achieved (in the US), the future seems murky (to me). It seems hard to forecast exactly how the public and decision-making elites will react.
Likely the US government will try to control its use to a significant degree. It's very plausible that the best AI will not be fully accessible to the public.
National security interests likely to become increasingly important.
A lot of compute is likely to go into making AGI think about health & medicine.
Other countries (chief among them China) will want to build their own.
Very large and powerful language models will still be available to the public.
Yeah, one of OpenAI's goals is to make AI models think for a very long time and get better answers the more they think, limited only by available computing power, so this will almost certainly be an important subgoal of this:
An important difference between current-day LLMs and future superintelligent AGI is that reactions to prompts (like solve cancer, the Riemann hypothesis, self-replicating drones) can take many hours, weeks, or months.
And o1, while not totally like the system proposed, is a big conceptual advance toward making this a reality.
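For concreteness, here is a toy sketch of the "better answers the more they think, limited only by compute" idea, written as plain best-of-n sampling under a thinking budget. The `score` verifier and the budget parameters are purely my own illustrative assumptions, not OpenAI's actual method:

```python
# Toy sketch: spend a larger token/sample budget and keep the candidate
# a (hypothetical) verifier scores highest; more compute -> more chances at a good answer.
import math
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # small stand-in for a reasoning model (assumption)
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL)

def score(question: str, candidate: str) -> float:
    """Hypothetical verifier/reward model; here just a placeholder heuristic."""
    return -abs(len(candidate.split()) - 50)

def answer(question: str, think_budget_tokens: int = 2048, n_samples: int = 8) -> str:
    """More budget means longer chains of thought and more candidates to pick from."""
    ids = tok(question, return_tensors="pt").input_ids
    per_sample = think_budget_tokens // n_samples
    best, best_s = "", -math.inf
    for _ in range(n_samples):
        out = lm.generate(ids, max_new_tokens=per_sample, do_sample=True,
                          pad_token_id=tok.eos_token_id)
        cand = tok.decode(out[0][ids.shape[1]:])
        s = score(question, cand)
        if s > best_s:
            best, best_s = cand, s
    return best

# answer("Prove the Riemann hypothesis.", think_budget_tokens=8192, n_samples=32)
```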
Just pointing out that you should switch to the LessWrong Docs editor when trying to call someone out by tagging them.
I’ll do that right now:
@Akash
Response to Alexander Oldenziel’s models?