Are you planning to test this on reasoning models?
I agree. But people now write about short timelines so often that it seems worth recalling a possible reason for the uncertainty.
Doesn’t that seem like a reason to be optimistic about reasoning models?
There doesn’t seem to be a consensus that ASI will be created in the next 5-10 years. If it isn’t, the current technology leaders and their promises may be forgotten.
Does anyone else remember Ben Goertzel and Novamente? Or Hugo de Garis?
Yudkowsky may think that the plan ‘Avert all creation of superintelligence in the near and medium term — augment human intelligence’ has a <5% chance of success, but presumably he thinks your plan has a <<1% chance. Obviously, you and he disagree not only on conclusions, but also on models.
It seems that we are already at the GPT-4.5 level? Except that reasoning models have muddied the picture, and, as I understand it, an extra OOM of inference compute can have roughly the same effect as an extra OOM of training compute.
By the way, you’ve analyzed the scaling of pretraining a lot. But what about inference scaling? It seems that o3 has already used thousands of GPUs to solve tasks in ARC-AGI.
Thank you. Given the extreme uncertainty about the timing and impact of AGI, it’s nice to know at least something definite.
Can we assume that Gemini 2.0, GPT-4o, Claude 3.5, and other models with similar performance were trained with similar compute?
If we don’t build fast enough, authoritarian countries could win.
Ideally it would be something like the UN, but given the geopolitical complexities, that doesn’t seem very possible.
This sounds like a rejection of international coordination.
But the United States and the USSR coordinated on nuclear weapons, for example, despite geopolitical tensions. You can interact with countries you don’t like without trying to destroy the world faster than them!
2 years ago, you seemed quite optimistic about AGI Safety/Alignment and had a long timeline.
Have your views changed since then?
I understand that hiring will be necessary in any case.
Keeping people around as a commodity for acausal trade, or as pets, seems like a more likely option.
If only one innovation separates us from AGI, we’re fucked.
It seems that if OpenAI or Anthropic had agreed with you, they should have had even shorter timelines.
A short reading list which should be required before one has permission to opine. You can disagree, but step 1 is to at least make an effort to understand why some of the smartest people in the world (and 100% of the top 5 AI researchers — the group historically most skeptical about AI risk) think that we’re dancing on a volcano. [Flo suggests: There’s No Fire Alarm for Artificial General Intelligence, AGI Ruin: A List of Lethalities, Superintelligence by Nick Bostrom, and Superintelligence FAQ by Scott Alexander]
But Bostrom estimated the probability of extinction within a century at <20%. Scott Alexander estimated the risk from AI at 33%.
They may have changed their forecasts since then. But it seems strange to cite them as justification for confident doom.
I would expect that the absence of a global catastrophe for ~2 years after the creation of AGI would increase most people’s chances of survival, especially in a scenario where alignment turned out to be easy.
After all, there would then be time for political and popular action. We can expect something strange once politicians and their voters finally grasp the existential horror of the situation!
I don’t know. Attempts to ban all AI? The Butlerian jihad? Nationalization of AI companies? Revolutions and military coups? Everything seems possible.
If AI respects property rights, why shouldn’t it respect a right to UBI if such a law is passed? Rapid economic growth would make it possible to feed many.
In fact, a world in which someone shrugs their shoulders and allows 99% of the population to die seems obviously unsafe for the remaining 1%.
It’s possible that we won’t get something that deserves the name ASI or TAI until, for example, 2030.
And a lot can change in more than 5 years! The current panic seems excessive. We do not live in a world where all reasonable people expect artificial superintelligence to emerge in the next few years and humanity to go extinct soon after.
The situation is very worrying, and yes, this is the most likely cause of death for all of us in the coming years. But I don’t understand how anyone can be so sure of a bad outcome as to consider people’s survival a miracle.
Then what is the probability of extinction caused by AI?
Of course, capital is useful for exerting influence now. Though I would suggest that noticeably affecting events requires capital or power on a scale inaccessible to the vast majority of the population.
But can we end up in a world where the richest 1% or 0.1% survive and the rest die? Unlikely. Even if property rights were respected, such a world would have to descend into a mad hell.
Even a world in which only people like Sam Altman and their entourages survive the singularity seems more likely.
But the most likely outcomes seem to be either the extinction of everyone or the survival of almost everyone, without a strong correlation with current well-being. Am I mistaken?
Most experts do not believe that we are certainly (>80%) doomed. It would be an overreaction to give up after learning that politicians and CEOs are behaving like politicians and CEOs.
Thanks for the reply. I remembered a recent article by Evans and thought that reasoning models might behave differently. Sorry if this sounds silly.