To be clear, I don’t yet believe that the rumors are true, or that if they are, that they matter.
We will have to wait until 2026-2027 to get real evidence on large training run progress.
TBC, I don’t think it will slow progress all that much; there are other routes to improvement. I guess I didn’t express the biggest reason this shifts my p(doom) a little: it’s a slower takeoff, giving more time for the reality of the situation to sink in before we have takeover-capable AGI. I think we’ll still hit near-human LLM agents on schedule (1-2 years) by scaffolding next-gen LLMs boosted with o1-style training.
I’m really hoping that the autonomy of these systems will impact people emotionally, creating more and better policy thinking and alignment work on those types of AGIs. I think the rate of approach to AGI is more important than the absolute timelines; we’ll see ten times the work on really relevant policy and alignment once we see compelling evidence of the type of AGI that will be transformative and dangerous.
I’ve heard enough credible-sounding rumors to put >50% on their being true. This is partly because this result fits my theory of why LLMs work so well: while they are predictors, what they’re learning from human text is mostly to copy human intelligence. Moving past that will be slower.
Do you mean we’re waiting till 2026/27 for results of the next scaleup? If this round (GPT-5, Claude 4, Gemini 2.0) shows diminishing returns, wouldn’t we expect that the next will too?
To answer this specific question:
Yes, assuming Claude 4/Gemini 2.0/GPT-5 either aren’t released or are disappointing in 2025-2026, that is definitely evidence that things are slowing down.
It wouldn’t conclusively disprove continued progress, but it would make it look shakier.
Agree with the rest of the comment.