Some comments:
This links to a poll of Lex Fridman’s Twitter followers, which doesn’t seem like a representative sample of the US population.
“they jointly support a greater than 10% likelihood that we will develop broadly human-level AI systems within the next decade.”

Is this what you’re arguing for when you say “short AI timelines”? I think that’s a fairly common view among people who think about AI timelines.
“AI is starting to be used to accelerate AI research.”

My sense is that Copilot is by far the most important example here.
“I imagine visiting alien civilizations much like earth, and I try to reason from just one piece of evidence at a time about how long that planet has.”

I find this part really confusing. Is “much like earth” supposed to mean “basically the same as earth”? In that case, why not just present each piece of evidence normally, without setting up an “alien civilization” hypothetical? For example, the “sparks of AGI” paper provides very little evidence for short timelines on its own, because all we know is the capabilities of a particular system, not how long it took to get to that point or whether that progress might continue.
“The first two graphs show the overall number of college degrees and the number of STEM degrees conferred from 2011 to 2021”

Per year, or cumulative? Seems like it’s per year.
“If you think one should put less than 20% of their timeline thinking weight on recent progress”

Can you clarify what you mean by this?
Overall, I think this post provides evidence that short AI timelines are possible, but doesn’t provide strong evidence that short AI timelines are probable. Here are some posts that provide more arguments for the latter point:
Two-year update on my personal AI timelines
Fun with +12 OOMs of Compute
AI Timelines via Cumulative Optimization Power: Less Long, More Short
Why I think strong general AI is coming soon
What a compute-centric framework says about AI takeoff speeds—draft report
Disagreement with bio anchors that lead to shorter timelines
“AGI Timelines” section of PAIS #2
I agree that Lex’s audience is not representative. I also think this poll has the biggest sample size of any poll on the topic that I’ve seen, by at least 1 OOM, which counts for a fair amount. Perhaps my wording was wrong.
I think what is implied by the first half of the Anthropic quote is much more than 10% on AGI in the next decade. I included the second part to avoid quoting too selectively. It seems to me that saying >10% is mainly a PR-style move to avoid seeming too weird or confident; after all, it is compatible with both 15% and 90%. When I read the first part of the quote, I think something like “25% on AGI in the next 5 years, and 50% in 10 years,” but this is not what they said, and I’m going to respect their desire to write vague words.
“I find this part really confusing”

Sorry. This framing was useful for me and I hoped it would help others, but maybe not. I probably disagree about how strong the evidence from the existence of “sparks of AGI” is. What I am aiming for here is something like: imagine the set of possible worlds that look a fair amount like earth, then condition on the worlds that have a “sparks of AGI” paper, and ask how much longer those worlds have until AGI. I think that, even without knowing much else about these worlds, they don’t have very many years left.
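To make the conditioning step concrete, here is a minimal Monte Carlo sketch in Python. Every number in it (the uniform prior over years to AGI and the 0.9/0.1 likelihoods of a sparks-style paper appearing) is a made-up placeholder for illustration, not an estimate from the post or this comment:

```python
import random

# Toy sketch only: all numbers are made-up placeholders, not estimates
# from the post or this comment.
random.seed(0)

def sample_world():
    """Sample one hypothetical earth-like world."""
    # Placeholder prior: years until this world builds AGI, uniform over 1-80.
    years_to_agi = random.uniform(1, 80)
    # Placeholder likelihood: a "sparks of AGI"-style paper is assumed much
    # more likely to already exist in worlds that are close to AGI.
    p_sparks = 0.9 if years_to_agi < 15 else 0.1
    has_sparks_paper = random.random() < p_sparks
    return years_to_agi, has_sparks_paper

worlds = [sample_world() for _ in range(100_000)]

# Condition on the evidence we actually observe: a sparks-style paper exists.
conditioned = [years for years, has_paper in worlds if has_paper]

print("Mean years to AGI, unconditioned: "
      f"{sum(y for y, _ in worlds) / len(worlds):.1f}")
print("Mean years to AGI, given a sparks-style paper: "
      f"{sum(conditioned) / len(conditioned):.1f}")
```

The point of the sketch is just that conditioning on the paper’s existence shifts the distribution toward worlds with fewer years remaining; how much it shifts depends entirely on the likelihoods you plug in.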
Yep, the graph is per year; I’ve updated my wording to be clearer. Thanks.
“Can you clarify what you mean by this?”

When I think about when we will see AGI, I try to use a variety of models weighted by how good and useful they seem. I believe that, when doing this, at least 20% of the total weight should come from models/forecasts that are based substantially on extrapolating from recent ML progress. This recent literature review is a good example of how one might use such weightings.
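As a toy illustration of what that weighting looks like in practice, here is a short Python sketch. The model names, weights, and per-model probabilities are placeholders I made up, not my actual numbers or the review’s:

```python
# Toy sketch of combining timeline models as a weighted mixture.
# Model names, weights, and probabilities are placeholders, not the numbers
# from this comment or from the linked literature review.
forecasts = {
    # model name: (weight, P(AGI within 10 years) under that model)
    "extrapolation from recent ML progress": (0.30, 0.50),
    "bio-anchors-style compute model": (0.45, 0.20),
    "outside-view reference classes": (0.25, 0.10),
}

total_weight = sum(weight for weight, _ in forecasts.values())
p_agi_10y = sum(weight * p for weight, p in forecasts.values()) / total_weight
print(f"Mixture P(AGI within 10 years) = {p_agi_10y:.2f}")
```

In this framing, my claim is just that the first entry should carry at least 0.20 of the total weight; the other weights and all the per-model probabilities are up to the forecaster.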
Thanks for all the links!