I have a somewhat lower probability for near-term AGI than many people here do. I model my biggest disagreement as being about how much work is required to move from high-cost, impressive demos to real economic performance. I also have an intuition that it is really hard to automate everything, and that progress will be bottlenecked by the tasks that are essential but very hard to automate.
I have less probability now on very long timelines (>80 years). Previously I had 39% credence on AGI arriving after 2100, but I now only have about 25% credence.
I also have a bit more credence on short timelines, mostly because I think the potential for massive investment is real, and it doesn’t seem implausible that we could spend >1% of our GDP on AI development at some point in the near future.
I still have pretty much the same reasons for having longer timelines than other people here, though my thinking has become more refined. Here are my biggest reasons, summarized: delays from regulation, the difficulty of making AI reliable, the very high bar of automating general physical labor and management, and the fact that previous impressive-seeming AI milestones ended up mattering much less in hindsight than we thought at the time.
Taking these considerations together, my new median is around 2060. My mode is still probably in the 2040s, perhaps 2042.
I want to note that I’m quite impressed with recent AI demos, and I think that we are making quite rapid progress at the moment in the field. My longish timelines are mostly a result of the possibility of delays, which I think are non-trivial.
If AGI is taken to mean the first year in which there is radical economic, technological, or scientific progress, then these are my AGI timelines:
My percentiles
5th: 2029-09-09
25th: 2049-01-17
50th: 2079-01-24
75th: above 2100-01-01
95th: above 2100-01-01
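As a rough sketch, these percentiles can be turned into an approximate cumulative distribution by linear interpolation between the stated points (the interpolation is a simplifying assumption of mine, and the 75th and 95th percentiles past 2100 are left out):

```python
from datetime import date

# My stated percentiles (the 75th and 95th fall past 2100 and are omitted;
# linear interpolation between points is a simplifying assumption).
percentiles = [
    (0.05, date(2029, 9, 9)),
    (0.25, date(2049, 1, 17)),
    (0.50, date(2079, 1, 24)),
]

def to_year(d):
    """Convert a date to a fractional year."""
    return d.year + (d - date(d.year, 1, 1)).days / 365.25

def cdf(year):
    """Approximate P(AGI arrives before `year`) by linear interpolation."""
    pts = [(to_year(d), p) for p, d in percentiles]
    if year <= pts[0][0]:
        return pts[0][1]
    for (x0, p0), (x1, p1) in zip(pts, pts[1:]):
        if year <= x1:
            return p0 + (p1 - p0) * (year - x0) / (x1 - x0)
    return pts[-1][1]

# e.g. rough probability of AGI before 2060 under these assumptions
print(f"{cdf(2060):.2f}")
```

This is only a reading aid for the numbers above, not part of my forecasting method.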
Some updates:
I now have an operationalization of AGI I feel happy about, and I think it’s roughly just as difficult as creating transformative AI (though perhaps still slightly easier).