A week ago I recorded a prediction on AI timelines after reading a Vox article on GPT-3. In general, my probability mass is much more spread out in time than the LessWrong community's. I also weigh outside-view considerations more heavily than detailed inside-view information. For example, a main input to my prediction is the heuristic "with 50% probability, things will last twice as long as they already have," with a starting time of 1956, the year of the Dartmouth College summer AI conference.

If AGI will definitely happen eventually, then the heuristic gives us [21.3, 64, 192] years at the [25th, 50th, 75th] percentiles for AGI to occur. AGI may never happen, but the chance of that is small enough that adjusting for it here would not make a big difference (I put ~10% on AGI not happening for 500 years or more, and the distribution already matches that fairly well).
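Those percentiles are consistent with Gott's delta-t formula, where the chance a process continues more than x additional years is t_past / (t_past + x). A quick check, assuming "now" is 2020 so that 64 years have elapsed since Dartmouth:

```python
# Gott's delta-t rule: observed at a random point in a process's lifetime,
# P(it lasts more than x additional years) = t_past / (t_past + x).
# Inverting, the q-th percentile of remaining time is t_past * q / (1 - q).

t_past = 2020 - 1956  # 64 years since the Dartmouth conference

for q in (0.25, 0.50, 0.75):
    remaining = t_past * q / (1 - q)
    print(f"{q:.0%}: {remaining:.1f} more years")  # -> 21.3, 64.0, 192.0
```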
A more inside-view consideration is: what happens if the current machine learning paradigm scales to AGI? Given that assumption, a 50% confidence interval might be [2028, 2045] (dating the current burst of machine learning research to 2012-2013), which is more in line with the LessWrong predictions and the Metaculus community prediction. Taking the super-outside-view consideration and the outside-view-ish consideration together, I get the prediction I made a week ago.
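A minimal sketch of how the two considerations might be combined, assuming a simple 50/50 mixture and a lognormal stand-in for the inside-view interval (the weights and the lognormal are my own illustrative choices, not the author's stated method):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Outside view: Gott-style remaining time with t_past = 64 years.
# If U ~ Uniform(0, 1), then t_past * U / (1 - U) has the required
# survival function t_past / (t_past + x).
u = rng.uniform(size=n)
outside = 2020 + 64 * u / (1 - u)

# Inside view ("current paradigm scales"): a lognormal whose quartiles
# land near the stated [2028, 2045] -- an illustrative stand-in.
inside = 2020 + rng.lognormal(mean=np.log(15), sigma=0.8, size=n)

# Equal-weight mixture of the two considerations (weights assumed).
mix = np.where(rng.uniform(size=n) < 0.5, outside, inside)
print(np.percentile(mix, [25, 50, 75]).round(1))
```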
I adapted my prediction to the timeline of this post [1] and compared it with some other commenters' predictions [2].
A counter-hypothesis I've heard (not original to me) is: with 50% probability, we are half-way through the AI researcher-years required to get AGI.
I think this suggests much shorter timelines, since most AI researcher-years have accumulated in the last ~10 years; if the field keeps growing exponentially, the second half of the required effort arrives much faster than the first. A rough calculation below makes this concrete.
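Running the same Gott-style inversion on researcher-years instead of calendar years, with an assumed doubling time for the field (purely illustrative, not the actual growth figures):

```python
import numpy as np

# Gott's rule on researcher-years: the q-th percentile of *remaining*
# effort is past_effort * q / (1 - q), i.e. cumulative effort multiplies
# by 1 / (1 - q). With head count doubling every d years, cumulative
# effort also roughly doubles every d years, so reaching that point
# takes log2(1 / (1 - q)) * d calendar years.
d = 5.0  # assumed doubling time of the field, in years (illustrative)

for q in (0.25, 0.50, 0.75):
    years = np.log2(1 / (1 - q)) * d
    print(f"{q:.0%}: ~{years:.1f} more calendar years")  # ~2.1, 5.0, 10.0
```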
It's not clear to me what reference class makes sense here, though. Like, I feel like 50% doesn't make any sense: it implies that for every outstanding AI problem we're fifty percent there. We're 50% of the way to a rat brain, to a human emulation, to a vastly superintelligent AGI, etc. Being "done" is not a clearly natural category for a field, and it's not clear which achievement counts as "done" in this particular field.
I was looking at the NIPS growth numbers last June and I made a joke:
AI researcher anthropics: ‘researchers [should] tend to think AI is ~20 years away because given exponential growth of researchers & careers of ~30 years, the final generation of researchers will make up a majority of all researchers, hence, by SSA+Outside View, one must assume 20 years.’
(Of course, I’m making a rather carbon-chauvinistic assumption here that it’s only human researchers/researcher-years which matter.)
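The arithmetic behind the joke, with a made-up doubling time standing in for the actual NIPS growth figures:

```python
import numpy as np

# With head count N(t) = N0 * exp(g * t), the share of all researcher-years
# to date contributed during the last T years of a field of age A is
# (exp(g*A) - exp(g*(A-T))) / (exp(g*A) - 1).
doubling_years = 5.0               # assumed, not the actual NIPS figures
g = np.log(2) / doubling_years
A = 64.0                           # field age: 1956 to 2020

for T in (10, 20, 30):
    share = (np.exp(g * A) - np.exp(g * (A - T))) / (np.exp(g * A) - 1)
    print(f"last {T} years: {share:.0%} of all researcher-years so far")
```

Under these assumed numbers the last 30 years (one career) account for ~98% of all researcher-years, so the "final generation is a majority" premise holds with plenty of room to spare.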
Your prediction has the interesting property that (starting in 2021) you assign more probability to the next n seconds/years than to any subsequent period of n seconds/years, for every n.
Specifically, I think your distribution assigns too much probability to AGI in the immediate next three months / year / five years; I feel like we have a bunch of information that points away from such short timelines. If one takes that into account, one might end up with a distribution with a bump: the location of the bump is debatable (it could be estimated by using Daniel Kokotajlo's answer as the "earliest plausible AGI"), and the decay afterwards follows Laplace's rule.
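A crude sketch of the shape I have in mind, where the bump location and the decay seeding are both placeholders:

```python
import numpy as np

now, bump_year, horizon = 2021, 2030, 2221  # bump location is debatable
n0 = 10  # prior "failure" years seeding Laplace's rule (assumed)

years = np.arange(now, horizon)
p = np.zeros(len(years))

# After the bump: Laplace's rule of succession seeded with n0 prior years,
# P(AGI k years past the bump | not yet) = 1 / (n0 + k + 2), giving
# per-year mass proportional to 1 / ((n0 + k + 1) * (n0 + k + 2)).
tail = years >= bump_year
k = years[tail] - bump_year
p[tail] = 1.0 / ((n0 + k + 1) * (n0 + k + 2))

# Before the bump: a simple ramp up to the bump's height.
p[~tail] = np.linspace(0.05, 1.0, (~tail).sum()) * p[tail][0]

p /= p.sum()  # normalize over the horizon
cdf = np.cumsum(p)
for q in (0.25, 0.50, 0.75):
    print(f"{q:.0%} percentile: year {years[np.searchsorted(cdf, q)]}")
```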
https://elicit.ought.org/builder/J2lLvPmAY

A quick estimate, mostly influenced by the "outside view" and other commenters (of whom I found tim_dettmer's reasoning the most convincing).
This links to a uniform distribution; I'm guessing you didn't mean that! To link to your own distribution, take a snapshot of it, then copy the snapshot URL (it appears as a timestamp at the bottom of the page) and link that.