It surveyed 2,778 AI researchers who had published peer-reviewed research in the prior year in six top AI venues (NeurIPS, ICML, ICLR, AAAI, IJCAI, JMLR); the median estimate for a 50% chance of AGI was either 23 or 92 years away, depending on how the question was phrased.
Doesn’t that discrepancy (how much answers vary between different ways of asking the question) tell you that the median AI researcher who published at these conferences hasn’t thought about this question sufficiently and/or sanely?
It seems irresponsible to me to update even a little bit toward the specific reference class of which your statement above is true.
If you take people who follow progress closely and have thought more and longer about AGI as a research target specifically, my sense is that the ones who have longer timeline medians tend to say more like 10-20y rather than 23y+. (At the same time, there’s probably a bubble effect in who I follow or talk to, so I can get behind maybe lengthening that range a bit.)
Doing my own reasoning, here are the considerations that I weigh heavily:
we’re within the human range of most skill types already (which is where many of us would in the past have predicted that progress speeds up, and I don’t see any evidence that should change our minds about that past prediction – deep learning visibly hitting a wall would have been one conceivable way, but it hasn’t happened yet)
that the time it takes to cross and overshoot the human range at a given skill has historically gotten a lot shorter and may even still be shrinking(?) (e.g., it admittedly took a long time to cross the human expert range in chess, but it took less time for Go, and less still for various academic tests or essays, etc., to the point that chess certainly doesn’t constitute a typical baseline anymore)
that progress has been quite fast lately, so it’s not intuitive to me that there’s a lot of room left to go (sure, agency and reliability and “get even better at reasoning” still remain)
that we’re pushing through compute milestones rather quickly because scaling is still strong and has some more room to go, so on priors, the chance that we cross AGI compute thresholds during this scale-up is higher than the chance that we cross them only after compute increases slow down
that o3 seems to me like significant progress in reliability, one of the things people thought would be hard to make progress on
Given all that, it seems obvious that we should place quite a lot of probability on getting to AGI in a short time (e.g., 3 years). Placing the 50% forecast feels less obvious, because I have some sympathy for the view that these things are notoriously hard to forecast and we should smear out uncertainty more than we’d intuitively think (that said, lately the trend has been that people consistently underpredict progress, and maybe we should just hard-update on that). Still, even on that “it’s prudent to smear out the uncertainty” view, let’s say that implies a median of something like 10-20 years away. Even then, if we spread the earlier half of the probability mass uniformly over those 10-20 years, with an added near-term bump because of the compute scaling arguments (we’re increasing training and runtime compute now, but this will have to slow down eventually if AGI isn’t reached in the next 3-6 years or whatever), that IMO very much implies at least 10% for the next 3 years, which feels practically enormously significant. (And I don’t agree with smearing things out too much anyway, so my own probability is closer to 50%.)
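To spell out the arithmetic behind that “at least 10%” claim, here is a minimal sketch (my own illustration, not survey data; the uniform spread of the first half of the mass and the 10-20 year medians are the assumptions from the reasoning above):

```python
# Minimal sketch of the "at least 10% within 3 years" arithmetic.
# Assumptions for illustration: the first 50% of probability mass is spread
# uniformly between now and the median, and the median lies somewhere in the
# 10-20 year range. Any near-term bump from the compute scale-up argument
# would only raise these numbers.

def p_agi_within(years: float, median_years: float) -> float:
    """P(AGI within `years`), given 50% of the mass uniform on [0, median_years]."""
    assert years <= median_years, "this toy model only covers the pre-median half"
    return 0.5 * years / median_years

for median_years in (10.0, 15.0, 20.0):
    p = p_agi_within(3.0, median_years)
    print(f"median {median_years:.0f}y -> P(AGI within 3y) = {p:.1%}")
# Prints 15.0%, 10.0%, and 7.5% respectively; the compute-scaling bump is what
# lifts even the 20-year-median case to roughly 10%.
```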
Doesn’t that discrepancy (how much answers vary between different ways of asking the question) tell you that the median AI researcher who published at these conferences hasn’t thought about this question sufficiently and/or sanely?
We know that AI expertise and AI forecasting are separate skills and that we shouldn’t expect AI researchers to be skilled at the latter. So even if researchers have thought sufficiently and sanely about the question of “what kinds of capabilities are we still missing that would be required for AGI”, they would still be lacking the additional skill of “how to translate those missing pieces into a timeline estimate”.
Suppose that a researcher’s conception of the current missing pieces is a mental object M, their timeline estimate is a probability function P, and their forecasting expertise F is a function that maps M to P. In this model, F can be pretty crazy, creating vast differences in P depending on how you ask, while M is still solid.
I think the implication is that these kinds of surveys cannot tell us anything very precise such as “is 15 years more likely than 23”, but we can use what we know about the nature of F in untrained individuals to try to get a sense of what M might be like. My sense is that answers like “20-93 years” often translate to “I think there are major pieces missing and I have no idea of how to even start approaching them, but if I say something that feels like a long time, maybe someone will figure it out in that time”, “0-5 years” means “we have all the major components and only relatively straightforward engineering work is needed for them”, and numbers in between correspond to Ms that are, well, somewhere in between those.
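To make the M → F → P framing concrete, here is a toy simulation (entirely my own sketch, not anything from the survey): the same gap judgment M, pushed through a poorly calibrated, framing-sensitive F, yields wildly different median timelines depending on how the question is phrased, while M itself never changes.

```python
import random

# Toy illustration of the M -> F -> P model (my own sketch, not the survey's
# methodology). M is a fixed judgment of "how much is still missing", in
# arbitrary units; F is a noisy, framing-sensitive mapping from M to a timeline.
random.seed(0)

M = 5.0  # the researcher's fixed sense of the remaining gap

def F(gap: float, framing_bias: float, n: int = 10_000) -> list[float]:
    """Map the same gap judgment to sampled timeline answers (years);
    `framing_bias` models how the phrasing anchors/stretches the answers."""
    return [gap * framing_bias * random.lognormvariate(0.0, 0.5) for _ in range(n)]

def median(xs: list[float]) -> float:
    xs = sorted(xs)
    return xs[len(xs) // 2]

# Same M, two phrasings of the question with different framing effects:
print(round(median(F(M, framing_bias=4.6))))   # roughly 23 years
print(round(median(F(M, framing_bias=18.4))))  # roughly 92 years
```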
Suppose that a researcher’s conception of the current missing pieces is a mental object M, their timeline estimate is a probability function P, and their forecasting expertise F is a function that maps M to P. In this model, F can be pretty crazy, creating vast differences in P depending on how you ask, while M is still solid.
Good point. This would be reasonable if you think someone can be super bad at F and still great at M.
Still, I think estimating “how big is this gap?” and “how long will it take to cross it?” might be quite related, so I expect the skills to be correlated or even strongly correlated.
I think their relationship depends on whether crossing the gap requires grind or insight. If it’s mostly about grind, then a good expert will be able to estimate it, but insight tends to be unpredictable by nature.
Another way of looking at my comment above would be that timelines of less than 5 years imply that the remaining steps mostly require grind, while timelines of 20+ years imply that some amount of insight is needed.
we’re within the human range of most skill types already
That would imply that most professions would be getting automated or seeing very significant productivity increases. My impression from following the news and seeing some studies is that this is happening within copywriting, translation, programming, and illustration. [EDIT: and transcription] Also, people are turning to chatbots for some types of therapy, though many people will still intrinsically prefer a human for that, and it’s not affecting the employment of human therapists yet. With o3, math (and maybe physics) research is starting to be affected, though mostly it hasn’t been yet.
I might be forgetting some, but the number of professions left out of that list suggests that there are quite a few skill types that are still untouched. (There are of course a lot of other professions that have seen moderate productivity boosts, but AFAIK mostly not to the point that it would affect employment.)