Interesting post—I look forward to reading the rest of this series! (Have you considered making it into a “sequence”?)
Summary of my comment: It seems like this post lists variables that should inform views on how hard developing an AGI will be, but omits variables that should inform views on how much effort will be put into that task at various points, and how conducive the environment will be to those efforts. And it seems to me that AGI timelines are a function of all three of those high-level factors.
(Although note that I’m far from being an expert on AI timelines myself. I’m also not sure if the effort and conduciveness factors can be cleanly separated.)
Detailed version: I was somewhat surprised to see that the “background variables” listed all seemed fairly focused on things like neuroscience/biology, with none focused on other economic, scientific, or cultural trends that might impact AI R&D or its effectiveness. By the latter, I mean things like the following (I spitballed these quickly just now, and some might overlap somewhat):
whether various Moore’s-law-type trends will continue, or slow down, or speed up, and when
relatedly, whether there’ll be major breakthroughs in technologies other than AI which feed into (or perhaps reduce the value of) AI R&D
whether investment (including e.g. government funding) in AI R&D will increase, decrease, or remain roughly constant
whether we’ll see a proliferation of labs working on “fundamental” AI research, or a consolidation, or not much change
whether there’ll be government regulation of AI research that slows it down, and by how much
whether AI will come to be strongly seen as a key military technology, and/or governments nationalise AI labs, and/or governments create their own major AI labs
whether there’ll be another “AI winter”
I don’t have any particular reason to believe that views on those specific things I’ve mentioned would do a better job at explaining disagreements about AGI timelines than the variables mentioned in this post would. Perhaps most experts already agree about the things I mentioned, or see them as not very significant. But I’d at least guess that there are things along those lines which either do or should inform views on AGI timelines.
I’d also guess that factors like those I’ve listed would seem increasingly important as we consider increasingly long timelines, and as we consider “slow” or “moderate” takeoff scenarios (like the scenarios in “What failure looks like”). E.g., I doubt there’d be huge changes in interest in, funding for, or regulation of AI over the next 10 years (though it’s very hard to say), if AI doesn’t become substantially more influential over that time. But over the next 50 years, or if we start seeing major impacts of AI before we reach something like AGI, it seems easy to imagine changes in those factors occurring.