Thanks for the thoughtful reply. Here’s my counter-reply.
You frame my response as indicating “disagreements”. But my tweet said “I broadly agree” with you, and merely pointed out ways that I thought your statements were misleading. I do just straight up disagree with you about two specific non-central claims you made, which I’ll get to later. But I’d caution against interpreting me as disagreeing with you by any degree greater than what is literally implied by what I wrote.
Before I get to the specific disagreements, I’ll just bicker about some points you made in response to me. I think this sort of quibbling could last forever and it would serve little purpose to continue past this point, so I release you from any obligation you might think you have to reply to these points. However, you might still enjoy reading my response here, just to understand my perspective in a long-form non-Twitter format.
Note: I continued to edit my response after I clicked “submit”, after realizing a few errors of mine. Apologies if you read an erroneous version.
My quibbles with what you wrote
You said,
Barnett’s critique doesn’t propose an alternative trajectory of hardware progress he thinks is more likely, or spell out what that would mean for the overall forecasts, besides saying that the doubling time has been closer to 3.5 years recently.
The Bio Anchors report includes a conservative analysis that assumes a 3.5 year doubling time with (I think more importantly) a cap on overall hardware efficiency that is only 4 orders of magnitude higher than today’s, as well as a number of other assumptions that are more conservative than the main Bio Anchors report’s; and all of this still produces a “weighted average” best guess of a 50% probability of transformative AI by 2100, with only one of the “anchors” (the “evolution anchor,” which I see as a particularly conservative soft upper bound) estimating a lower probability.
The fact that the median for the conservative analysis is right at 2100 — which indeed is part of the 21st century — means that when you said, “You can run the bio anchors analysis in a lot of different ways, but they all point to transformative AI this century”, you were technically correct, by the slimmest of margins.
I had the sense that many people might interpret your statement as indicating a higher degree of confidence; that is, maybe something like “even the conservative analysis produces a median prediction well before 2100.”
Maybe no one misinterpreted you like that!
It’s very reasonable for you to think that no one would have misinterpreted you. But this incorrect interpretation of your statement is, as best I can recall, what I was thinking at the time I read the sentence.
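As an aside, some rough back-of-the-envelope arithmetic helps show why a 3.5-year doubling time combined with a cap of 4 orders of magnitude of further hardware efficiency improvement pushes things toward the end of the century. This is my own illustrative arithmetic with round numbers, not a calculation taken from the report:

```python
import math

# Back-of-the-envelope: how long until hardware price-performance has improved
# by 4 orders of magnitude, if it doubles every 3.5 years? Illustrative only;
# the conservative Bio Anchors run involves many other assumptions.
doubling_time_years = 3.5
orders_of_magnitude_cap = 4

doublings_needed = orders_of_magnitude_cap * math.log2(10)  # ~13.3 doublings
years_to_cap = doublings_needed * doubling_time_years       # ~46.5 years

print(f"doublings needed: {doublings_needed:.1f}")
print(f"years until the cap binds: {years_to_cap:.0f}")
```

Under those assumptions the hardware gains are exhausted around mid-century, after which any remaining progress has to come from spending and algorithms, which seems roughly consistent with the conservative run’s median landing near 2100.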
You also said,
This is simply an opinion, and I hope to gain more clarity over time as more effort is put into this question, but I’ll give one part of the intuition: I think that conditional on hardware efficiency improvements coming in on the low side, there will be more effort put into increasing efficiency via software and/or via hybrid approaches (e.g., specialized hardware for the specific tasks at hand; optimizing researcher-time and AI development for finding more efficient ways to use compute). So reacting to Bio Anchors by saying “I think the hardware projections are too aggressive; I’m going to tweak them and leave everything else in place” doesn’t seem like the right approach.
I intend to produce fuller thoughts on this point in the coming months. In short: I agree that we shouldn’t tweak the hardware projections and leave everything else in place. On the other hand, it seems wrong to me to expect algorithmic progress to speed up as hardware progress slows. While it’s true there would be more pressure to innovate, there would also be less hardware progress to draw on when testing innovations, and that is arguably one of the main bottlenecks to software innovation.
I am confused why you think my operationalization for timing transformative AI seems less relevant than a generic question about timing AGI (note that I am the author of one of the questions you linked).
My operationalization of transformative AI is the standard one used in Open Philanthropy reports, such as Tom Davidson’s report here, in which he wrote,
This report evaluates the likelihood of ‘explosive growth’, meaning > 30% annual growth of gross world product (GWP), occurring by 2100.
Davidson himself refers to Ajeya Cotra, writing,
In her draft report, my colleague Ajeya Cotra uses TAI to mean ‘AI which drives Gross World Product (GWP) to grow at ~20-30% per year’ – roughly ten times faster than it is growing currently.
I agree with what you write here,
There are many ways transformative AI might not be reflected in economic growth figures, e.g. if economic growth figures don’t include digital economies; if misaligned AI derails civilization; or if growth is deliberately held back, perhaps with AI help, in order to buy more time for improving things like AI alignment.
However, it’s not clear to me that the questions you linked to are free of drawbacks of equal or greater severity. To clarify, I merely said that “Metaculus has no consensus position on transformative AI”, and I think that statement is borne out by the link I gave.
Actual disagreements between us
Now I get to the real disagreements we have/had.
I replied to your statement “Specific arguments for ‘later than 2100,’ including outside-view arguments, seem reasonably close to nonexistent” by pointing to my own analysis, which produced three non-outside-view arguments for longer timelines.
You defended your statement as follows,
I’m going to stand by my statement here—these look to be simply ceteris paribus reasons that AI development might take longer than otherwise. I’m not seeing a model or forecast integrating these with other considerations and concluding that our median expectation should be after 2100. (To be clear, I might still stand by my statement if such a model or forecast is added—my statement was meant as an abbreviated argument, and in that sort of context I think it’s reasonable to say “reasonably close to nonexistent” when I mean something like “There aren’t arguments of this form that have gotten a lot of attention/discussion/stress-testing and seem reasonably strong to me or, I claim, a reasonable disinterested evaluator.”)
I have a few things to say here:
First, “these look to be simply ceteris paribus reasons that AI development might take longer than otherwise” does not back up your actual claim, which was that specific arguments seem reasonably close to nonexistent. It’s not clear to me how you’re using “ceteris paribus” in that sentence, but “ceteris paribus” is not the same as “non-specific”, which is what I responded to.
Second, I don’t think I need to build an explicit probabilistic model in order to gesture at a point. It seems reasonably clear to me that someone could build a model using the arguments I gave, which would straightforwardly put more probability mass on dates past 2100 (even if the median is still ≤ 2100). But you’re right that, since this model has yet to be built, it’s uncertain how much of an effect these considerations will have on eventual AI timelines.
In response to my claim that “[Robin Hanson’s] most recent public statements have indicated that he thinks AI is over a century away” you said,
I think the confusion here is whether ems count as transformative AI.
No, that’s not the confusion, but I can see why you’d think that’s the confusion. I made a mistake by linking the AI Impacts interview with Robin Hanson, which admittedly did not support my claim.
In fact, someone replied to the very tweet you criticize with the same objection as the one you gave. They said,
In his “when Robots rule the Earth” book he seems to think said robots will be there “sometime in the next century or so”.
I replied to them, and Robin Hanson liked my tweet, which, as far as I can tell, is a strong endorsement of my correctness in this debate.
I just want to point out that I have actually offered a model for why you might put the median date past 2100. If anything, it seems quite odd to me that people are so bullish on TAI before 2100; they seem to be doing a Bayesian update on the prior with a likelihood factor of ~6, whereas I think something like ~2 better matches the strength of the evidence we have.
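To make that arithmetic concrete, here is a minimal sketch of what those likelihood factors imply. The 20% prior below is purely illustrative (it is not a number from my model or from anyone’s report); the point is only how far a Bayes factor of ~6 versus ~2 moves a posterior:

```python
def update(prior: float, bayes_factor: float) -> float:
    """Multiply the prior odds by a Bayes factor and convert back to a probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

prior = 0.20  # hypothetical prior P(TAI before 2100), for illustration only
for factor in (6, 2):
    print(f"Bayes factor {factor}: posterior = {update(prior, factor):.2f}")
# Bayes factor 6: posterior = 0.60
# Bayes factor 2: posterior = 0.33
```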
I think it’s very easy to produce arguments based on extrapolating trends in variables such as “compute” or “model size/capacity”, etc. Around the middle of the 20th century, people did the same thing with energy consumption (remember the Kardashev scale?), and it failed quite badly. Any kind of inside view about AI timelines should, in my view, be interpreted as weaker evidence than it might at first glance appear to be.