To clarify, when I mentioned growth curves, I wasn’t talking about timelines, but rather takeoff speeds.
In my view, real-world growth follows sigmoidal curves that eventually plateau, rather than indefinite exponential growth based on exploiting a single resource. A hypothetical AI at a human intelligence level would face constraints on the resources that allow it to improve, such as bandwidth, capital, skills, private knowledge, energy, space, robotic manipulation capabilities, material inputs, cooling requirements, legal and regulatory barriers, social acceptance, cybersecurity concerns, competition with humans and other AIs, and of course safety concerns (i.e. it would have its own alignment problem to solve).
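For intuition, the sigmoidal shape I mean can be sketched in a few lines. This is a generic logistic curve (my own illustration, not a model of any particular constraint): growth that looks exponential early on, then flattens as it approaches a resource ceiling.

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Logistic curve: roughly exponential early on, plateauing at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Early in the curve, each unit of time multiplies output by ~e (looks exponential);
# late in the curve, growth nearly stops (the plateau).
early_growth = logistic(-4) / logistic(-5)  # ~2.7x per unit time
late_growth = logistic(5) / logistic(4)     # ~1.01x per unit time
print(round(early_growth, 2), round(late_growth, 2))
```

The point of the sketch is just that an observer sitting on the early part of the curve cannot distinguish it from a pure exponential; the disagreement is about where the ceiling sits.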
I’m sorry you resent that implication. I certainly didn’t mean to offend you or anyone else. It was my honest impression, based for example on the fact that there didn’t seem to be much, if any, discussion of Robin’s recent article on AI on LW. It just seems to me that much of LW has moved past the foom argument and is solidly on Eliezer’s side, potentially due to selection effects: non-foomers like me get heavily downvoted, as I was on my top-level comment.
I too was talking about takeoff speeds. The website I linked to is takeoffspeeds.com.
Me & the other LWers you criticize do not expect indefinite exponential growth based on exploiting a single resource; we are well aware that real-world growth follows sigmoidal curves. We are also aware of those constraints and considerations, and we are attempting to model them with things like the model underlying takeoffspeeds.com, plus various other arguments, scenario exercises, etc.
I agree that much of LW has moved past the foom argument and is solidly on Eliezer’s side relative to Robin Hanson; Hanson’s views seem increasingly silly as time goes on (though they seemed much more plausible a decade ago, before e.g. the rise of foundation models and the shortening of timelines to AGI). The debate is now more like Yud vs. Christiano/Cotra than Yud vs. Hanson. I don’t think this is primarily because of selection effects, though I agree that selection effects do tilt the table towards foom here; sorry about that, & thanks for engaging. I don’t think your downvotes are evidence for selection effects, though; in fact, the pattern of votes (lots of upvotes, but disagreement-downvotes) is evidence for the opposite.
I just skimmed Hanson’s article and find I disagree with almost every paragraph. If you think there’s a good chance you’ll change your mind based on what I say, I’ll take your word for it & invest time in giving a point-by-point rebuttal/reaction.
I can see how both Yudkowsky’s and Hanson’s arguments can be problematic, because they assume fast and slow takeoff scenarios, respectively, and then nearly everything follows from that assumption. So I can imagine why you’d disagree with every one of Hanson’s paragraphs on that basis. If you think there’s something he said that is uncorrelated with the takeoff-speed disagreement, I might be interested, but I don’t agree with Hanson about everything either, so I’m mainly only interested if it’s also central to AI x-risk. I don’t want you to waste your time.
I guess if you are taking those constraints into consideration, then the disagreement is really just a probabilistic feeling about how much those constraints will slow down AI growth? To me, each of those constraints seems massive, and getting around all of them within hours or days would be nearly impossible, no matter how intelligent the AI was. Is there any other way we can distinguish between our beliefs?
If I recall correctly from your writing, you have extremely near-term timelines. Is that correct? I don’t think AGI is likely to arrive sooner than 2031, based on the criteria here: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
Is this a prediction we could use to decide, in the future, whose model of the world today was more reasonable? I know it’s a timelines question, but timelines are pretty correlated with takeoff speeds, I guess.
I think there are probably disagreements I have with Hanson that don’t boil down to takeoff-speed disagreements, but I’m not sure. I’d have to reread the article to find out.
To be clear, I definitely don’t expect takeoff to take hours or days. Quantitatively, I expect something like what takeoffspeeds.com says when you input the values of the variables I mentioned above. Eyeballing it, it takes slightly more than 3 years to go from 20% R&D automation to 100% R&D automation, and then about 6 months to go from 100% R&D automation to “starting to approach the fundamental physical limits of how smart minds running on ordinary human supercomputers can be,” during which period about 8 OOMs of algorithmic efficiency are crossed. To be clear, I don’t take that second bit very seriously at all; I think the takeoffspeeds.com model is much better as a model of pre-AGI takeoff than of post-AGI takeoff. But I do think we’ll probably go from AGI to superintelligent AGI in less than six months. How long it takes to get to nanotech (or name your favorite cool sci-fi technology) is less clear to me, but I expect it to be closer to one year than ten, and possibly more like one month. I would love to discuss this more & read attempts to estimate these quantities.
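As a sanity check on that “8 OOMs in ~6 months” figure (my own back-of-the-envelope arithmetic, not part of the takeoffspeeds.com model itself), here is the doubling time it implies:

```python
import math

# 8 orders of magnitude of algorithmic efficiency gains in ~6 months:
# what doubling time does that imply?
ooms = 8
days = 182.5                          # ~6 months
doublings = ooms * math.log2(10)      # 8 OOMs expressed as doublings, ~26.6
doubling_time = days / doublings      # ~6.9 days per doubling
print(round(doublings, 1), round(doubling_time, 1))
```

A sub-weekly doubling of algorithmic efficiency, sustained for half a year, is one way to see why the post-AGI portion of the model shouldn’t be taken too literally.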
I didn’t realize you had put so much time into estimating takeoff speeds. I think this is a really good idea.
This seems substantially slower than Eliezer’s implicit takeoff-speed estimates, but maybe I’m missing something.
The amount of time you describe is probably shorter than I would guess. But I haven’t put nearly as much time into it as you have; in the future, I’d like to.
Still, my guess is that this amount of time is enough for there to be multiple competing groups, rather than only one. So it seems to me that there would probably be competition in the world you’re describing, making a singleton AI less likely.
Do you think that there will almost certainly be a singleton AI?
It is substantially slower than the takeoff speed estimates of Eliezer, yes. I’m definitely disagreeing with Eliezer on this point. But as far as I can tell my view is closer to Eliezer’s than to Hanson’s, at least in upshot. (I’m a bit confused about this—IIRC Hanson also said somewhere that takeoff would last only a couple of years? Then why is he so confident it’ll be so broadly distributed, why does he think property rights will be respected throughout, why does he think humans will be able to retire peacefully, etc.?)
I also think it’s plausible that there will be multiple competing groups rather than one singleton AI, though not more than 80% plausible; I can easily imagine it just being one singleton.
I think that even if there are multiple competing groups, however, they are very likely to coordinate to disempower humans. From the perspective of the humans it’ll be as if they are facing an AI singleton, even though from the perspective of the AIs it’ll be some interesting multipolar conflict (that eventually ends with some negotiated peaceful settlement, I imagine).
After all, this is what happened historically with colonialism. Colonial powers (and individuals within conquistador expeditions) were constantly fighting each other.
It seems worth noting that the views and economic modeling you discuss here seem broadly in keeping with Christiano/Cotra (but with more aggressive constants).
Yep! On both timelines and takeoff speeds I’d describe my views as “Like Ajeya Cotra’s and Tom Davidson’s but with different settings of some of the key variables.”