I can see how both Yudkowsky’s and Hanson’s arguments can be problematic because they either assume fast or slow takeoff scenarios, respectively, and then nearly everything follows from that. So I can imagine why you’d disagree with every one of Hanson’s paragraphs based on that. If you think there’s something he said that is uncorrelated with the takeoff speed disagreement, I might be interested, but I don’t agree with Hanson about everything either, so I’m mainly only interested if it’s also central to AI x-risk. I don’t want you to waste your time.
I guess if you are taking those constraints into consideration, then it is really just a probabilistic feeling about how much those constraints will slow down AI growth? To me, those constraints each seem massive, and getting around all of them within hours or days would be nearly impossible, no matter how intelligent the AI was. Is there any other way we can distinguish between our beliefs?
If I recall correctly from your writing, you have extremely near-term timelines. Is that correct? I don’t think AGI is likely to occur sooner than 2031, based on this criterion: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
Is this a prediction that we can use to decide in the future whose model of the world today was more reasonable? I know it’s a timelines question, but timelines are pretty correlated with takeoff speeds I guess.
I think there are probably disagreements I have with Hanson that don’t boil down to takeoff speeds disagreements, but I’m not sure. I’d have to reread the article again to find out.
To be clear, I definitely don’t expect takeoff to take hours or days. Quantitatively, I expect something like what takeoffspeeds.com says when you input the values of the variables I mentioned above. Eyeballing it, it looks like it takes slightly more than 3 years to go from 20% R&D automation to 100% R&D automation, and then about 6 months to go from 100% R&D automation to “starting to approach the fundamental physical limits of how smart minds running on ordinary human supercomputers can be,” during which period about 8 OOMs of algorithmic efficiency are crossed. To be clear, I don’t take that second bit very seriously at all; I think the takeoffspeeds.com model is much better as a model of pre-AGI takeoff than of post-AGI takeoff. But I do think that we’ll probably go from AGI to superintelligent AGI in less than six months. How long it takes to get to nanotech (or name your favorite cool sci-fi technology) is less clear to me, but I expect it to be closer to one year than ten, and possibly more like one month. I would love to discuss this more & read attempts to estimate these quantities.
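Just to spell out the scale of that last figure, here’s a quick back-of-the-envelope sketch (my own illustrative arithmetic, not an output of the takeoffspeeds.com model itself): 8 OOMs of algorithmic efficiency gained in roughly six months works out to an effective doubling time of about a week during that phase.

```python
import math

# Back-of-the-envelope: what does "8 OOMs of algorithmic efficiency in ~6 months" imply?
# Illustrative only; the 8-OOM and 6-month figures are eyeballed from takeoffspeeds.com output.

ooms = 8                  # orders of magnitude of algorithmic efficiency gained
months = 6                # time window over which they are gained
days = months * 30.4      # approximate number of days in that window

doublings = ooms * math.log2(10)        # 8 OOMs is about 26.6 doublings
doubling_time_days = days / doublings   # implied effective doubling time

print(f"{doublings:.1f} doublings in {days:.0f} days "
      f"-> doubling time of ~{doubling_time_days:.1f} days")
# Prints roughly: 26.6 doublings in 182 days -> doubling time of ~6.9 days
```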
I didn’t realize you had put so much time into estimating takeoff speeds. I think this is a really good idea.
This seems substantially slower than the implicit takeoff speed estimates of Eliezer, but maybe I’m missing something.
I think the amount of time you described is probably shorter than I would guess. But I haven’t put nearly as much time into it as you have. In the future, I’d like to.
Still, my guess is that this amount of time is long enough for there to be multiple competing groups, rather than only one. So it seems to me that there would probably be competition in the world you’re describing, making a singleton AI less likely.
Do you think that there will almost certainly be a singleton AI?
It is substantially slower than the takeoff speed estimates of Eliezer, yes. I’m definitely disagreeing with Eliezer on this point. But as far as I can tell my view is closer to Eliezer’s than to Hanson’s, at least in upshot. (I’m a bit confused about this—IIRC Hanson also said somewhere that takeoff would last only a couple of years? Then why is he so confident it’ll be so broadly distributed, why does he think property rights will be respected throughout, why does he think humans will be able to retire peacefully, etc.?)
I also think it’s plausible that there will be multiple competing groups rather than one singleton AI, though I’d put that at no more than 80%; I can easily imagine it just being one singleton.
I think that even if there are multiple competing groups, however, they are very likely to coordinate to disempower humans. From the perspective of the humans it’ll be as if the AIs were a singleton, even though from the perspective of the AIs it’ll be some interesting multipolar conflict (one that eventually ends with some negotiated peaceful settlement, I imagine).
After all, this is what happened historically with colonialism: colonial powers (and individuals within conquistador expeditions) were constantly fighting each other, yet that competition didn’t prevent the peoples they colonized from being collectively disempowered.