What you are basically saying is “Yudkowsky thought AGI might have happened by now, whereas I didn’t; AGI hasn’t happened by now, therefore we should update from Yud to me by a factor of ~1.5 (and also from Yud to the AGI-is-impossible crowd, for that matter).” I agree.
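To spell out the arithmetic behind that “~1.5”, here is a minimal sketch of the odds bookkeeping; the probabilities below are invented for illustration and are not anyone’s actual stated credences:

```python
# Toy Bayes-factor bookkeeping for the "update by ~1.5" claim.
# Both probabilities are made up purely for illustration.
p_no_agi_yet_under_short_timelines_view = 0.5   # a view on which AGI might well have happened by now
p_no_agi_yet_under_gradual_view = 0.75          # a view on which AGI by now was unlikely

# Observing "no AGI yet" shifts the odds between the two views by the likelihood ratio.
likelihood_ratio = p_no_agi_yet_under_gradual_view / p_no_agi_yet_under_short_timelines_view
print(likelihood_ratio)  # 1.5

# Starting from even odds, the posterior odds now favor the gradual view 1.5 : 1.
prior_odds = 1.0
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # 1.5
```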
Here’s what I think is going to happen (this is something like my modal projection; obviously I have a lot of uncertainty. Also, I don’t expect the world economy to be transformed as fast as this projection suggests, due to schlep and regulation, so things will probably take a bit longer than depicted here, but only a bit):

[screenshot of a takeoffspeeds.com model projection]
No pressure, but I’d love it if you found time someday to fiddle with the settings of the model at takeoffspeeds.com and then post a screenshot of your own modal or median future. I think that going forward, we should all strive to leave this old “fast vs. slow takeoff” debate in the dust & talk more concretely about variables in this model, or in improved models.
I don’t quite know what “AGI might have happened by now” means.
I thought that we might have built transformative AI by 2023 (I gave it about 5% in 2010 and about 2% in 2018), and I’m not sure that Eliezer and I have meaningfully different timelines. So obviously “now” doesn’t mean 2023.
If “now” means “When AI is having ~$1b/year of impact,” and “AGI” means “AI that can do anything a human can do better” then yes, I think that’s roughly what I’m saying.
But an equivalent way of putting it is that Eliezer thinks weak AI systems will have very little impact, and I think weak AI systems will have a major impact, and so the more impact weak AI systems have, the more evidence it gives for my view.
One way of putting it makes it seem like Eliezer would have shorter timelines, since AGI might happen at any moment. Another way of putting it makes it seem like Eliezer may have longer timelines, because on his view nothing much happens in the run-up to AGI, whereas on my view the early AI applications will drive increases in investment and will eventually accelerate R&D.
I don’t know whether Eliezer in fact has shorter or longer timelines, because I don’t think he’s commented publicly recently. So it seems like either way of putting it could be misleading.
Ah, I’m pretty sure Eliezer has shorter timelines than you. He’s been cagey about it, but he sure acts like it, and various of his public statements seem to suggest it. I can try to dig them up if you like.
Yep, that’s one way of putting what I said. My model of EY’s view is: pre-AGI systems will ramp up in revenue & impact at some rate, perhaps the rate at which they have ramped up so far. Then at some point we’ll actually hit AGI (or seed AGI) and then FOOM. And that point MIGHT happen later, when AI is already a ten-trillion-dollar industry, but it’ll probably happen before then. So… I definitely wasn’t interpreting Yudkowsky in the longer-timelines way. His view did imply that maybe nothing super transformative would happen in the run-up to AGI, but not because pre-AGI systems are weak; rather, because there just won’t be enough time for them to transform things before AGI arrives.
Anyhow, I’ll stop trying to speak for him.
My model is very discontinuous; I try to think of AI as AI (and avoid the term AGI).
And sure, intelligence has some G measure, and everything we have built so far is low-G[1] (humans have high G).
Anyway, at the core I think the jump will happen when an AI system learns the meta-task / goal “Search and evaluate”[2]. Once that happens[3], G would start increasing very fast (compared to before), and adding resources to such a system would just accelerate this[4].
And I don’t see how that diverges from this reality, or from a reality where it’s not possible to get there, until obviously we get there.
I can’t speak to what people have built / are building in private.
Whenever people say AGI, I think of AI that can do “search and evaluate” recursively.
And my intuition says that requires a system with much higher G than current ones, although looking at how that likely played out for us, the threshold might be much lower than my intuition leads me to believe.
That is contingent on architecture: if we build a system that cannot scale easily (or at all), then this won’t happen.
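Not to put words in anyone’s mouth, but here is a toy sketch of the “search and evaluate” loop as I read the comments above, just to make the feedback structure concrete. Everything in it (the scalar capability score, the random proposal step, the coupling between capability and search budget) is invented purely for illustration:

```python
import random

def evaluate(capability: float) -> float:
    # Stand-in for whatever benchmark the system scores itself on.
    return capability

def search(capability: float, budget: int) -> float:
    # "Search": try `budget` candidate self-modifications, keep the best-scoring one.
    candidates = [capability + random.gauss(0, 0.1) for _ in range(budget)]
    return max(candidates, key=evaluate)

capability = 1.0
budget = 10  # "adding resources" corresponds to raising this budget
for step in range(20):
    candidate = search(capability, budget)
    if evaluate(candidate) > evaluate(capability):
        capability = candidate
    # The recursive part: a more capable system searches more effectively,
    # which is what would make G start climbing much faster than before.
    budget = int(10 * capability)

print(f"final capability score: {capability:.2f}")
```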
+1 for the push for more quantitative models.
(though I would register that trying to form a model with so many knobs to turn is really daunting, so I expect I personally will procrastinate a bit before actually putting one together, and I suspect others may feel similarly)
I mean it’s not so daunting if you mostly just defer to Tom & accept the default settings, but then tweak a few settings here and there.
Also, it’s very cheap to fiddle with each setting one by one to see how much of an effect it has. Most of them don’t have much of an effect, so you really only need to focus on a few of them (such as the training requirements and the FLOP gap).
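To illustrate that workflow, here is a rough sketch of a one-at-a-time sweep. This is not the actual takeoffspeeds.com code; `run_model`, the parameter names, and the numbers are all placeholders you would swap for the real model and its knobs:

```python
import math

# Illustrative default settings (placeholder names and values, not the model's real ones).
defaults = {
    "training_requirements_flop": 1e36,
    "flop_gap": 1e4,
    "hardware_growth_rate": 0.3,
}

def run_model(params: dict) -> float:
    # Placeholder for the real takeoff model; returns some summary metric
    # (e.g. years until full automation). Made-up formula so the sketch runs.
    return (math.log10(params["training_requirements_flop"]) / 10
            + math.log10(params["flop_gap"])
            - 10 * params["hardware_growth_rate"])

def one_at_a_time_sweep(defaults: dict, factor: float = 10.0):
    # Tweak each setting by `factor`, holding the rest at defaults,
    # and record how much the output moves.
    baseline = run_model(defaults)
    effects = {}
    for name, value in defaults.items():
        tweaked = dict(defaults, **{name: value * factor})
        effects[name] = run_model(tweaked) - baseline
    # Sort knobs by how much the tweak moves the output.
    return sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True)

for name, delta in one_at_a_time_sweep(defaults):
    print(f"{name}: changes output by {delta:+.2f}")
```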
Why slow hardware takeoff (just 10^6× in 4 years, though measuring in dollars is confusing)? I expect it a bit later, but faster, because nanotech breaks continuity of manufacturing tech levels, channeling theoretical research directly, and with a reasonable amount of compute, performing the relevant research is not a bottleneck. This would go from modern circuit fabs to disassembling the moon and fueling it with fusion (or something with that sort of impact), immediately and without any intermediate industrial development process.
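For reference, assuming “10^6× in 4 years” means a millionfold hardware increase over four years, the implied growth rate works out roughly like this:

```python
import math

total_growth = 1e6   # millionfold increase (my reading of the comment above)
years = 4

annual_factor = total_growth ** (1 / years)                                # ~31.6x per year
doubling_time_months = 12 * years * math.log(2) / math.log(total_growth)   # ~2.4 months

print(f"{annual_factor:.1f}x per year, doubling every {doubling_time_months:.1f} months")
```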
I don’t think the model applies once you get to strongly superhuman systems (so, by mid-2027 in the scenario depicted). At that point, yeah, I’d expect the whole economy to be furiously bootstrapping towards nanotech, or maybe to be there already. Then the disassemblies begin.
Also, as I mentioned, I think the model might overestimate the speed at which new AI advances can be rolled out into the economy, and converted into higher GDP and more/better hardware. Thus I think we completely agree.