I don’t quite know what “AGI might have happened by now” means.
I thought that we might have built transformative AI by 2023 (I gave it about 5% in 2010 and about 2% in 2018), and I’m not sure that Eliezer and I have meaningfully different timelines. So obviously “now” doesn’t mean 2023.
If “now” means “When AI is having ~$1b/year of impact,” and “AGI” means “AI that can do anything a human can do better” then yes, I think that’s roughly what I’m saying.
But an equivalent way of putting it is that Eliezer thinks weak AI systems will have very little impact, and I think weak AI systems will have a major impact, and so the more impact weak AI systems have the more evidence it gives for my view.
One way of putting it makes it seem like Eliezer would have shorter timelines, since AGI might happen at any moment. Another way of putting it makes it seem like Eliezer may have longer timelines, because nothing happens in the run-up to AGI, and the early AI applications will drive increases in investment and will eventually accelerate R&D.
I don’t know whether Eliezer in fact has shorter or longer timelines, because I don’t think he’s commented publicly recently. So it seems like either way of putting it could be misleading.
Ah, I’m pretty sure Eliezer has shorter timelines than you. He’s been cagey about it, but he sure acts like it, and various of his public statements seem to suggest it. I can try to dig them up if you like.
Yep, that’s one way of putting what I said. My model of EY’s view is: Pre-AGI systems will ramp up in revenue & impact at some rate, perhaps the rate that they have ramped up so far. Then at some point we’ll actually hit AGI (or seed AGI) and then FOOM. And that point MIGHT happen later, when AI is already a ten-trillion-dollar industry, but it’ll probably happen before then. So… I definitely wasn’t interpreting Yudkowsky in the longer-timelines way. His view did imply that maybe nothing super transformative would happen in the run-up to AGI, but not because pre-AGI systems are weak; rather, because there just won’t be enough time for them to transform things before AGI comes.
Anyhow, I’ll stop trying to speak for him.
My model is very discontinuous; I try to think of AI as AI (and avoid the term AGI).
And sure, intelligence has some generality measure G, and everything we have built so far is low-G[1] (humans have high G).
Anyway, at the core I think the jump will happen when an AI system learns the meta-task/goal “search and evaluate”[2]. Once that happens[3], G would start increasing very fast (compared to before), and adding resources to such a system would just accelerate this[4].
And I don’t see how that view diverges from this reality, or from a reality where it’s not possible to get there, until, obviously, we get there.
I can’t speak to what people have built / are building in private.
Whenever people say AGI, I think AI that can do “search and evaluate” recursively.
And my intuition says that requires a system with much higher G than current ones, although looking at how that likely played out for us, the bar might be much lower than my intuition leads me to believe.
That is contingent on architecture: if we build a system that cannot scale easily, or at all, then this won’t happen.
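For concreteness, the recursive “search and evaluate” loop described above can be sketched as a toy hill-climber. This is purely illustrative and everything in it is a stand-in I’m assuming: a single scalar g plays the role of G, the evaluator is a trivial benchmark, and making the search step size grow with g is just one way to model the claim that improvements compound and resources accelerate the process.

```python
# Toy sketch of a recursive "search and evaluate" loop.
# Hypothetical throughout: g is a scalar stand-in for generality G,
# evaluate() is a placeholder benchmark, and search() proposes
# modifications whose step size grows with g (so gains compound).
import random


def evaluate(g: float) -> float:
    """Placeholder benchmark score; here, higher G simply scores higher."""
    return g


def search(g: float, rng: random.Random) -> float:
    """Propose a modified system; a more capable system searches in bigger steps."""
    return g + rng.uniform(-1.0, 1.0) * (1.0 + g)


def self_improve(g0: float = 1.0, steps: int = 50, seed: int = 0) -> float:
    """Repeatedly search for candidates and keep only evaluated improvements."""
    rng = random.Random(seed)
    g = g0
    for _ in range(steps):
        candidate = search(g, rng)
        if evaluate(candidate) > evaluate(g):  # keep only improvements
            g = candidate
    return g


final_g = self_improve()
```

Because accepted step sizes scale with the current g, growth accelerates as capability rises, which is the discontinuity intuition in miniature; with a fixed step size the same loop would only grow linearly.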