One thing we can say is that Eliezer was wrong to claim that an AI could take off in hours to weeks: compute bottlenecks matter a lot, and they prevent a pure software singularity from happening.
So we can fairly clearly call this a win for slow-takeoff views, though I do think Paul's operationalization is wrong for technical reasons.
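To gesture at the dynamic I have in mind, here is a toy sketch. To be clear, this is not any published takeoff model or anyone's actual estimate; the functional form and every parameter below are made up purely for illustration. Software progress feeds back into research productivity, and the question is what happens when you cap how much of that research can actually be run, as a stand-in for compute-hungry experiments.

```python
# Toy sketch only: made-up functional form and parameters, not a real takeoff model.
def steps_to_reach(target, r=0.02, compute=1.0, bottleneck=None, max_steps=10_000):
    """Steps until the software level q reaches `target`, or None if it never does.

    Each step, effective research = compute * q (better AI software speeds up
    AI research); `bottleneck`, if set, caps effective research, standing in
    for experiments that need scarce physical compute rather than better code.
    """
    q = 1.0
    for step in range(1, max_steps + 1):
        effective_research = compute * q
        if bottleneck is not None:
            effective_research = min(effective_research, bottleneck)
        q *= 1 + r * effective_research
        if q >= target:
            return step
    return None

for target in (1e3, 1e6, 1e9):
    uncapped = steps_to_reach(target)
    capped = steps_to_reach(target, bottleneck=5.0)
    print(f"reach {target:.0e}x   uncapped: {uncapped} steps   capped: {capped} steps")
```

In the uncapped run, each extra factor of a thousand arrives only a step or two after the last one (the feedback loop goes hyperbolic); in the capped run, each extra factor of a thousand costs roughly the same number of additional steps (ordinary exponential growth). That qualitative difference, not the particular numbers, is the claim.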
I strongly disagree; I think hours-to-weeks is still on the menu. Also, note that Paul himself said this:
My intuition is that by the time that you have an AI which is superhuman at every task (e.g. for $10/h of hardware it strictly dominates hiring a remote human for any task) then you are likely weeks rather than months from the singularity.
But mostly this is because I think “strictly dominates” is a very hard standard which we will only meet long after AI systems are driving the large majority of technical progress in computer software, computer hardware, robotics, etc. (Also note that we can fail to meet that standard by computing costs rising based on demand for AI.)
So one argument for fast takeoff is: what if 'strictly dominates' turns out to be in reach? What if, e.g., we get AgentGPT-6 and it's good enough to massively automate AI R&D, and then it synthesizes knowledge from biology, neurology, psychology, and ML to figure out why the human brain is so data-efficient, and boom, after a few weeks of tinkering we have something as data-efficient as the human brain but also bigger, faster, and able to learn in parallel from distributed copies? And suppose we've also built up some awesome learning environments/curricula to give it ten lifetimes of elite tutoring & self-play in all the important subjects. So we jump from 'massively automate AI R&D' to 'strictly dominates' in a few weeks?
Also, doesn’t Tom’s model support a pure software singularity being possible?
Thanks for sharing your models, btw; that's good of you. I strongly agree that, conditional on your timelines/model settings, Paul will overall come out looking significantly more correct than Eliezer.
Admittedly, I think the key disagreement I have with fast-takeoff views is that I don't find a pure-software singularity that likely: eventually AIs will have to interface with the physical world (e.g., via robotics) to get much done, or get humans to do things for them, and neither of those is fast.
To be clear, I think this can be done on a time-scale of years, and is barely doable on a time-scale of months, but I think the physical interface is the rate-limiting step to takeoff. A good argument that the physical side can move as fast as software, a good argument that physical interfaces don't matter at all for the AI use cases that transform the world, or good evidence that the physical-interface bottleneck doesn't exist or doesn't matter in practice would each make me put significantly higher credence in fast-takeoff views.
Similarly, if it turns out to be as easy to create very-high-quality robotics and the accompanying simulation software as it is to create ordinary software, that would shift my position significantly towards fast-takeoff views.
That said, I was being too harsh in totally ruling that out, but I still assign it reasonably low probability given my world models of how AI goes.
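To make the physical-interface point a bit more concrete, here's an Amdahl's-law-style toy calculation; this is just my own framing, and the fraction and speedups below are made up for illustration rather than a forecast.

```python
# Amdahl's-law-style sketch: made-up numbers, purely illustrative.
def overall_speedup(software_speedup, physical_speedup, physical_fraction):
    """Overall speedup when only (1 - physical_fraction) of the work can be
    accelerated by software, and the rest is limited to physical_speedup."""
    software_fraction = 1.0 - physical_fraction
    return 1.0 / (software_fraction / software_speedup
                  + physical_fraction / physical_speedup)

# Even with effectively unlimited software acceleration, if 10% of the work
# is physical and can only be sped up 5x, the whole pipeline caps out near 50x.
print(overall_speedup(software_speedup=1e9, physical_speedup=5.0, physical_fraction=0.10))
```

The exact numbers don't matter; the point is that as long as the physical share of the work doesn't shrink toward zero, the overall speedup is capped near physical_speedup / physical_fraction no matter how fast the software side gets.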