I disagree with the first one. I think that the spectrum of human-level AGI is actually quite wide, and that for most tasks we’ll get AGIs that are better than most humans significantly before we get AGIs that are better than all humans. But the latter is much more relevant for recursive self-improvement, because it’s bottlenecked by innovation, which is driven primarily by the best human researchers. E.g. I think it’d be pretty difficult to speed up AI progress dramatically using millions of copies of an average human.
Also, by default I think people talk about FOOM in a way that ignores regulations, governance, etc., whereas in fact I expect these to put significant constraints on the pace of progress after human-level AGI.
If we do have millions of copies of the best human researchers, without governance constraints on the pace of progress, then compute becomes the biggest constraint. It seems plausible that you get a software-only singularity, but it also seems plausible that you need to wait for AI-driven innovations in chip manufacturing to actually cash out in the real world.
I broadly agree with the second one, though I don’t know how many people there are left with 30-year timelines. But 20 years to superintelligence doesn’t seem unreasonable to me (though it’s above my median). In general I’ve recently updated towards thinking Kurzweil was more right than I used to believe about there being a significant gap between AGI and ASI. Part of this is because I expect the problem of multi-agent credit assignment over long time horizons to be difficult.