Oops. You're totally right.

That said, I still take issue with reference class forecasting as support for this statement:
I don’t believe in feasibility of any scenario like AGI foom.
Considering that the general question “is the foom scenario feasible?” doesn’t have any concrete timelines attached to it, the speed and direction of AI research don’t bear heavily on it. All reference class forecasting can tell you is that, if foom is both possible and requires substantial further progress in AI research, it is a long way off.
Even if AGI happens, it is extraordinarily unlikely it will be any kind of foom, again based on outside view argument that virtually none of disruptive technologies were ever foom-like.
I’m not sure “disruptive technology” is the obvious reference class for AGI. The term “AGI” basically dereferences to “engineered human-level intelligence”, which more readily suggests comparisons to humans, hominids, primates, and so on.