Strongly upvoted!
I endorse the entirety of this post, and if anything I hold some objections/reservations more strongly than you have presented them here[1].
I very much appreciate that you have grounded these objections firmly in the theory and practice of modern machine learning.
In particular, Yudkowsky’s claim that a superintelligence is efficient w.r.t. humanity on all cognitive tasks is IMO flat-out infeasible/unattainable (insofar as we include human-aligned technology when evaluating the capabilities of humanity).
To respond to a footnote:
I agree, in a trivial sense: one can always construct trivial tasks that stump an AI because the AI, by definition, cannot solve them, like "being a closet".
But that’s the only case where I expect impossibility/infeasibility for AI.
In particular, I suspect that any attempt to extend this kind of impossibility argument to non-trivial domains probably fails.