I don’t think the distinction is meant to be merely the one between Narrow AI and AGI. The “tool AI” oracle is still supposed to be a general AI that can solve many varied sorts of problems, especially important problems like existential risk.
I think this depends on the development path. A situation in which a single team writes one piece of code that can solve any problem is very different from one in which thousands of teams write thousands of programs that interface together, with a number of humans interspersed throughout the mix, each program a narrow AI designed to solve some subset of the overall problem. The first seems incredibly dangerous (but also incredibly hard); the second seems like the sort of thing that simply fails to work, rather than failing dangerously, if its reach exceeds its grasp. FAI-style thinkers are still useful in the second scenario, but they’re no longer the core component. The first looks like the future according to EY, the second like the future according to Hanson, and the second would still be able to help solve many varied sorts of problems, including important ones like existential risk.
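To make the structural contrast concrete, here is a toy sketch of the second scenario (all names are hypothetical; this is nobody's actual proposal): narrow components, each handling one subset of a problem, chained together with a human checkpoint between stages instead of a single general optimizer at the core.

```python
from typing import Callable, List

# Purely illustrative: each "narrow AI" is a function scoped to one
# subtask; no component is a general problem-solver.
NarrowAI = Callable[[str], str]

def summarize(data: str) -> str:
    # Hypothetical stand-in for a narrow summarization program.
    return f"summary({data})"

def forecast(summary: str) -> str:
    # Hypothetical stand-in for a narrow forecasting program.
    return f"forecast({summary})"

def human_review(output: str) -> str:
    # The humans "interspersed throughout the mix": each stage's
    # output is surfaced for inspection before it propagates.
    print(f"review requested: {output}")
    return output  # a real reviewer could amend or veto here

def pipeline(stages: List[NarrowAI], data: str) -> str:
    # The second scenario has thousands of these; two suffice to show
    # the structure: narrow component, human checkpoint, repeat.
    for stage in stages:
        data = human_review(stage(data))
    return data

print(pipeline([summarize, forecast], "raw observations"))
```

The sketch isn't load-bearing; it just shows why, in the second scenario, no single component ever holds the whole problem, so the interesting safety questions live at the interfaces rather than inside any one program.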