Holden seems to think this sort of development would happen naturally with the sort of AGI researchers we have nowadays, and I wish he’d spent a few years arguing with some of them to get a better picture of how unlikely this is.
While I can’t comment on AGI researchers, I think you underestimate e.g. more mainstream AI researchers such as Stuart Russell and Geoff Hinton, or cognitive scientists like Josh Tenenbaum, or even more AI-focused machine learning people like Andrew Ng, Daphne Koller, Michael Jordan, Dan Klein, Rich Sutton, Judea Pearl, Leslie Kaelbling, and Leslie Valiant (and this list is no doubt incomplete). They might not be claiming that they’ll have AI in 20 years, but that’s likely because they are actually grappling with the relevant issues and therefore see how hard the problem is likely to be.
It doesn't strike me as completely unreasonable that we could have a major breakthrough that gives us AI in 20 years, but it's hard to see what the candidate would be. I've only been thinking about these issues for a couple of years, though, so I still maintain a pretty high degree of uncertainty about all of these claims.
I do think I basically agree with you re: inductive learning and program creation, though. When you say non-self-modifying Oracle AI, do you also mean that the Oracle AI doesn’t get to do inductive learning? Because I suspect that inductive learning of some sort is fundamentally necessary, for reasons that you yourself nicely outline here.
I agree that top mainstream AI guy Peter Norvig was way the heck more sensible than the reference class of declared “AGI researchers” when I talked to him about FAI and CEV, and that estimates should be substantially adjusted accordingly.
Yes. I wonder if there’s a good explanation why narrow AI folks are so much more sensible than AGI folks on those subjects.
Because they have some experience of their products actually working, they know that 1) these things can be really powerful, even though narrow, and 2) there are always bugs.