I agree that top mainstream AI guy Peter Norvig was way the heck more sensible than the reference class of declared “AGI researchers” when I talked to him about FAI and CEV, and that estimates should be substantially adjusted accordingly.
Yes. I wonder if there’s a good explanation why narrow AI folks are so much more sensible than AGI folks on those subjects.
Because they have some experience of their products actually working, they know that 1) these things can be really powerful, even though they're narrow, and 2) there are always bugs.