Yes. I wonder if there’s a good explanation why narrow AI folks are so much more sensible than AGI folks on those subjects.
Because they have some experience of their products actually working, they know that 1) these things can be really powerful, even though narrow, and 2) there are always bugs.