I included the word “sufficient” as an ass-covering move, because one facet of the problem is we don’t really know what will serve as a “sufficient” amount of training data in what context.
But, what specific types of tasks do you think machines still can’t do, given sufficient training data? If your answer is something like “physics research,” my rejoinder would be that if you could generate training data for that job, a machine could do it.
Grand pronouncements with an ass-covering move look silly :-)
One obvious problem is that you are assuming stability. Consider modeling something that changes (in complex ways) with time—like the economy of the United States. Is “training data” from the 1950s relevant to the current situation?
Generally speaking, the speed at which your “training data” gets stale puts an upper limit on the relevant data that you can possibly have and that, in turn, puts an upper limit on the complexity of the model (NNs included) that you can build on its basis.
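To make the staleness point concrete, here is a minimal sketch (not from this thread) of concept drift: a model fit on an old regime can be badly wrong once the underlying relationship shifts. The synthetic data and coefficients are hypothetical, chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# "1950s" regime: y depends on x one way. (Hypothetical relationship.)
x_old = rng.uniform(0, 10, 1000)
y_old = 2.0 * x_old + rng.normal(0, 1, 1000)

# "Current" regime: the relationship has drifted. (Also hypothetical.)
x_new = rng.uniform(0, 10, 1000)
y_new = -1.0 * x_new + 5.0 + rng.normal(0, 1, 1000)

# Fit ordinary least squares on the old regime only.
A_old = np.column_stack([x_old, np.ones_like(x_old)])
coef, *_ = np.linalg.lstsq(A_old, y_old, rcond=None)

def mse(x, y):
    """Mean squared error of the old-regime model on data (x, y)."""
    pred = np.column_stack([x, np.ones_like(x)]) @ coef
    return np.mean((y - pred) ** 2)

print(f"MSE on old regime:     {mse(x_old, y_old):.2f}")  # near the noise floor
print(f"MSE on drifted regime: {mse(x_new, y_new):.2f}")  # much larger
```

The same logic applies regardless of model class: adding more stale data does not help if the process generating current data is no longer the one the model was fit on.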
I don’t see how we know anything of the sort—that deep NNs with “sufficient training data” would be sufficient for all problems. We’ve seen them be sufficient for many different problems and can expect them to be sufficient for many more, but all of them?
I don’t think it says anything like that.