Again, the fact that he got the reasons right (the hype cycle, the fact that human chess players were doing something very different from what the AI designers were doing, etc.) lifts him up a bit. I don’t know what he’s been up to since then, though.
Can we find a good baseline predictor today who’s performing as well?
Absolutely: “the singularity will never happen, MIRI is wasting its time.”
Can you present the arguments for this (at the level of Dreyfus in http://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf)?
Sorry, I may not be explaining myself very well. I agree that Dreyfus is quite smart, and writes well. I also agree that he may have had good arguments against AI progress in the 1960s. But I don’t agree that this is how you should evaluate prophets. Prophets are doing prediction—a standard statistical problem. The way to evaluate predictors is on datasets. A single prediction success, no matter how eloquent, is not really proof of the efficacy of the prophet.
If, on the other hand, a prophet consistently applies an algorithm and predicts correctly, well, that becomes interesting and worthy of further study. The modern-day prophet is Nate Silver, a humble statistician. He said recently that he is uncomfortable with his fame because all he is doing is simple statistical models plus a little special sauce relevant to the domain. So my question to you is this: in what way can you improve your prediction algorithm by studying Dreyfus?
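To make the point about evaluating predictors on datasets concrete, here is a minimal sketch of one standard scoring rule, the Brier score (the mean squared error between a forecast probability and the 0/1 outcome). The forecasts and outcomes below are invented for illustration, not real data.

```python
def brier_score(forecasts):
    """forecasts: list of (predicted_probability, outcome) pairs,
    where outcome is 1 if the event happened and 0 otherwise."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Invented forecasts: a blanket naysayer looks fine until one event happens,
# while a reasonably calibrated forecaster takes a much smaller hit.
naysayer   = [(0.05, 0), (0.05, 0), (0.05, 1)]
calibrated = [(0.10, 0), (0.20, 0), (0.70, 1)]

print(brier_score(naysayer))    # ~0.30 (lower is better)
print(brier_score(calibrated))  # ~0.05
```

A single lucky or eloquent hit barely changes either number; the score only becomes informative over many forecasts, which is exactly the point about datasets.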
You ask him: “When will the singularity happen?” “When will a machine pass the Turing test?” “When will machines do this or that?” His answer for anything not trivially possible is “never.” Naysaying is “boring,” algorithmically.
I don’t think Dreyfus is a generally good prophet. I think he made a great prediction in 1965, and that it would have been hard to see at the time that it was a good prediction. The lessons to draw, in my opinion, were “sometimes outsiders have very correct predictions”, and “some of the features of Dreyfus’s predictions (the specific examples, decomposition and understanding) are (weak) signs of good predictive ability”.
Ignore Dreyfus himself for the moment. A paper was published that made correct predictions, and gave correct explanations for them, at a time when most experts in the field disagreed. The question is, was there a better cognitive strategy or a better prediction algorithm those experts could have followed, which would have allowed them to recognize the rightness of that paper?
“Never” is not a testable prediction. Break down predictions into finite-time-horizon groups and judge each against the baseline of “nothing happens in the next n years”.
Much later edit: IlyaShpitser has correctly pointed out that my comment makes no sense.
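As a rough sketch of the finite-horizon framing proposed above, one could record, for each horizon, what the forecaster said and what the “nothing happens in the next n years” baseline would have said, then compare hit rates. All claims and numbers here are invented for illustration.

```python
def accuracy(forecasts):
    """forecasts: list of (predicted, actually_happened) booleans
    for claims restricted to one fixed horizon (say, 10 years)."""
    return sum(p == h for p, h in forecasts) / len(forecasts)

# Invented 10-year-horizon claims: this forecaster says "yes" to everything,
# the baseline says "nothing happens in the next 10 years" to everything.
forecaster = [(True, True), (True, False), (True, False)]
baseline   = [(False, True), (False, False), (False, False)]

print(accuracy(forecaster))  # ~0.33
print(accuracy(baseline))    # ~0.67
```

A forecaster is only interesting to the extent that they beat this trivial baseline within each horizon.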
Of course “never” is testable. The way to falsify it is to exhibit a counterexample. “Human beings will never design a heavier-than-air flying machine” (Lord Kelvin, 1895), “a computer will never beat the human world champion in chess,” etc. All falsified, therefore all testable. If anything, an infinite-horizon statement like “never” is more vulnerable to falsification, and therefore should get more “scientific respect.”
It’s only testable in one direction: “never” is testable but “ever” isn’t, if you like. I don’t have a formal argument to hand, but it vaguely seems to me that a hypothesis ought, preferably, to be falsifiable in both directions.