Hi Jessica. Nice post, and I agree with many of your points. Certainly, I believe, as you do, that a number of bad actors are wielding the specter of AGI sloppily and irresponsibly, either to consciously defraud people or because they have bought into something that speaks more to the messianic than to science. Perhaps ironically, one recurring debate I have had with Gary is that while he is vocally critical of exuberance over deep learning, he is himself partial to speaking rosily of nearish-term AGI and to claiming progress (or being on the verge of progress) towards it. I am considerably more skeptical.
While I enjoyed the post and think we agree on many points, I would like to respectfully note that I’ve been quoted here slightly out of context, and to supply that missing context. To be sure, I think your post is well written and honestly intentioned, and I know how easy it is to miss context in Twitter threads [especially as it seems that many tweets have been deleted from this thread].
Regarding my friend Gary Marcus: I like Gary and we communicate fairly regularly, but we don’t always agree on the science or on the discourse.
In this particular case, he was specifically insisting that he was “the first” to make a set of arguments (which contradicted my understanding of the history). When I say “Saying it louder ≠ saying it first.”, I am truly just pushing back on this specific point: the assertion that he said it first. Among others, Judea Pearl has argued for the limits of curve fitting far more rigorously and far earlier.
There is nothing dishonest or contradictory about agreeing with a broader point while disagreeing with a man’s claim to have originated it. I took exception to the hyperbole, not the message.
After the quote, your post notes: “but, this is essentially admitting that Marcus is correct, while still criticizing him for saying it”. What is there to “admit”? I myself have made similar critical arguments in technical papers, position papers, blog posts, and the popular press. Characterizing “agreement” as “admitting” something makes the false insinuation that I have somehow been on the wrong side of the debate.
For the record, I am acknowledged in Gary’s paper on the limitations of deep learning (which you reference here) for giving him a large amount of constructive feedback. I have also, perhaps in defiance of Adam Smith’s aphorism, been vocally critical of my own community within technical forums, recently presenting “Troubling Trends in Machine Learning Scholarship” at the ICML debates (https://arxiv.org/abs/1807.03341); it was subsequently published in CACM. I suspect this piece (as well as much of my writing at http://approximatelycorrect.com and in other formal position papers) is in the spirit of the sort of critical writing that you appear to encourage.
Further, I’d like to address the second part of the discussion. When I say “Saying X doesn’t solve Y is pretty easy. But where are your concrete solutions for Y?”, my point is that Gary doesn’t just push back on the false claims made about current technology that doesn’t do Y. He also sometimes makes public attacks on the people working on X, whose apparent crime is that they haven’t developed technical solutions to the grand challenge of Y. If failing to solve these particular moonshots (true reasoning, solving common sense, an elegant formulation of symbol manipulation and its synthesis with pattern recognition) is a crime, then Gary is just as guilty, and the attack ought to be levied with greater humility. These attacks strike me as inappropriate and misplaced (compared to the more reasonable push-back on misinformation in the public sphere). ***To be clear, while I understand why you might have drawn that conclusion from this half-tweet, I do not believe that one must have a solution in hand to levy criticism, and my writing and technical papers attest to this.***
Thanks a lot for the clarification, and sorry I took the quote out of context! I’ve added a note linking to this response.