I think the point being made there is different. For example, the contemporary question is, “how do we improve deep reinforcement learning?”, to which the standard answer is “we make it model-based!” (or, near-equivalently I’d say, “we make it hierarchical!”, since hierarchy is a broad approach to embedding a model). But people don’t know how to do model-based reinforcement learning in a way that works, and the first paper to suggest it appeared in 1991. If a person’s entire insight is that it needs to be model-based, it makes sense to mock them if they think they’re being bold or original; if a person’s insight is that the right shape of model is XYZ, then they are actually making a bold claim, because it could turn out to be wrong, and they might even be original. And this remains true even if, 5-10 years from now, everyone knows how to make deep RL model-based.
The point is not that the nonconformists were wrong—the revolutionary AI thing was indeed in the class of neural networks—the point is that someone is mistaken if they think that knowing which class the market / culture thinks is “revolutionary” gives them any actual advantage. You might bias towards working on neural network approaches, but so does everyone else; you’re just chasing a fad rather than holding onto a secret, even if the fad turns out to be correct. A secret looks like believing a thing about how to make neural networks work that other people don’t believe, and that thing turning out to be right.
Yes, indeed, I think your account makes sense.

However, the hard question to ask is: suppose you were writing that essay today, for the first time—would you choose AI / neural networks as your example? Or, to put it another way:
“These so-called nonconformists are really just conformists, and in any case they’re wrong.”
and
“These so-called nonconformists are really just conformists… they’re right, of course, they’re totally right, but, well… they’re not as nonconformist as they claim, is all.”
… read very differently, in a rhetorical sense.
And, to put it yet a third way: to say that what Eliezer meant was the latter, when what he wrote is the former, is not quite the same as saying that the former may be false, but the latter remains true. And if what Eliezer meant was the former, then it’s reasonable to ask whether we ought to re-examine the rest of his reasoning on this topic.
Mostly but not entirely tangentially:
The point is not that the nonconformists were wrong—the revolutionary AI thing was indeed in the class of neural networks—the point is that someone is mistaken if they think that knowing which class the market / culture thinks is “revolutionary” gives them any actual advantage. You might bias towards working on neural network approaches, but so does everyone else; you’re just chasing a fad rather than holding onto a secret, even if the fad turns out to be correct.
Well, but did people bias toward working on NN approaches? Did they bias enough? I’m given to understand that the current NN revolution was enabled by technological advances that were unavailable back then; is this the whole reason? And did anyone back then know or predict that with more hardware, NNs would do all the stuff they now do for us? If not—could they have? These aren’t trivial questions, I think; and how we answer them does plausibly affect the extent to which we judge Eliezer’s points to stand or fall.
Finally, on the subject of whether someone’s being bold and original: suppose that I propose method X, which is nothing at all like what the establishment currently uses. Clearly, this proposal is bold and original. The establishment rejects my proposal, and keeps on doing things their way. If I later propose X again, am I still being bold and original? What if someone else says “Like Said, I think that we should X” (remember, the establishment thus far continues to reject X)—are they being bold and original? More importantly—does it really matter? “Is <critic/outsider/etc.> being bold and/or original” seems to me to be a pointless form of discourse. The question that matters is whether the proposal is right: should we, in fact, move in that direction?
(I have often had such experiences, in fact, where I say “we should do X”, and encounter responses like “bah, you’ve said that already” or “bah, that’s an old idea”. And I think: yes, yes, of course I’ve said it before, of course it’s an old idea, but it’s an old idea that you are still failing to do! My suggestion is neither bold nor original, but it is both right and still ignored! It isn’t “new” in the sense of “this is the first time anyone’s ever suggested it”, but it sure as heck is “new” in the sense of “thus far, you haven’t done this, despite me and others having said many times that you should do it”! What place does “bold and original” have in an argument like this? None, I’d say.)
I think you’re misreading Eliezer’s article; even with major advances in neural networks, we don’t have general intelligence, which was the standard he was holding them to in 2007, not “state of the art on most practical AI applications.” He also stresses “people outside the field”—to a machine learning specialist, the suggestion “use neural networks” is not nearly enough to go on. “What kind?” they might ask, exasperated; or, even if you suggested “well, why not make it as deep as the actual human cortex?”, they might point out the ways in which backpropagation fails to work at that scale, without those defects having an obvious remedy. In context—the Seeing With Fresh Eyes sequence—it seems pretty clear that the mockery is aimed at thinking this is a brilliant new idea, as opposed to the thing that lots of people already think.
Where’s your impression coming from? [I do agree that Eliezer has been critical of neural networks elsewhere, but I think generally in precise and narrow ways, as opposed to broadly underestimating them.]