I also want to know this.
(This is part of a more general question: how much of the science cited in the Sequences holds up? Certainly nearly all the psychology has to be either discarded outright or tagged with “[replication needed]”, but what about the other stuff? The mockery of “neural networks” as the standard “revolutionary AI thing” reads differently today; was the fact that NNs weren’t yet the solution to (seemingly) everything essential to Eliezer’s actual points, or peripheral? How many of the conclusions drawn in the Sequences are based on facts which are, well, not factual anymore? Do any essential points have to be re-examined?)
I think the point being made there is different. For example, the contemporary question is "how do we improve deep reinforcement learning?", to which the standard answer is "we make it model-based!" (or, near-equivalently I'd say, "we make it hierarchical!", since hierarchy is a broad approach to model embedding). But people don't know how to do model-based reinforcement learning in a way that works, and the first paper to suggest it dates to 1991. If there's a person whose entire insight is that it needs to be model-based, it makes sense to mock them if they think they're being bold or original; if there's a person whose insight is that the right shape of model is XYZ, then they are actually making a bold claim, because it could turn out to be wrong, and they might even be original. And this remains true even if, 5–10 years from now, everyone knows how to make deep RL model-based.
The point is not that the nonconformists were wrong—the revolutionary AI thing was indeed in the class of neural networks—the point is that someone is mistaken if they think that knowing which class the market / culture thinks is “revolutionary” gives them any actual advantage. You might bias towards working on neural network approaches, but so is everyone else; you’re just chasing a fad rather than holding onto a secret, even if the fad turns out to be correct. A secret looks like believing a thing about how to make neural networks work that other people don’t believe, and that thing turning out to be right.
Yes, indeed, I think your account makes sense.
However, the hard question to ask is: suppose you were writing that essay today, for the first time—would you choose AI / neural networks as your example? Or, to put it another way:
“These so-called nonconformists are really just conformists, and in any case they’re wrong.”
and
“These so-called nonconformists are really just conformists… they’re right, of course, they’re totally right, but, well… they’re not as nonconformist as they claim, is all.”
… read very differently, in a rhetorical sense.
And, to put it yet a third way: to say that what Eliezer meant was the latter, when what he wrote was the former, is not quite the same as saying that the former may be false but the latter remains true. And if what Eliezer meant was the former, then it's reasonable to ask whether we ought to re-examine the rest of his reasoning on this topic.
Mostly but not entirely tangentially:
Well, but did people bias toward working on NN approaches? Did they bias enough? I'm given to understand that the current NN revolution was enabled by technological advances that were unavailable back then; is this the whole reason? And did anyone back then know or predict that, with more hardware, NNs would do all the stuff they now do for us? If not—could they have? These aren't trivial questions, I think; and how we answer them does plausibly affect the extent to which we judge Eliezer's points to stand or fall.
Finally, on the subject of whether someone’s being bold and original: suppose that I propose method X, which is nothing at all like what the establishment currently uses. Clearly, this proposal is bold and original. The establishment rejects my proposal, and keeps on doing things their way. If I later propose X again, am I still being bold and original? What if someone else says “Like Said, I think that we should X” (remember, the establishment thus far continues to reject X)—are they being bold and original? More importantly—does it really matter? “Is <critic/outsider/etc.> being bold and/or original” seems to me to be a pointless form of discourse. The question is: is it right to propose that we should move in some direction?
(I have often had such experiences, in fact, where I say “we should do X”, and encounter responses like “bah, you’ve said that already” or “bah, that’s an old idea”. And I think: yes, yes, of course I’ve said it before, of course it’s an old idea, but it’s an old idea that you are still failing to do! My suggestion is neither bold nor original, but it is both right and still ignored! It isn’t “new” in the sense of “this is the first time anyone’s ever suggested it”, but it sure as heck is “new” in the sense of “thus far, you haven’t done this, despite me and others having said many times that you should do it”! What place does “bold and original” have in an argument like this? None, I’d say.)
I think you're misreading Eliezer's article; even with major advances in neural networks, we don't have general intelligence, which was the standard he was holding them to in 2007, not "state of the art on most practical AI applications." He also stresses the "people outside the field"—to a machine learning specialist, the suggestion "use neural networks" is not nearly enough to go on. "What kind?" they might ask, exasperated; and even if you suggested "well, why not make it as deep as the actual human cortex?", they might point out the ways in which backpropagation fails to work at that scale, without those defects having any obvious remedy. In context—the Seeing With Fresh Eyes sequence—it seems pretty clear that the essay is about thinking this is a brilliant new idea, as opposed to the thing that lots of people already think.
Where’s your impression coming from? [I do agree that Eliezer has been critical of neural networks elsewhere, but I think generally in precise and narrow ways, as opposed to broadly underestimating them.]
I would also be interested in this. Being able to update our canon in this way strikes me as one of the key things I want the LessWrong community to be able to do. If people have any UI suggestions, or ways for us to facilitate that kind of updating in a productive way, I would be very interested in hearing them.
It so happens that I’ve given some thought to this question.
I had the idea (while reading yet another of the innumerable discussions of the replication crisis) of adding, to readthesequences.com, a “psychoskeptic mode” feature—where you’d click a button to turn on said mode, and then on every page you visited, you’d see every psychology-related claim red-penned (with, perhaps, annotations or footnotes detailing the specific reasons for skepticism, if any).
Doing this would involve two challenges, one informational and one technical; and, unfortunately, the former is more tedious and also more important.
The informational challenge is simply the fact that someone would have to go through every single essay in the Sequences, and note which specific parts of each post—which paragraphs, which sentences, which word ranges—constitute claims of scientific (and, in this case specifically, psychological) fact. Quite a tedious job, but without this data, the whole project is moot.
The technical challenge consists, first, of actually inserting the appropriate markup into the source files (still tedious, but a whole order of magnitude less so), and, second, of implementing the toggle feature and the UI for it (trivial on both counts).
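To make the "trivial on both counts" part concrete, here is a minimal sketch of what the toggle might look like (TypeScript, running in the browser). Every name in it (the psych-claim class, the storage key, the button) is a placeholder for illustration, not anything that exists on readthesequences.com today:

```typescript
// Minimal sketch of a "psychoskeptic mode" toggle. It assumes the markup pass
// has wrapped each psychology-related claim in something like:
//   <span class="psych-claim" title="why this claim is suspect">…</span>
// All class names and keys here are invented placeholders.

const STORAGE_KEY = "psychoskeptic-mode";

function setPsychoskepticMode(enabled: boolean): void {
  // Red-pen (or un-red-pen) every annotated claim by toggling one class on
  // <body>; the actual styling would live in CSS, e.g.:
  //   body.psychoskeptic .psych-claim { color: #b00; text-decoration: underline; }
  document.body.classList.toggle("psychoskeptic", enabled);
  localStorage.setItem(STORAGE_KEY, enabled ? "1" : "0");
}

function installToggleButton(): void {
  const button = document.createElement("button");
  button.textContent = "Psychoskeptic mode";
  button.addEventListener("click", () => {
    const currentlyOn = document.body.classList.contains("psychoskeptic");
    setPsychoskepticMode(!currentlyOn);
  });
  document.body.prepend(button);

  // Restore the reader's previous choice on page load.
  setPsychoskepticMode(localStorage.getItem(STORAGE_KEY) === "1");
}

installToggleButton();
```

The point of hanging everything off a single class on the body is that turning the mode off is just removing that class; the annotations themselves never need to be edited in or out of the page.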
(And that's just for claims of fact, per se! It would be a whole additional informational challenge to identify the parts of the reasoning in each essay, and the conclusions, that depend in some way on those claims. That may be too ambitious for now; I'd be happy if the basic version of my idea could be implemented.)
Now, if there’s community interest in such a project, I can, with relative ease, put together a simple platform for crowdsourcing the informational challenge (it would be something as simple as adding “talk pages” to readthesequences.com, or something along these lines, which would not be terribly difficult to do). Or, alternatively, I would be more than happy to lend advice and support to some similar effort by you (i.e., the LessWrong team), or anyone else.
The question is whether this is worth doing at all, and what we can reasonably expect the result to be. Are the Sequences worth updating in this sort of way? I genuinely don’t know (even though, as I’ve said many times, I consider them to be a supremely useful and excellent body of writing).
I don't think just tagging the claims would be very valuable. To be valuable, the website would need to provide information about how well each particular claim holds up.
This could take a while, and it'd be important to set things up so that if someone 'abandons' the project, their work is still available. If I decided to read (and take the necessary notes on), if not a "page" a day, then at least 7 "pages" a week, then that part of the project would be complete… in a year. (The TOC says 333 pages; at 7 a week, that's about 48 weeks.)*
A way that might not catch everything would be to search readthesequences.com for "psy" (it's short, so it should get around most spelling mistakes): https://www.readthesequences.com/Search?q=psy&action=search
A general ‘color this word red’ feature would be interesting.
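As a rough sketch of that idea (purely illustrative; the function and class names are invented, and it assumes the word contains no regex metacharacters), something like this would walk a page's text and wrap each match in a red span:

```typescript
// Rough sketch of a "color this word red" feature: collect the page's text
// nodes, then wrap every case-insensitive match of `word` in a red span.
// "flagged-word" is an invented class name; a real version would also skip
// <script> and <style> nodes.
function colorWordRed(word: string, root: Node = document.body): void {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const textNodes: Text[] = [];
  while (walker.nextNode()) {
    textNodes.push(walker.currentNode as Text);
  }

  const pattern = new RegExp(`(${word})`, "gi");
  for (const node of textNodes) {
    const pieces = node.data.split(pattern);
    if (pieces.length === 1) continue; // no match in this text node

    const fragment = document.createDocumentFragment();
    for (const piece of pieces) {
      if (piece.toLowerCase() === word.toLowerCase()) {
        const span = document.createElement("span");
        span.className = "flagged-word";
        span.style.color = "red";
        span.textContent = piece;
        fragment.appendChild(span);
      } else if (piece.length > 0) {
        fragment.appendChild(document.createTextNode(piece));
      }
    }
    node.replaceWith(fragment);
  }
}

// Example: flag one word across the whole page.
colorWordRed("psychology");
```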
*I might do this in a Google Doc. Alternate tool suggestions are welcome. Sharing is available upon request (and upon providing an email address).
I have added Wikipedia-style Talk pages to readthesequences.com. (Example.)
You (or anyone else who wishes to contribute) should feel free to use these talk pages to post notes, commentary, or anything else relevant. (If you prefer to use Google Docs, or any other such tool, to do the required editing, then I’d ask that you make the doc publicly viewable, and place a link to it on the relevant Sequence post’s Talk page.)
(A list of every single essay that is part of Rationality:A–Z—including the interludes, introductions, etc.—along with links to Talk pages, can be found here.)
Edit: You’ll also find that you can now view each page’s source, in either native wiki format or Markdown format, via links at the top-left of the page.