Socially helpful. For the rest, I’ll claim you’re making an error, though the confusion is entirely my fault. I’m claiming there’s a type of article you should be citing in your intros. Luke reads a summary of those articles and says, “Wow, do people think we’re weird? We’re so mainstream.” I say, “Most of you think you’re weird, and you probably did too until recently; it’s on you to know where you’re mainstream.” You say, “Dummy, he’s clearly interested in that, since he already mentioned the first example of it.”
Fair description? No, of course not, Luke has read more than a summary. He’s read stuff. Mainstream stuff?
Anyway, I can’t say I have a good knowledge of the precise thing I mean by “mainstream” in your field, but I meant something pretty specific:
Very recent
Research article
Not by you or your buddies
Not too general
Not just “classic”
Similar methodology or aims (I mean extremely similar except in a few ways)
High impact one way or another
What if you’re too novel to come up with articles meeting all those criteria? There’s an answer for that. Probably I should be clearer:
“LessWrong should have a one-page answer for the question: What part of the established literature are you building upon and what are you doing that is novel?”
is not even close to the same as
“suggests mentioning the most popular example of an idea in an article’s, um, history-of-an-idea section” if you consider “Oxford Handbook of Thinking and Reasoning” to be “the first example of it”.
Anyway, it’s probably different in philosophy, so I’ll retract my claim. I’ve never seen any good introduction from LessWrong, and this is not a start. It’s such a pre-start that it implies the opposite (“I’m rich, I own my own car”), with a jab at the opposition (“Why don’t they see I’m rich, I own my own car”). Anyway, that’s my perception.
I was very unclear in what I wrote. I also find what you wrote less than perfectly clear. Efficient Scholarship will not equip one to do research in many fields, but in many fields, given a narrow enough goal, you’d be the equal of an average master’s student.
(This comment precedes my edit.) The abovementioned lukeprog comment makes the point that Luke is an academic in his weighting of priority over exposure when discussing the genealogy of an idea. Apologies for my previous lack of clarity.
The overwhelming majority of LW lurkers will have barely dipped their toes in the Sequences. The more committed commenters have certainly not universally read them all. Luke has read dozens of books on rationality alone.
Upon reflection we probably disagree on very little seeing as you were originally addressing an average LW reader’s view on the origins of the ideas LW talks about, not a specific person’s. If you could expand upon your last paragraph I’d be grateful. The article (and as of writing, the comments) do not mention philosophy (by name) at all.
Yes, Lukeprog seems mostly academically likeable, and the text snippet in the comments on this page from the paper he’s prepping is more like what I would hope for.
I was using “philosophy” as a byword for something far from the mainstream, unlike what Luke may be closer to. The specific reference is that it’s the label Luke used (I believe) before explicitly replacing it with “Cog Sci”.
Anyway, I don’t disagree with you much (within the range of my having misinterpreted), so I’ll skip the meta-talking and just try to say what I am thinking.
I try to imagine myself as a reviewer of a Singularity Institute paper. I’m not an expert in that field, so I’m trying to translate myself, and probably failing. Nonetheless, sometimes I would refuse to review the paper. In SI’s case, that would basically happen because I thought the journal wasn’t worth my time, or the intro was such a turn-off that I decided the paper was not worth deciphering. I’m assuming, in these cases, that I’m a well-meaning but weak reviewer, in the sense that this is not my exact area of expertise. In these cases, I really need a good intro, and typically I would skim all the cited papers in the intro once I committed to reviewing. Reading those papers should make me feel like the paper I then review is an incremental improvement over that collection. People talk about “least publishable units” a lot, but there’s probably also a most publishable unit, with rare exceptions. If it’s one of those exceptions, then it should be published in Science, say (or another high-profile general-interest journal).
So, I now imagine myself as a researcher at the Singularity Institute (again, badly translating myself). I have ideas I think are pretty good that are also a little too novel and maybe a little too contrarian. I have a problem in that the points are important enough to be high-profile, but my evidence is maybe not correspondingly fantastic (no definitive experiments, etc.); in other words, I’m coming out of left field a bit. I’d first submit to quite high-profile journals that occasionally publish wacky stuff (PNAS is the classic example). One such publication would be a huge deal, and failure often leads to PLoS ONE (which is why its impact factor is relatively high despite its acceptance rate being very high; not perfectly on topic for you, depending). I would simultaneously put out papers that had lesser but utterly inarguable results in appropriately specialist journals; this would convince people I was sane (but would probably diverge pretty strongly from my true interests). So, this may seem a bit like what the Singularity Institute is doing (publishing more mainstream articles outside FAI), but the bar seems (to me, in my ignorance of this area) set too low. A lot of really low-impact articles do not help. I’d look for weird areas of overlap where I could throw my best idea into a rigorous framework and get published, because the framework is sane and the wackiness of the premise is then fine (two Singularity-ish examples I’ve seen: economics of uploads, computational neuroscience of uploads).
If this is all totally redundant to your thinking already, no worries, and I won’t be shocked. Cheers and small apologies to Luke.
I try to imagine myself as a reviewer of a Singularity Institute paper. I’m not an expert in that field, so I’m trying to translate myself, and probably failing.
To become an expert in the field you will have to at least once build an AI that goes FOOM, or instead read Omohundro’s ‘The Basic AI Drives’ paper and cite it as corroborative evidence for any of your claims that suffer from a lack of knowledge of how actual AGI is to come about. :-)