It sounds like you think I’m nitpicking relatively minor points while ignoring the main significance of the paper. What do you think that main significance is?
The paper has an abstract and clearly-written discussions, which you presumably read. I know that you know perfectly well what the implications of the scaling curves and meta-learning are for AI risk and OA’s AGI research programme. That your response is to feign Socratic ignorance and sealion me here, disingenuously asking, ‘gosh, I just don’t know, gwern, what does this paper show other than a mix of SOTA and non-SOTA performance, I am but a humble ML practitioner plying my usual craft of training and finetuning’, shows what extreme bad faith you are arguing in, and it is, sir, bullshit and I will have none of it.
If you think that this does not show what it shows about DL scaling and meta-learning, have the guts to say so; don’t meander around complaining about which of dozens of benchmarks you thought a nearly 100-page paper should’ve talked more about, then retreat to feigned ignorance when challenged.
When you boil it all down, Nostalgebraist is basically Reviewer #3.
> That your response is to feign Socratic ignorance and sealion me here, disingenuously asking, ‘gosh, I just don’t know, gwern, what does this paper show other than a mix of SOTA and non-SOTA performance, I am but a humble ML practitioner plying my usual craft of training and finetuning’, shows what extreme bad faith you are arguing in, and it is, sir, bullshit and I will have none of it.
Unless I’m missing some context from previous discussions, this strikes me as extremely antagonistic, uncharitable, and uncalled for. It pattern-matches to the kind of shit I would expect to see on the political side of Reddit, not LW. Strongly downvoted.
Since I’m not feigning ignorance—I was genuinely curious to hear your view of the paper—there’s little I can do to productively continue this conversation.
Responding mainly to register (in case there’s any doubt) that I don’t agree with your account of my beliefs and motivations, and also to register my surprise at the confidence with which you assert things I know to be false.