It seems that the research team at Microsoft that trained Turing-NLG (the largest non-sparse language model other than GPT-3, I think) never published a paper on it. They only published a short blog post, back in February. Is this normal? The researchers have an obvious incentive to publish such a paper, which would probably be cited a lot.
[EDIT: hmm maybe it’s just that they’ve submitted a paper to NeurIPS 2020.]
[EDIT 2: NeurIPS permits putting the submission on arXiv beforehand, so why haven’t they?]