For context, this timing implies that all of these are pre-GPT-3/CLIP/DALL-E/MLP-Mixer & my scaling-hypothesis writeup, possibly pre-MuZero and much of the recent planning/model-based DRL work (eg MuZero Unplugged, Decision Transformer), and pre much of the Distill.pub Circuits work & the semi-supervised revolution. Already some of the quotes are endearingly obsolete:
“[For instance] using convolutional attention mechanisms and applying it to graphs structures and training to learn how to represent code by training it on GitHub corpora…that kind of incremental progress would carry us to [...] superintelligence.” (P21).
(Convolutions? OK grandpa. But he’s right that the program synthesis Transformers trained on Github are pretty sweet.*) Unfortunately still contemporary are the pessimistic quotes:
“Those people who say that’s going to continue are saying it as more of a form of religion. It’s blind faith unsupported by facts. But if you have studied cognition, if you have studied the properties of language… [...] you recognise that there are many things that deep learning [...] right now isn’t doing.” (P23).
“My hunch is that deep learning isn’t going anywhere. It has very good solutions for problems where you have large amounts of labelled data, and fairly well-defined tasks and lots of compute thrown at problems. This doesn’t describe many tasks we care about.” (P10).
“If you think you can build the solution even if you don’t know what the problem is, you probably think you can do AI” (P2).
I assume if interviewed now, they’d say the same things but even more loudly and angrily—the typical pessimism masquerading as intellectual seriousness.
* I wrote this before OA/GH Copilot, but it makes the point even more strongly.
Here is just a quick response. The intended point of the paper was to let readers engage with the position opposite the one they currently hold. Read with attention to the details and to the arguments that could change one's mind, it is unlikely to strengthen the reader's existing views; instead, it should make the reader more uncertain about their position.
There is considerable fuzziness and speculation at each position along the spectrum from optimism to pessimism. No position depended on a few papers alone, so I disagree with the claim that progress within the last year will make the analysis completely irrelevant and tip the balance clearly to one side. Worldviews which are non-falsifiable at this stage played a role in the views on both sides.
I can confirm that the experts I interviewed were neither loud nor angry. We should probably not assume (whichever side of the debate we support) that the views of anonymous experts who do not share our own are not rooted in intellectual seriousness.
I so appreciate your candid reaction.
Thanks for reading my paper Gwern!