Thanks! What’s a phyg attractor? Google turns up nothing.
To say a bit more about my skepticism—there are various reasons why one might want to focus on agent foundations and similar stuff even if you also think that deep learning is about to boom and be super effective and profitable. For example, you might think the deep-learning-based stuff is super hard to align relative to other paradigms. Or you might think that we won’t be able to align it until we are less confused about fundamental issues, and the way to deconfuse ourselves is to think in formal abstractions rather than messing around with big neural nets. Or you might think that both ways are viable but the formal abstraction route is relatively neglected. So the fact that MIRI bet on agent foundations stuff doesn’t seem like strong evidence that they were surprised by the deep learning boom, or at least, more surprised than their typical contemporaries.
Skepticism of what?
Like I said in the parent comment—investing in AF can be a good bet even if the probability of success is low. And I mostly agree with your rationalizations there, but they are post-hoc. I challenge you to find early evidence (ideally from 2010 or earlier—for reasons explained in a moment) documenting that MIRI leaders “also think that deep learning is about to boom and be super effective and profitable”.
The connectionist futurists (Moravec/Kurzweil) were already predicting a timeline for AGI in the 2020s via brain reverse engineering. EY/MIRI critiqued that, implicitly and explicitly, and literally invested time/money/resources in hiring and training up people (arguably a whole community!) in knowledge and beliefs very different from—and mostly useless for understanding—the connectionist/DL path to AGI.
So if you truly believed in 2010, after hearing a recent neuroscience PhD’s first presentation on how they were going to reverse engineer the brain (DeepMind), and you actually gave that even a 50% chance of success—do you truly believe it would have been wise to invest the way MIRI did? And to be hostile to connectionist/DL approaches, as they still are? Do you not think they at least burned some bridges? Have you seen EY’s recent thread, where he attempts a blatant revisionist-history critique of Moravec? (Moravec actually claimed AGI around 2028, not 2010, which seems surprisingly prescient and on track to me now in 2021.)
Again, quoting Rob from above:
13. People haven’t tried very hard to find non-MIRI-ish approaches that might work.
Which I read as dismissing the DL-friendly alignment research tracks: IRL/CIRL/value learning, etc. And EY explicitly dismisses most alignment research in some other recent thread.
I don’t know what to believe yet; I appreciate the evidence you are giving here (in particular your experience as someone who has been around in the community longer than me). My skepticism was about the inference from “MIRI did abstract AF research” --> “MIRI thought deep learning would be much less effective than it in fact was”.
I do remember reading some old posts from EY about connectionism that suggest he at least failed to predict the deep learning boom in advance. That’s different from confidently predicting it wouldn’t happen, though.
I too think that Moravec et al deserve praise for successfully predicting the deep learning boom and having accurate AI timelines 40 years in advance.
Old LessWrong meme—phyg is “cult” in rot13. For a while people were making “are we a cult” posts so much that it was actually messing with LessWrong’s SEO. Hence phyg.
Ask Google what LW is—i.e. just start typing “lesswrong” or “lesswrong is a” and see the auto-complete. Using the word ‘phyg’ is an LW community-norm attempt to re-train Google.
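For anyone who wants to check the encoding itself: rot13 just rotates each letter 13 places, so applying it twice gets you back where you started. A minimal sketch in Python (the `rot13` helper name here is just for illustration):

```python
# Minimal sketch: confirm that "phyg" is just rot13 of "cult".
# rot13 rotates each letter 13 places, so it is its own inverse.
import codecs

def rot13(text: str) -> str:
    # Python ships a built-in "rot_13" text codec, so no hand-rolled cipher is needed.
    return codecs.encode(text, "rot_13")

print(rot13("cult"))  # -> phyg
print(rot13("phyg"))  # -> cult (rot13 applied twice is the identity)
```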