Interesting—it’s not clear to me how that dialogue addresses the common misconception.
My brief zero-effort counter-argument to that dialogue is: it’s hard to make rockets or airplanes safe without first mastering aerospace engineering.
So I think it’s super obvious that EY/MIRI/LW took the formalist side over the connectivist side. I discuss this more explicitly in the intro to my most recent 2021 post, which links to my 2015 post on the closely connected ULM vs EM brain theories, which in turn links to my 2010 post discussing a half-baked connectivist alignment idea, with some interesting early debate vs LW formalists (and also my successful prediction of the first computer Go champion, 5 years in advance).
So I’ve been here a while, and I even had a number of conversations with MIRI’s two-person ML-but-not-DL alignment group (Jessica & Jack) when that was briefly a thing. It would take extremely ambitious revisionist history to claim that EY/MIRI didn’t implicitly, if not explicitly, bet against connectivism.
So that’s why I asked Rob about point 13 above: it seems unjustifiably dismissive of the now-dominant connectivist-friendly alignment research (and that dismissal substantiates my point).
But I’m not here to get into some protracted argument about this. So why am I here? Because the event horizons of phyg attractors are obvious historical Schelling points for meeting other interesting people, so I loiter there. Speaking of which, we should chat: I really liked your Birds, Brains, Planes post in particular; I actually wrote up something quite similar a while ago.
Thanks! What’s a phyg attractor? Google turns up nothing.
To say a bit more about my skepticism—there are various reasons why one might want to focus on agent foundations and similar stuff even if you also think that deep learning is about to boom and be super effective and profitable. For example, you might think the deep-learning-based stuff is super hard to align relative to other paradigms. Or you might think that we won’t be able to align it until we are less confused about fundamental issues, and the way to deconfuse ourselves is to think in formal abstractions rather than messing around with big neural nets. Or you might think that both ways are viable but the formal abstraction route is relatively neglected. So the fact that MIRI bet on agent foundations stuff doesn’t seem like strong evidence that they were surprised by the deep learning boom, or at least, more surprised than their typical contemporaries.
Skepticism of what?
Like I said in the parent comment: investing in AF (agent foundations) can be a good bet even if it has a low probability of success. And I mostly agree with your rationalizations there, but they are post hoc. I challenge you to find early evidence (ideally 2010 or earlier, for reasons explained in a moment) documenting that MIRI leaders did “also think that deep learning is about to boom and be super effective and profitable”.
The connectivist-futurists (Moravec/Kurzweil) were already predicting a timeline for AGI in the 2020s through brain reverse engineering. EY/MIRI implicitly/explicitly critiqued that, and literally invested time/money/resources in hiring/training up people (a whole community, arguably!) in knowledge/beliefs very different from, and mostly useless for understanding, the connectivist/DL path to AGI.
So suppose it was 2010, you had just heard some recent neuroscience PhD’s first presentation on how they were going to reverse engineer the brain (DeepMind), and you actually gave that even a 50% chance of success: do you truly believe it would have been wise to invest the way MIRI did? And to be hostile to connectivist/DL approaches, as they still are? Do you not think they at least burned some bridges? Have you seen EY’s recent thread, where he attempts a blatant revisionist-history critique of Moravec? (Moravec actually claimed AGI around 2028, not 2010, which seems surprisingly on-track and prescient to me now in 2021.)
Again, quoting Rob from above:
13. People haven’t tried very hard to find non-MIRI-ish approaches that might work.
Which I read as dismissing the DL-friendly alignment research tracks: IRL/CRL/value learning, etc. And EY explicitly dismisses most alignment research in some other recent thread.
I don’t know what to believe yet; I appreciate the evidence you are giving here (in particular your experience as someone who has been around the community longer than I have). My skepticism was about the inference from “MIRI did abstract AF research” to “MIRI thought deep learning would be much less effective than it in fact was.”
I do remember reading some old posts from EY about connectionism that suggest he at least failed to predict the deep learning boom in advance. That’s different from confidently predicting it wouldn’t happen, though.
I too think that Moravec et al. deserve praise for successfully predicting the deep learning boom and having accurate AI timelines 40 years in advance.
Old LessWrong meme: “phyg” is “cult” in rot13. For a while people were making “are we a cult” posts so often that it was actually messing with LessWrong’s SEO. Hence “phyg”.
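(If you want to check the encoding for yourself, here’s a minimal Python sketch; it assumes nothing beyond the standard library’s rot13 codec.)

```python
import codecs

# rot13 shifts each letter 13 places along the alphabet, so encoding and
# decoding are the same operation: applying it twice returns the original.
print(codecs.encode("phyg", "rot_13"))  # -> cult
print(codecs.encode("cult", "rot_13"))  # -> phyg
```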
Ask Google what LW is, i.e. just start typing “lesswrong” or “lesswrong is a” and look at the autocomplete. Using the word “phyg” is an LW community-norm attempt to retrain Google.