EY’s belief distribution about NNs and early DL from over a decade ago, and how that reflects on his predictive track record, has already been extensively litigated in other recent threads like here. I mostly agree that EY circa 2008 and later was somewhat cautious/circumspect about making explicitly falsifiable predictions, but he certainly seemed to exude skepticism, which is consistent with my interpretation of his actions.
That being said, I also largely agree that MIRI’s research path was chosen specifically to be more generic than any particular viable route to AGI. But one could also consider that something of a failure or missed opportunity compared with investing more in studying neural networks, the neuroscience of human alignment, etc.
But I’ve always said (perhaps not in public, but nonetheless) that I thought MIRI had only a very small chance of success, but that it was still a reasonable bet for at least one team to make, just in case the connectionists were all wrong about this DL thing.