We don’t have any obstacle left in mind that we expect to take more than 6 months to overcome once efforts are invested to take it down.
+1. While I will also respect the request to not state them in the comments, I would bet that you could sample 10 ICML/NeurIPS/ICLR/AISTATS authors and learn about >10 well-defined, not entirely overlapping obstacles of this sort.
I don’t want people to skim this post and get the impression that this is a common view in ML.
The problem with asking individual authors is that most researchers in ML don’t have a wide enough perspective to realize how close we are. Over the past decade of ML, people in the trenches have almost always thought their research was going more slowly than it actually was, because only a few researchers have gears-level models broad enough to plan the whole thing in their heads. If you aren’t trying to run the search for the foom-grade model in your head at all times, you won’t see it coming.
That said, they’d all be right about what bottlenecks there are. Just not how fast we’re gonna solve them.
The fact that Google essentially panicked and rushed an internal overhaul when ChatGPT dropped is a good example of this. Google has very competent engineers and a strong interest in anticipating competition, and they were working on the same problem, yet they clearly did not see this coming, despite it being the biggest threat to their monopoly in a long time.
Similarly, two days ago I hung out with some computer scientists working on natural language processing. They had been utterly blindsided by it, and were resentful of it, because a lot of problems they had been banging their heads against, and had considered unsolvable in the near future, had simply been solved overnight. They were expressing concern that their department, whose approach until just now had been considered decent and cutting-edge, might be defunded and closed down.
I am not in computer science; I can only observe this from the outside. But I am very much seeing that confident statements about limitations made by supposed experts have repeatedly become worthless within years, and that people are blindsided by the accelerations and achievements of those working in closely related fields. I also see that when people in related fields explain how novel systems work, their explanations often describe how those systems worked a year or two ago, and are no longer accurate in ways that may seem subtle at first but make a huge difference.
I don’t want people to skim this post and get the impression that this is a common view in ML.
So you’re saying that in ML, there is a view that there are obstacles a well-funded lab can’t overcome in 6 months.
For what it’s worth, I do think that’s true. There are some obstacles that would be incredibly difficult to overcome in 6 months, for anyone. But they are few, and dwindling.
Yes.
Such a survey was done recently, IIRC. I don’t remember the title or authors but I remember reading through it to see what barriers people cited, and being unimpressed. :( I wish I could find it again.
This one? https://link.springer.com/article/10.1007/s13748-021-00239-1
LW discussion https://www.lesswrong.com/posts/GXnppjWaQLSKRvnSB/deep-limitations-examining-expert-disagreement-over-deep
I think so, thanks!