The problem with asking individual authors is that most researchers in ML don’t have a wide enough perspective to realize how close we are. Over the past decade of ML, people in the trenches have almost always thought their research was going slower than it actually was, because only a few researchers have broad enough gears-level models to plan the whole thing in their heads. If you aren’t trying to run the search for the foom-grade model in your head at all times, you won’t see it coming.
That said, they’d all be right about what bottlenecks there are. Just not how fast we’re gonna solve them.
The fact that Google essentially panicked and rushed an internal overhaul when ChatGPT dropped is a good example of this. Google has very competent engineers and a strong interest in anticipating competition, and they were working on the same problem, yet they clearly did not see this coming, despite it being the biggest threat to their monopoly in a long time.
Similarly, I hung out with some computer scientists working on natural language processing two days ago. They had been utterly blindsided by it, and were resentful of it, because a lot of problems they had been banging their heads against and considered unsolvable in the near future had simply, overnight, been solved. They were expressing concern that their department, whose approach until just now had been considered decent and cutting-edge, might be defunded and closed down.
I am not in computer science; I can only observe this from the outside. But I am very much seeing that confident statements by supposed experts about limitations have repeatedly become worthless within years, and that people are blindsided by the accelerations and achievements of those working in closely related fields. I am also seeing that explanations of how novel systems work, offered by people in related fields, often describe how these systems worked a year or two ago, and are no longer accurate in ways that seem subtle at first but make a huge difference.