5. What is the update / implication of this, in your opinion?
Personal opinion:
Progress in model-based RL is far more relevant to getting us closer to AGI than other fields such as NLP, image recognition, neuroscience, or ML hardware. I worry that once the research community shifts its focus towards RL, the AGI timeline will collapse: not necessarily because there are no more critical insights left to be discovered, but because it is fundamentally the right path to work on, and whatever obstacles remain will buckle quickly once we throw enough warm bodies at them. I think (and this is highly controversial) that the focus on NLP and Vision Transformers has served as a distraction for a couple of years and has actually delayed progress towards AGI.
If curiosity-driven exploration gets thrown into the mix and StarCraft/Dota gets solved (for real this time) with data efficiency comparable to that of humans, that would be a shrieking fire alarm to me (though not to many other people, I imagine, since “this has all been done before”).
Isn’t this paper already a shrieking fire alarm?