To the extent that one might not have predicted that scientists hold these views, I can see why this paper might cause a positive update on brain-like AGI.
However, technological development is not a zero-sum game. Opportunity or enthusiasm in neuroscience doesn't in itself make prosaic AGI less likely, and I don't think any of the arguments provided are knockdown arguments against ANNs leading to prosaic AGI.
I don't find arguments that human-level intelligence is unprecedented outside of humans particularly convincing, in part because of the analogy to the "god of the gaps". Many predictions about what computers can't do have been falsified, sometimes in unexpected ways (e.g., arguments that AI would never be able to make art). Moreover, the observation that "more is different" in AI, and the development of single-shot models, seem like powerful arguments about the potential of prosaic AI systems when scaled.
Completely agreed!
I believe there are two distinct arguments at play in the paper and that they are not mutually exclusive. I think the first is “in contrast to the optimism of those outside the field, many front-line AI researchers believe that major new breakthroughs are needed before we can build artificial systems capable of doing all that a human, or even a much simpler animal like a mouse, can do” and the second is “a better understanding of neural computation will reveal basic ingredients of intelligence and catalyze the next revolution in AI, eventually leading to artificial agents with capabilities that match and perhaps even surpass those of humans.”
The first argument can be read as a reason to negatively update on prosaic AGI (unless you see these ‘major new breakthroughs’ as also being prosaic) and the second argument can be read as a reason to positively update on brain-like AGI. To be clear, I agree that the second argument is not a good reason to negatively update on prosaic AGI.
Understood. Maybe if the first argument were more concrete, we could examine its predictions. For example, what fundamental limitations exist in current systems? What should a breakthrough accomplish (at least conceptually) in order to move us into the new paradigm?
I think it's reasonable that understanding the brain better may yield insights, but I find Paul's comment about the return on existing insights diminishing over time persuasive. Technologies like DishBrain seem exciting and might change that trend?