I actually think another lesson from both evolution and LLMs is that it might not require much or any novel philosophy or insight to create useful cognitive systems, including AGI. I expect high-quality explicit philosophy to be one way of making progress, but not the only one.
Evolution itself did not do any philosophy in the course of creating general intelligence, and humans themselves often manage to grow intellectually and get smarter without doing natural philosophy, explicit metacognition, or deep introspection.
So even if LLMs and other current DL paradigm methods plateau, I think it’s plausible, even likely, that capabilities research like Voyager will continue making progress for a lot longer. Maybe Voyager-like approaches will scale all the way to AGI, but even if they also plateau, I expect that there are ways of getting unblocked other than doing explicit philosophy of intelligence research or massive evolutionary simulations.
In terms of responses to arguments in the post: it’s not that there are no blockers, or that there’s just one thing we need, or that big evolutionary simulations will work or be feasible any time soon. It’s just that explicit philosophy isn’t the only way of filling in the missing pieces, however large and many they may be.
Related: "There are always many ways through the garden of forking paths, and something needs only one path to happen."