I’m not denying your point, Caledonian—right now, our best conception of a test for smarts in the sense we want is the Turing test, and the Turing test is pretty poor. If we actually understood intelligence, we could answer your questions. But as long as we’re all being physicalists here, we’re obliged to believe that the human brain is a computing machine—special-purpose, massively parallel, but almost certainly Turing-complete and no more. And by analogy with the computing machines we build, we should expect to be able to scale the algorithm to bigger problems.
I’m not saying it’s practical. It could be that the obvious scalings would be like scaling Bogosort. But it would seem to be special pleading to claim it was impossible in theory.
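To make the Bogosort analogy concrete: Bogosort does terminate with probability 1, so it "works in theory," but its expected runtime grows factorially with input size. A minimal sketch (the function names here are my own, just for illustration):

```python
import random

def is_sorted(xs):
    """True if the list is in non-decreasing order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def bogosort(xs):
    """Shuffle until sorted.

    Expected number of shuffles is n!, so expected runtime is
    O(n * n!): 'possible in theory' but useless at scale.
    """
    xs = list(xs)
    while not is_sorted(xs):
        random.shuffle(xs)
    return xs

# Fine for n = 3; for n = 20 you would expect roughly 20! (~2.4e18) shuffles.
print(bogosort([3, 1, 2]))  # prints [1, 2, 3]
```

That's the worry about "the obvious scalings": an algorithm can be correct and scalable in principle while its cost curve makes scaling it in practice a non-starter.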