Some have argued that machines cannot reach human-level general intelligence (see, e.g., Lucas 1961; Dreyfus 1972; Searle 1980; Block 1981; Penrose 1994). But Chalmers (2010) points out that these objections are irrelevant to the question at hand:
To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain.
As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on… [But if] there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world.
Chalmers (2010) then summarizes two positive arguments for the claim that machines can reach human-level general intelligence:
The emulation argument (see section 7.3)
The evolutionary argument (see section 7.4)