Much of professional analytic philosophy makes my heart sink too. Reading Kant isn’t fun—even if he gains in translation. But I don’t think we can just write off Kant’s work, let alone the whole of still unscientised modern philosophy. In particular, Kant’s exploration of what he calls “The Transcendental Unity of Apperception” (aka the unity of the self) cuts to the heart of the SIAI project—not least the hypothetical and allegedly imminent creation of unitary, software-based digital mind(s) existing at some level of computational abstraction. No one understands how organic brains manage to solve the binding problem (cf. http://lafollejournee02.com/texts/body_and_health/Neurology/Binding.pdf), let alone how to program a classical digital computer to do likewise. The solution IMO bears on everything from Moravec’s Paradox (why is a sesame-seed-brained bumblebee more competent in open-field contexts than DARPA’s finest?) to the alleged prospect of mind uploading, to the Hard Problem of consciousness.
Presumably, superintelligence can’t be more stunted in its intellectual capacities than biological humans. Therefore, hypothetical nonbiological AGI will need a capacity to, e.g., explore multiple state spaces of consciousness; close Levine’s Explanatory Gap (cf. http://cognet.mit.edu/posters/TUCSON3/Levine.html); map out the “neural correlates of consciousness”; and investigate qualia that natural selection hasn’t recruited for any information-processing purpose at all. Yet classical digital computers are still zombies. No one understands how classical digital computers (or a massively classically parallel connectionist architecture, etc.) could be otherwise, or indeed have any insight into their zombiehood. [At this point, some hard-nosed behaviourist normally interjects that biological robots _are_ zombies—and qualia are a figment of the diseased philosophical imagination. Curiously, the behaviourist never opts to forgo anaesthesia before surgery. Why not save money and permit his surgeons to use merely muscle relaxants to induce muscular paralysis instead?]
The philosophy of language? Anyone who believes in the possibility of singleton AGI should at least be aware of Wittgenstein’s Anti-Private Language Argument (cf. http://en.wikipedia.org/wiki/Wittgenstein_on_Rules_and_Private_Language). What is the nature of the linguistic competence, i.e. the capacity for meaning and reference, possessed by a notional singleton superintelligence?
Anyone who has studied Peter Singer—or Gary Francione—may wonder if the idea of distinctively Human-Friendly AGI is even intellectually coherent. (cf. “Aryan-Friendly” AGI or “Cannibal-Friendly” AGI?) Why not an impartial Sentience-Friendly AGI?
Hostility to “philosophical” questions has sometimes had intellectually and ethically catastrophic consequences in the natural sciences. Thus the naive positivism of the Copenhagen school retarded progress in pre-Everett quantum mechanics for over half a century. Everett himself, despairing at the reception of his work, went off to work for the Pentagon designing software targeting cities in thermonuclear war. In countless quasi-classical Everett branches, his software was presumably used in nuclear Armageddon.
And so forth...
Note that I’m not arguing that SIAI / lesswrongers don’t have illuminating responses to all of the points above (and more!), merely that it might be naive to suggest that all of modern philosophy, Kant, and even Plato (cf. the Allegory of the Cave) are simply irrelevant. The price of ignoring philosophy isn’t to transcend it but simply to give bad philosophical assumptions a free pass. History suggests that generation after generation believes they have finally solved all the problems of philosophy; and time and again philosophy buries its gravediggers.
But this time is different? Maybe...