Philosophy of Artificial Intelligence (links)
Earlier, I provided an overview of formal epistemology, a field of philosophy highly relevant to the discussions on Less Wrong. Today I do the same for another branch of philosophy: the philosophy of artificial intelligence (here’s another overview).
Some debate whether machines can have minds at all. The most famous argument against machines achieving general intelligence comes from Hubert Dreyfus. The most famous argument against the claim that an AI can have mental states is John Searle’s Chinese Room argument, which comes in several variations and to which there are many replies. Most Less Wrongers have already concluded that yes, machines can have minds. Others debate whether machines can be conscious.
There is much debate on the significance of variations on the Turing Test. There is also lots of interplay between artificial intelligence work and philosophical logic. There is some debate over whether minds are multiply realizable, though most accept that they are. There is some literature on the problem of embodied cognition—human minds can only do certain things because of their long development; can these achievements be replicated in a machine written “from scratch”?
Of greater interest to me, and perhaps to most Less Wrongers, is the ethics of artificial intelligence. Most of the work here so far concerns the rights of robots. For Less Wrongers, the more urgent concern is that of creating AIs that behave ethically. (In 2009, robots programmed to cooperate evolved to lie to each other.) Perhaps most pressing of all is the need to develop Friendly AI, but as far as I can find, no work on Good’s intelligence explosion singularity idea has been published in a major peer-reviewed journal except for David Chalmers’ “The Singularity: A Philosophical Analysis” (Journal of Consciousness Studies 17: 7-65). The next closest thing may be something like “On the Morality of Artificial Agents” by Floridi & Sanders.
Perhaps the best overview of the philosophy of artificial intelligence is chapter 26 of Russell & Norvig’s Artificial Intelligence: A Modern Approach.