The most famous proponent of this “those are mere programs” view may be John Searle and his Chinese Room. I wouldn’t call that the weakest argument against AI, although I think his argument is flawed.
Many years ago when I first became interested in strong AI, my boss encouraged me to read Searle’s Chinese Room paper, saying that it was a critically important criticism and that any attempt at AI needed to address it.
To this day, I’m still shocked that anyone considers Searle’s argument meaningful. It was pretty clear, even back then with my lesser understanding of debate tactics, that he had simply ‘defined away’ the problem. That I had been told this was a ‘critically important criticism’ was even more shocking.
I’ve since read critical papers with what I would consider a much stronger foundation, such as those claiming that without whole-body and experience simulation, you won’t be able to get something sufficiently human. But the Searle category of argument still seems to be the most common, in spite of its lack of content.
He didn’t define away the problem; his flaw wasn’t tautological. The fatal flaw was that he constructed a computational process and then substituted himself for that process when it came time to evaluate whether the process “understood” Chinese. Since he’s only a component of the process, it doesn’t matter whether *he* understands Chinese, only whether the *process* understands Chinese.
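To make the component-versus-process distinction concrete, here is a minimal toy sketch (the rulebook entries and function names are invented for illustration, and “understanding” is reduced to correct input/output behavior, which is exactly the reduction Searle disputes): the operator does nothing but blind symbol lookup, so any behavioral test of understanding can only be applied to the room as a whole.

```python
# Toy sketch of the "systems reply" to the Chinese Room.
# The rulebook contents are invented for illustration only.

RULEBOOK = {
    "你好吗": "我很好",  # "How are you?" -> "I am fine"
    "你好": "你好",      # "Hello" -> "Hello"
}

def operator(symbols: str, rulebook: dict) -> str:
    """The person in the room: pure symbol lookup, no semantics attached."""
    return rulebook.get(symbols, "请再说一遍")  # "Please say that again"

def chinese_room(message: str) -> str:
    """The whole system: operator plus rulebook. Any 'understanding' we
    attribute on behavioral grounds belongs to this level, not to the
    operator component inside it."""
    return operator(message, RULEBOOK)

print(chinese_room("你好吗"))  # prints: 我很好
```

The point of the sketch is only that asking whether `operator` understands Chinese is a question about the wrong level: the behavior being evaluated is produced by `chinese_room` as a whole.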
One has to commend Searle, though, for coming up with such a clear example of what he thought was wrong with the then-current model of AI. I wish everyone could formulate their philosophical ideas, right or wrong, in such a fashion. Even when they are wrong, they can be quite fruitful, as can be seen in the many papers responding to Searle and his Chinese Room, or, even more famously, to the EPR paradox paper.
Every time I read something by Searle, my blood pressure rises a couple of standard deviations.