The Chinese Room argument is actually pretty good if you read it as a criticism of suggestively named LISP tokens, which I think were popular roughly around that time. But of course, it fails completely once you try to turn it into a general proof that computers can’t think. Then again, Searle didn’t claim it was impossible for computers to think; he just said that they’d need “causal powers” similar to those of the human brain.
Also, the argument that “When they claim that a mind can emerge from ‘a system’ without saying what the system is or how such a thing might give rise to a mind, then they are under the grip of an ideology” is actually pretty reasonable. Steelmanned, the Chinese Room would be an attack on people who were putting together suggestively named tokens and building systems that could perform crude manipulations on their input and then claiming that this was major progress towards building a mind, while having no good theory of why exactly these particular kinds of symbol manipulations should be expected to produce a mind.
Or look at something like SHRDLU: it’s superficially very impressive and gives the impression that you’re dealing with something intelligent, but IIRC, it was just a huge bunch of hand-coded rules for addressing various kinds of queries, and the approach didn’t scale to more complex domains because the number of rules you’d have needed to program in would have blown up. In the context of programs like those, Searle’s complaints about dumb systems that do symbol manipulation without any real understanding of what they’re doing make a lot more sense.
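To make the scaling complaint concrete, here’s a toy sketch in the spirit of that approach. (SHRDLU itself was a far more sophisticated LISP/Planner program with a real parser and a procedural semantics, so treat this as a caricature, not its actual architecture; the block world and rules here are made up.) The point is just that every new phrasing or relation needs its own hand-written rule, and the rule table grows without bound:

```python
# Caricature of a hand-coded-rule query answerer over a tiny block world.
# Each new query shape needs another explicit rule, which is why the
# rule count blows up as the domain gets richer.
import re

world = {
    "block1": {"color": "red", "on": "table"},
    "block2": {"color": "green", "on": "block1"},
}

RULES = [
    (re.compile(r"what color is (\w+)\??", re.I),
     lambda m: world[m.group(1)]["color"]),
    (re.compile(r"what is (\w+) on\??", re.I),
     lambda m: world[m.group(1)]["on"]),
    # ...a real system needs a new rule for every phrasing and query type...
]

def answer(query):
    for pattern, handler in RULES:
        m = pattern.match(query)
        if m:
            return handler(m)
    return "I don't understand."

print(answer("What color is block2?"))    # -> green
print(answer("What is block2 on?"))       # -> block1
print(answer("Is block1 under block2?"))  # -> I don't understand.
```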
Except that if you run word2vec or something similar on a huge dataset of (suggestively named or not) tokens, you can actually learn a great deal of their semantic relations. It hasn’t been fully demonstrated yet, but I think that if you could ground only a small fraction of these tokens in sensory experience, then you could infer the “meaning” (in an operational sense) of all of the other tokens.
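A rough illustration of what I mean (a sketch only, assuming gensim 4.x is available; the corpus, the “grounded” tokens, and their sensory labels are made up, and a corpus this small will give noisy results): train word2vec embeddings over raw tokens, ground a couple of them, and read off an operational “meaning” for the rest from their nearest grounded neighbour in embedding space.

```python
# Sketch: learn token embeddings, then "ground" a few tokens and propagate
# meaning to the remaining tokens via embedding similarity.
from gensim.models import Word2Vec

corpus = [
    ["the", "cat", "chased", "the", "mouse"],
    ["the", "dog", "chased", "the", "cat"],
    ["the", "mouse", "ate", "the", "cheese"],
    ["the", "dog", "ate", "the", "bone"],
    ["the", "cat", "ate", "the", "fish"],
]

# Train skip-gram embeddings on the raw token stream (no grounding yet).
model = Word2Vec(corpus, vector_size=32, window=2, min_count=1, sg=1, epochs=200)

# Pretend a small fraction of the tokens is grounded in sensory experience.
grounded = {"cat": "furry-animal-percept", "cheese": "food-percept"}

# Assign every other token an operational "meaning" from its nearest
# grounded neighbour in embedding space.
for token in model.wv.index_to_key:
    if token in grounded:
        continue
    best = max(grounded, key=lambda g: model.wv.similarity(token, g))
    print(f"{token!r} -> closest grounded token {best!r} ({grounded[best]})")
```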
Yep, those are the good predictions I managed to extract from the paper in my case studies :-)
Which reminds me, I really should get around to reading the case studies. Tomorrow on the train back home, at the latest.