I didn’t say they were good falsifiable predictions, just that they were there. And it was a cogent critique that people were using misleading terms which implied their programs had more capacity than they actually did.
I think this article is interesting, but not thanks to Searle.
Essentially what we’re seeing here is the Parable Effect: you can endlessly retell a story from new points of view. I’m NOT suggesting this is a horrible thing; anything that makes you think is, in my opinion, a good step. This is my interpretation of what Stuart is doing when mining “Searle’s Intuitions”.
The weakness of parables, though, is that they are entirely impressionistic. This is why I give more credit to Stuart in his exploration than I do “The Chinese Room”. The CR parable is, on technical grounds, horrifically flawed. And the fact that it points to vague issues with AI rather than specific ones is another example of how a parable may gesture at a broad swath of observation without actually containing useful detail.
Obvious Problem is Obvious
Even at the time of Searle’s logical analysis, most computer scientists entering the field understood they were up against a huge wall of complexity when it came to AI. I think the mistakes were not so much in believing that an IBM 360, or for that matter a smaller circuit, was going to contain a “brain”; the mistakes were in assuming what the basic building blocks were going to be.
Because the actual weaknesses in approach are empirical in nature, the refinements in AI research over time are not about philosophical impossibilities but rather just data and theoretics. Dennett, as an example, tries to frame computing theory so as to make the theoretics more understandable, rather than to deduce a priori the actual nature of computing’s future (sans hypothesis).
So, while I’ll agree that anyone can use any narrative as a kickstarter to thinking, the value of the original narrative lies not just in what the narrative is “inspired from” but also in the details of actual empirical relevance involved. This is the stark contrast between, say, Schrödinger’s Cat and the Chinese Room: the robustness of one is immensely higher.
The flip-side is that AI researchers can easily ignore the Chinese Room entirely without risk of blundering. The parable simply doesn’t provide anything on the order of the guidance Searle seems to suggest it does.