I don’t think the argument leads to any falsifiable prediction; you are stretching it beyond its scope.
Yes, that’s what I was doing. I was trying to find something falsifiable from the intuitions behind it. A lot of thought and experience went into those intuitions; it would be useful to get something out of them, when possible.
Ok, but keep in mind that hindsight bias is hard to avoid when making predictions about the past whose outcome you already know.
Since we already know that GOFAI approaches hit diminishing returns and didn’t get anywhere close to providing human-level AI, it might be tempting to say that Searle was addressing GOFAI. But GOFAI failed because of complexity issues: both in creating explicit formal models of common knowledge and in doing inference on these formal models due to combinatorial explosion. Searle didn’t refer to complexity, hence I don’t think his analysis was relevant in forecasting the failure of GOFAI.
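To make the combinatorial-explosion point concrete, here is a minimal sketch (a toy, not a reconstruction of any actual GOFAI system; the facts and the pair-combination rule are made up purely for illustration): if an inference engine is allowed to combine any two known facts into a new derived fact, the knowledge base grows combinatorially with every forward-chaining pass.

```python
# Toy illustration of combinatorial explosion in naive forward chaining.
# Assumption (for illustration only): any pair of known facts can be
# combined into a new derived fact on each pass.
from itertools import combinations

facts = {f"fact{i}" for i in range(6)}  # small starting knowledge base

for step in range(3):
    # every pair of known facts licenses a new derived fact
    derived = {f"({a}&{b})" for a, b in combinations(sorted(facts), 2)}
    facts |= derived
    print(f"pass {step + 1}: {len(facts)} facts in the knowledge base")
```

Running it, the toy base goes from 6 facts to tens of thousands in three passes, which is the kind of blow-up that explicit common-sense models and their inference procedures ran into.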
I didn’t say they were good falsifiable predictions, just that they were there. And it was a cogent critique that people were using misleading terms which implied their programs had more implicit capacity than they actually did.
I think this article is interesting—but not thanks to Searle.
Essentially what we’re seeing here is the Parable Effect: you can endlessly retell a story from new points of view. I’m NOT suggesting this is a horrible thing; anything that makes you think is, in my opinion, a good step. This is my interpretation of what Stuart is doing when mining “Searle’s Intuitions”.
The weakness of parables, though, is that they are entirely impressionistic. This is why I give more credit to Stuart in his exploration than I do to “The Chinese Room”. The CR parable is horrifically flawed on technical grounds. And the fact that it points not to specific issues with AI but to vague ones is another example of how a parable may indicate a broad swath of observation without actually containing useful detail.
Obvious Problem is Obvious
Even at the time of Searle’s logical analysis, most computer scientists entering the field understood they were up against a huge wall of complexity when it came to AI. I think the mistakes were not so much in believing that an IBM 360, or for that matter a smaller circuit, was going to contain a “brain”; the mistakes were in assuming what the basic building blocks were going to be.
Because the actual weaknesses in approach are empirical in nature, the refinements over time in AI research are not about philosophical impossibilities but rather about data and theoretics. Dennett, as an example, tries to frame computing theory so as to make the theoretics more understandable, rather than trying to deduce a priori the actual nature of computing’s future (sans hypothesis).
So, while I’ll agree that anyone can use any narrative as a kickstarter for thinking, the value of the original narrative lies not just in what the narrative is “inspired from” but also in the details of actual empirical relevance involved. This is the stark contrast between, say, Schrödinger’s Cat and the Chinese Room: the robustness of one is immensely higher.
The flip side is that AI researchers can easily ignore the Chinese Room entirely without risk of blundering. The parable doesn’t actually provide anything on the order of the guidance Searle seems to suggest it does.