I object to this point and would be interested to see a defense of it:
Block makes a number of arguments about the nature of comprehension and intelligence based on Blockhead—but many (including Daniel Dennett, and myself) think that these arguments are deeply flawed, and the example of Blockhead is not useful for gaining either conceptual insight or practical inspiration. Why not? Well, it’s absurdly unrealistic; you could never get anywhere near implementing it in real life.
I’ll fire back with a quote from David Lewis (paraphrased, I can’t find the original) “This possibility is so strange and outlandish that some people refuse to learn anything from it.”
If you write out the arguments, they don’t depend on Blockhead actually happening in the future, or even actually happening in any world that shares our laws of physics. As far as I can tell. So whether or not Blockhead is realistic is irrelevant.
Edge cases make bad law, but they make great mathematics. Philosophy is more like math than law. QED. (Actually, some parts of philosophy really are more like law than math. I don’t think this part is, though.)
Later, you say:
More importantly, even though Blockhead gets the right answer on all the inputs we give it, it’s not doing anything remotely like thinking or reasoning.
Wasn’t that exactly the point Block was trying to make with the Blockhead thought experiment? From the paper:
…two systems could have actual and potential behavior typical of familiar intelligent beings, that the two systems could be exactly alike in their actual and potential behavior, and in their behavioral dispositions and capacities and counterfactual behavioral properties (i.e., what behaviors, behavioral dispositions, and behavioral capacities they would have exhibited had their stimuli differed)--the two systems could be alike in all these ways, yet there could be a difference in the information processing that mediates their stimuli and responses that determines that one is not at all intelligent while the other is fully intelligent.
Maybe he later went on to derive other conclusions, and it is those that you object to? I haven’t followed the literature as closely as I’d like.
Yeah, actually, I think your counterargument is correct. I basically had a cached thought that Block was trying to do with Blockhead a similar thing to what Searle was trying to do with the Chinese Room. Should have checked it more carefully.
I’ve now edited to remove my critique of Block himself, while still keeping the argument that Blockhead is uninformative about AI for (some of) the same reasons that bayesianism is.