Thought I accounted for aboutness already, in The Simple Truth. Please explain what aspect of aboutness I failed to account for here.
That is a 6,777-word dialogue that covers many things. Can you summarize the part that is an account of aboutness specifically?
Skimming it, you seem to me to be saying that a physical system A is about a physical system B if each state that B is in (up to some equivalence relation) causes A to be in a distinct state (up to some equivalence relation). Hence, the pebbles in the bucket are “about” the sheep in the field because the number of sheep in the field causes the number of pebbles in the bucket to take on a certain value.
I write that summary knowing that it probably misses something crucial in your account. As I say, I only skimmed the essay, trying to skip jokes and blatant caricatures (e.g., when your foil says, “Now, I’d like to move on to the issue of how logic kills cute baby seals—”). My summary is just to give you a launching point from which to correct potential misunderstandings, should you care to.
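To make the proposed criterion concrete, here is a rough Python sketch of the summary two paragraphs up. This is my own formalization of the paraphrase, not anything taken from the essay; the name `is_about` and the function signature are made up for illustration.

```python
def is_about(caused_a_state, b_states, b_equiv=lambda s: s, a_equiv=lambda s: s):
    """True if distinct B-state classes cause distinct A-state classes."""
    mapping = {}
    for b in b_states:
        b_class = b_equiv(b)
        a_class = a_equiv(caused_a_state(b))
        if mapping.get(b_class, a_class) != a_class:
            return False  # one B-class would have to cause two different A-classes
        mapping[b_class] = a_class
    # Injectivity: no two distinct B-classes may share an A-class.
    return len(set(mapping.values())) == len(mapping)


def pebbles_caused_by(n_sheep):
    # One pebble ends up in the bucket for each sheep out in the field.
    return n_sheep


print(is_about(pebbles_caused_by, b_states=range(20)))  # True
```

On this reading, the bucket counts as "about" the flock simply because the causal map from sheep-counts to pebble-counts is one-to-one.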
The whole dialogue is targeted specifically at decomposing the mysteriously opaque concepts of “truth” and “semantics” and “aboutness” for people who are having trouble with it. I’m not sure there’s a part I could slice off for this question, given that someone is asking the question at all.
Maybe I’d ask, “In what sense are the pebbles not about the sheep? If the pebbles are about the sheep, in what sense is this at all mysterious?”
I make no claims about aboutness. Rather, I understand how the pebble-and-bucket system works. If you want to claim that there is a thing called “aboutness” which remains unresolved, it’s up to you to define it.
Then, to call this an “account of aboutness”, you should explain what it is about the human mind that makes it feel as though there is this thing called “aboutness” that feels so mysterious to so many. As you put so well here: “Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.”
If you did this in your essay, it was too dispersed for me to see it as I skimmed. What I saw was a caricature of the rationalizations people use to justify their beliefs. I didn’t see the origin of the intuitions standing behind their beliefs.
Yes, I think this was the part that was missing in the initial reply.
Agree with Tyrrell_McAllister. You need to be a lot more specific when you make a claim like this.
I see that Dr. Searle stopped by and loaned you some Special Causal Powers™ for your bucket(s) in that story.
I shall have to make a note of this post to use in my Searle work.
The story only addresses how representation works in a simple mechanism exterior to a mind (i.e., the pebble method works because the number of pebbles is made to track the number of sheep). The position one usually takes in these matters is that the semantics of artefacts (like the meaning of a sound) are contingent and dependent upon convention, but that the semantics of minds (like the subject matter of a thought) are intrinsic to their nature.
It is easy to see that the semantics of the pebbles has some element of contingency—they might have been counting elephants rather than sheep. It is also easy to see that the semantics of the pebbles derives from the shepherd’s purposes and actions. So there is no challenge here to the usual position, stated above.
But what you don’t address is the semantics of minds. Do you agree with the distinction between intrinsic and mind-dependent representation? If so, how does intrinsic representation come about? What is it about the physical aspect of a thought that connects it to its meaning?
What’s in the shepherd that’s not in the pebbles, exactly?
Let’s move to the automated pebble-tracking system, where a curtain twitches as a sheep passes, causing a pebble to fall into the bucket (the fabric is called Sensory Modality, from a company called Natural Selections). What is in the shepherd that is not in the automated, curtain-based sheep-tracking system?
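For concreteness, a minimal sketch of the curtain-and-bucket machine being referred to. The class names are mine and purely illustrative; the essay specifies only the causal chain, not any code.

```python
class Bucket:
    def __init__(self):
        self.pebbles = 0


class Curtain:
    """Drops one pebble into the bucket whenever something brushes past it."""

    def __init__(self, bucket):
        self.bucket = bucket

    def brushed(self):
        self.bucket.pebbles += 1


bucket = Bucket()
curtain = Curtain(bucket)

for _ in range(3):  # three sheep wander out past the curtain
    curtain.brushed()

assert bucket.pebbles == 3  # the coupling, not any "idea of sheep", does the work
```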
Do you agree that there is a phenomenon of subjective meaning to be accounted for? The question of meaning does not originate with problems like “Why does pebble-tracking work?” It arises because we attribute semantic content both to certain artefacts and to our own mental states.
If we view the number of pebbles as representing the number of sheep, this is possible because of the causal structure, but it actually occurs because of “human interpretation”. Now turn to mental states themselves. Do you propose to explain their representational semantics in exactly the same way, by human interpretation, which creates a foundationless circularity? Do you propose to explain the semantics of human thought in some other way, and if so, in what way? Or will you deny that human thoughts have a semantics at all?
Even as a reductionist, I’ll point out that the shepherd seems to have something in him that singles out the sheep specifically, as opposed to all other possible referents. The sheep-tracking system, in contrast, could just as well be counting sheep-noses instead of sheep. Or it could be counting sheep-passings: not the sheep themselves, but rather just their act of passing by the fabric. It’s only when the shepherd is added to the system that the sheep-out-in-the-field get specified as the referents of the pebbles.
ETA: To expand a bit: The issue I raise above is basically Quine’s indeterminacy of translation problem.
One’s initial impulse might be to say that you just need “higher resolution”. The idea is that the pebble machine just doesn’t have a high-enough resolution to differentiate sheep from sheep-passings or sheep-noses, while the shepherd’s brain does. This then leads to questions such as: How much resolution is enough to make meaning? Does the machine (without the shepherd) fail to be a referring thing altogether? Or does its “low resolution” just mean that it refers to some big semantic blob that includes sheep, sheep-noses, sheep-passings, etc.?
Personally, I don’t think that this is the right approach to take. I think it’s better to direct our energy towards resolving our confusion surrounding the concept of a computation.
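To put the indeterminacy worry in concrete terms, here is a toy sketch (my own construction, not anything from the thread). In the world where the curtain sits, the pebble count co-varies equally well with sheep, sheep-noses, and sheep-passings, so the machine’s states by themselves do not single out one referent over the others.

```python
def count_pebbles(n_sheep_that_pass):
    # One brush against the curtain per passing sheep, one pebble per brush.
    return n_sheep_that_pass


for n_sheep in range(5):
    n_noses = n_sheep      # each sheep brings exactly one nose past the curtain
    n_passings = n_sheep   # each sheep generates exactly one act of passing
    pebbles = count_pebbles(n_sheep)
    # The pebble count co-varies perfectly with all three candidate referents,
    # so causal co-variation alone cannot say which one the pebbles are "about".
    assert pebbles == n_sheep == n_noses == n_passings
```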