What’s in the shepherd that’s not in the pebbles, exactly?
Let’s move to the automated pebble-tracking system, where a curtain twitches as each sheep passes, causing a pebble to fall into the bucket (the fabric is called Sensory Modality, from a company called Natural Selections). What is in the shepherd that is not in the automated, curtain-based sheep-tracking system?
Do you agree that there is a phenomenon of subjective meaning to be accounted for? The question of meaning does not originate with problems like “why does pebble-tracking work?” It arises because we attribute semantic content both to certain artefacts and to our own mental states.
If we view the number of pebbles as representing the number of sheep, the causal structure is what makes this possible, but the representing itself occurs only through “human interpretation”. Now turn to mental states themselves: do you propose to explain their representational semantics in exactly the same way, by human interpretation, which creates a foundationless circularity? Do you propose to explain the semantics of human thought in some other way, and if so, in what way? Or will you deny that human thoughts have a semantics at all?
Even as a reductionist, I’ll point out that the shepherd seems to have something in him that singles out the sheep specifically, as opposed to all other possible referents. The sheep-tracking system, in contrast, could just as well be counting sheep-noses instead of sheep. Or it could be counting sheep-passings—not the sheep themselves, but rather just their act of passing by the fabric. It’s only when the shepherd is added to the system that the sheep-out-in-the-field get specified as the referents of the pebbles.
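A toy sketch may make the underdetermination concrete (the names and numbers here are purely illustrative assumptions, not part of the thought experiment): the mechanism’s causal structure fixes only a count of curtain-twitch events, and several candidate interpretation functions fit that count equally well.

```python
# Hypothetical sketch of the curtain mechanism as a bare event counter.
# All names (sheep_in_field, curtain_twitch, etc.) are illustrative.

pebbles = 0

def curtain_twitch():
    """All the mechanism 'knows': an event occurred, so drop a pebble."""
    global pebbles
    pebbles += 1

sheep_in_field = ["Dolly", "Shaun", "Bella"]
for _sheep in sheep_in_field:
    curtain_twitch()  # one twitch per sheep that brushes the fabric

# Three candidate interpretations, all equally consistent with the
# causal structure -- nothing in the mechanism picks one out:
count_of_sheep = len(sheep_in_field)      # pebbles "refer to" sheep...
count_of_noses = len(sheep_in_field)      # ...or to sheep-noses...
count_of_passings = len(sheep_in_field)   # ...or to sheep-passings.

assert pebbles == count_of_sheep == count_of_noses == count_of_passings
```

Since every sheep carries one nose and makes one passing, the three counts never come apart, so no behaviour of the bucket can settle which of them the pebbles track.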
One’s initial impulse might be to say that you just need “higher resolution”. The idea is that the pebble machine just doesn’t have high enough resolution to differentiate sheep from sheep-passings or sheep-noses, while the shepherd’s brain does. This then leads to questions such as: How much resolution is enough to make meaning? Does the machine (without the shepherd) fail to be a referring thing altogether? Or does its “low resolution” just mean that it refers to some big semantic blob that includes sheep, sheep-noses, sheep-passings, etc.?
Personally, I don’t think that this is the right approach to take. I think it’s better to direct our energy towards resolving our confusion surrounding the concept of a computation.
ETA: To expand a bit: The issue I raise above is basically Quine’s indeterminacy of translation problem.