I couldn’t help one who lacked the concept. But assuming that you possess the concept, and just need some help in situating it in relation to your other concepts, perhaps the following might help...
Our thoughts (and, derivatively, our assertions) have subject-matters. They are about things. We might make claims about these things, e.g. claiming that certain properties go together (or not). When I write, “Grass is green”, I mean that grass is green. I conjure in my mind’s eye a mental image of blades of grass, and their colour, in the image, is green. So, I think to myself, the world is like that.
Could a zombie do all this? They would go “through the motions”, so to speak, but they wouldn’t actually see any mental image of green grass in their mind’s eye, so they could not really intend that their words convey that the world is “like that”. Insofar as there are no “lights on inside”, it would seem that they don’t really intend anything; they do not have minds.
If you can understand the above two paragraphs, then it seems that you have a conception of meaning as a distinctively mental relation (e.g. that holds between thoughts and worldly objects or states of affairs), not reducible to any of the purely physical/functional states that are shared by our zombie twins.
(From “The Simple Truth”, a parable about using pebbles in a bucket to keep count of the sheep in a pasture.)
“My pebbles represent the sheep!” Autrey says triumphantly. “Your pebbles don’t have the representativeness property, so they won’t work. They are empty of meaning. Just look at them. There’s no aura of semantic content; they are merely pebbles. You need a bucket with special causal powers.”
“Ah!” Mark says. “Special causal powers, instead of magic.”
“Exactly,” says Autrey. “I’m not superstitious. Postulating magic, in this day and age, would be unacceptable to the international shepherding community. We have found that postulating magic simply doesn’t work as an explanation for shepherding phenomena. So when I see something I don’t understand, and I want to explain it using a model with no internal detail that makes no predictions even in retrospect, I postulate special causal powers. If that doesn’t work, I’ll move on to calling it an emergent phenomenon.”
“What kind of special powers does the bucket have?” asks Mark.
“Hm,” says Autrey. “Maybe this bucket is imbued with an about-ness relation to the pastures. That would explain why it worked – when the bucket is empty, it means the pastures are empty.”
“Where did you find this bucket?” says Mark. “And how did you realize it had an about-ness relation to the pastures?”
“It’s an ordinary bucket,” I say. “I used to climb trees with it… I don’t think this question needs to be difficult.”
“I’m talking to Autrey,” says Mark.
“You have to bind the bucket to the pastures, and the pebbles to the sheep, using a magical ritual – pardon me, an emergent process with special causal powers – that my master discovered,” Autrey explains.
Autrey then attempts to describe the ritual, with Mark nodding along in sage comprehension.
“And this ritual,” says Mark, “it binds the pebbles to the sheep by the magical laws of Sympathy and Contagion, like a voodoo doll.”
Autrey winces and looks around. “Please! Don’t call it Sympathy and Contagion. We shepherds are an anti-superstitious folk. Use the word ‘intentionality’, or something like that.”
“Can I look at a pebble?” says Mark.
“Sure,” I say. I take one of the pebbles out of the bucket, and toss it to Mark. Then I reach to the ground, pick up another pebble, and drop it into the bucket.
Autrey looks at me, puzzled. “Didn’t you just mess it up?”
I shrug. “I don’t think so. We’ll know I messed it up if there’s a dead sheep next morning, or if we search for a few hours and don’t find any sheep.”
“But—” Autrey says.
“I taught you everything you know, but I haven’t taught you everything I know,” I say.
Mark is examining the pebble, staring at it intently. He holds his hand over the pebble and mutters a few words, then shakes his head. “I don’t sense any magical power,” he says. “Pardon me. I don’t sense any intentionality.”
“A pebble only has intentionality if it’s inside a ma- an emergent bucket,” says Autrey. “Otherwise it’s just a mere pebble.”
“Not a problem,” I say. I take a pebble out of the bucket, and toss it away. Then I walk over to where Mark stands, tap his hand holding a pebble, and say: “I declare this hand to be part of the magic bucket!” Then I resume my post at the gates.
Autrey laughs. “Now you’re just being gratuitously evil.”
I nod, for this is indeed the case.
“Is that really going to work, though?” says Autrey.
I nod again, hoping that I’m right. I’ve done this before with two buckets, and in principle, there should be no difference between Mark’s hand and a bucket. Even if Mark’s hand is imbued with the elan vital that distinguishes live matter from dead matter, the trick should work as well as if Mark were a marble statue.
(The moral: In this sequence, I explained how words come to ‘mean’ things in a lawful, causal, mathematical universe with no mystical subterritory. If you think meaning has a special power and special nature beyond that, then (a) it seems to me that there is nothing left to explain and hence no motivation for the theory, and (b) I should like you to say what this extra nature is, exactly, and how you know about it—your lips moving in this, our causal and lawful universe, the while.)
It’s a nice parable and all, but it doesn’t seem particularly responsive to my concerns. I agree that we can use any old external items as tokens to model other things, and that there doesn’t have to be anything “special” about the items we make use of in this way, except that we intend to so use them. Such “derivative intentionality” is not particularly difficult to explain (nor is the weak form of “natural intentionality” in which smoke “means” fire, tree rings “signify” age, etc.). The big question is whether you can account for the fully-fledged “original intentionality” of (e.g.) our thoughts and intentions.
In particular, I don’t see anything in the above excerpt that addresses intuitive doubts about whether zombies would really have meaningful thoughts in the sense familiar to us from introspection.
“I toss in a pebble whenever a sheep passes,” I point out.
“When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?”
“It’s an interaction between the sheep and the pebbles,” I reply.
“No, it’s an interaction between the pebbles and you,” Mark says. “The magic doesn’t come from the sheep, it comes from you. Mere sheep are obviously nonmagical. The magic has to come from somewhere, on the way to the bucket.”
I point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that – it doesn’t work reliably – but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.”
Mark furrows his brow. “I don’t quite follow you… is the cloth magical?”
I shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket.”
And this responds to what I said… how?
I can build an agent that tracks how many sheep are in the pasture using an internal mental bucket, and keeps looking for sheep until they’re all returned. From an outside standpoint, this agent’s mental bucket is meaningful because there’s a causal process that correlates it to the sheep, and this correlation is made use of to steer the world into futures where all sheep are retrieved. And then the mysterious sensation of about-ness is just what it feels like from the inside to be that agent, with a side order of explicitly modeling both yourself and the world so that you can imagine that your map corresponds to the territory, with a side-side order of your brain making the simplifying assumption that (your map of) the map has a primitive intrinsic correspondence to (your map of) the territory.
In actuality this correspondence is not the primitive and local quality it feels like; it’s maintained by the meeting of hypotheses and reality in sense data. A third party or reflecting agent would be able to see the globally maintained correspondence by simultaneously tracing back actual causes of sense data and hypothesized causes of sense data, but this is a chain property involving real lattices of causal links and hypothetical lattices of causal links meeting in sense data, not an intrinsic quality of a single node in the lattice considered in isolation from the senses and the hypotheses linking it to the senses.
So far as I can tell, there’s nothing left to explain.
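To make the outside standpoint concrete, here is a minimal Python sketch of such an agent. It is my own toy illustration, not anything from the original discussion: the class names, the coin-flip gate model, and the numbers are all invented for the example.

```python
import random

class Pasture:
    """The territory: sheep that went out in the morning and can
    return through the gate. Gate passages are real causal events."""
    def __init__(self, total_sheep):
        self.sheep_out = total_sheep

    def gate_event(self):
        """Each tick, a sheep may pass back through the gate."""
        if self.sheep_out > 0 and random.random() < 0.5:
            self.sheep_out -= 1
            return True   # the cloth is disturbed; sense data is produced
        return False

class Shepherd:
    """The agent: its 'mental bucket' is just an integer, but from
    the outside it is about the sheep, because gate events (sense
    data) maintain the correlation, and the correlation is used to
    steer behaviour toward futures where every sheep is retrieved."""
    def __init__(self, total_sheep):
        self.mental_bucket = total_sheep   # one pebble per sheep still out

    def observe(self, sheep_returned):
        if sheep_returned:
            self.mental_bucket -= 1        # remove a pebble per return

    def keep_searching(self):
        return self.mental_bucket > 0      # search until the bucket is empty

pasture, shepherd = Pasture(6), Shepherd(6)
while shepherd.keep_searching():
    shepherd.observe(pasture.gate_event())
print(shepherd.mental_bucket, pasture.sheep_out)   # 0 0: map matches territory
```

Nothing in the integer is intrinsically sheep-flavoured; the aboutness lives in the whole chain from gate to update rule to search policy.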
--
“At exactly which point in the process does the pebble become magic?” says Mark.
“It… um…” Now I’m starting to get confused. I shake my head to clear away cobwebs. This all seemed simple enough when I woke up this morning, and the pebble-and-bucket system hasn’t gotten any more complicated since then. “This is a lot easier to understand if you remember that the point of the system is to keep track of sheep.”
I agree with all of this… though I would ask one question, as I’m quite confused here. I think (pardon me if I’m putting words in anyone’s mouth) that the epiphenomenalist should agree that it’s all related causally, and that when the decision comes to say “I’ve noticed that I’ve noticed that I’m aware of a chair”, or something like that, it comes from causal relations. But that hasn’t located the… “subjective” or “first-person” “experience” (whatever any of those words ‘mean’).
I observe (through photons and my eyes and all the rest) five sheep going through the gate, even though I miss a sixth, and I believe that the world is how I think it is, and I mistakenly believe that my vision is an intrinsic property of me in the world. Actually, when I say I’ve seen five sheep go through the gate, loads of processes below the level the conscious/speaking me is aware of are working away, making only the top-level stuff available—the stuff that evolution has decided it would be beneficial for me to be able to talk about.
That doesn’t mean I’m not conscious of the sheep, just that I’m mistaken about what my consciousness is, and what exactly it’s telling me.
Where does the ‘aware’ bit come in? The ‘feeling’? The ‘subjective’?
(My apologies if I’ve confused a well-argued discussion.)
How, precisely, does one formalize the concept of “the bucket of pebbles represents the number of sheep, but it is doing so inaccurately”? That is: that it’s a model of the number of sheep rather than of something else, but a bad/inaccurate model?
I’ve fiddled around a bit with that, and I find myself passing a recursive buck when I try to precisely reduce that one.
The best I can come up with is something like: “I have correct models in my head for the bucket, pebbles, sheep, etc. individually, except that I also have some causal paths linking them that don’t match the links that exist in reality.”
See this thread for a discussion. A less buck-passing model is: “This bucket represents the sheep … plus an error term resulting from this here specific error process.”
For instance, if I systematically count two sheep exiting together as one sheep, then the bucket represents the number of sheep minus the number of sheep-pairs erroneously detected as one sheep. It’s not enough to say the sheep-detector is buggy; to have an accurate model of what it does (and thus, what its representations mean) you need to know what the bug is.
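As a toy illustration of that last point (my own, hypothetical example, not from the thread), the bug can be written directly into the model of what the bucket means:

```python
def buggy_count(exit_groups):
    """The broken detector: one pebble per exit event, so two sheep
    exiting together get counted as one sheep."""
    return len(exit_groups)

def what_the_bucket_means(true_sheep, pairs_counted_as_one):
    """Given the specific bug, the pebbles represent
    (number of sheep) minus (number of pairs detected as one)."""
    return true_sheep - pairs_counted_as_one

exit_groups = [1, 2, 1, 2]                     # six sheep leave in four events
pebbles = buggy_count(exit_groups)             # -> 4
true_sheep = sum(exit_groups)                  # -> 6
pairs = sum(1 for g in exit_groups if g == 2)  # -> 2
assert pebbles == what_the_bucket_means(true_sheep, pairs)
```

The bucket is still a model of the sheep rather than of something else, because the error term is itself cashed out in terms of a specific, identifiable error process.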