It’s a nice parable and all, but it doesn’t seem particularly responsive to my concerns. I agree that we can use any old external items as tokens to model other things, and that there doesn’t have to be anything “special” about the items we make use of in this way, except that we intend to so use them. Such “derivative intentionality” is not particularly difficult to explain (nor is the weak form of “natural intentionality” in which smoke “means” fire, tree rings “signify” age, etc.). The big question is whether you can account for the fully-fledged “original intentionality” of (e.g.) our thoughts and intentions.
In particular, I don’t see anything in the above excerpt that addresses intuitive doubts about whether zombies would really have meaningful thoughts in the sense familiar to us from introspection.
“I toss in a pebble whenever a sheep passes,” I point out.
“When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?”
“It’s an interaction between the sheep and the pebbles,” I reply.
“No, it’s an interaction between the pebbles and you,” Mark says. “The magic doesn’t come from the sheep, it comes from you. Mere sheep are obviously nonmagical. The magic has to come from somewhere, on the way to the bucket.”
I point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that – it doesn’t work reliably – but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.”
Mark furrows his brow. “I don’t quite follow you… is the cloth magical?”
I shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket.”
And this responds to what I said… how?
I can build an agent that tracks how many sheep are in the pasture using an internal mental bucket, and keeps looking for sheep until they’re all returned. From an outside standpoint, this agent’s mental bucket is meaningful because there’s a causal process that correlates it to the sheep, and this correlation is made use of to steer the world into futures where all sheep are retrieved. And then the mysterious sensation of about-ness is just what it feels like from the inside to be that agent, with a side order of explicitly modeling both yourself and the world so that you can imagine that your map corresponds to the territory, with a side-side order of your brain making the simplifying assumption that (your map of) the map has a primitive intrinsic correspondence to (your map of) the territory.
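To make that concrete, here is a minimal Python sketch of such an agent (every name here is mine for illustration, not from the original): the "mental bucket" is just a counter, causally coupled to the sheep through perception events, and the agent's behavior uses that coupling to steer toward futures where every sheep is retrieved.

```python
import random

class ShepherdAgent:
    """Toy agent whose 'mental bucket' is an integer counter.

    The counter is 'about' the sheep only because the perceive_*
    methods causally couple it to actual sheep crossings, and because
    the agent's behavior exploits that coupling.
    """

    def __init__(self):
        self.mental_bucket = 0  # internal tally of sheep believed to be out

    def perceive_sheep_leaving(self):
        # A sheep passing the gate causes a sensory event, which causes
        # the counter to increment: the causal chain that makes the
        # counter track the pasture.
        self.mental_bucket += 1

    def perceive_sheep_returning(self):
        self.mental_bucket -= 1

    def should_keep_searching(self):
        # The correlation is *used*: the agent keeps acting until the
        # world reaches a state where all sheep are back in the fold.
        return self.mental_bucket > 0


def run_day(num_sheep: int) -> int:
    """Simulate one day: sheep wander out, agent searches until done."""
    agent = ShepherdAgent()
    sheep_in_pasture = 0

    for _ in range(num_sheep):            # morning: sheep file out the gate
        sheep_in_pasture += 1
        agent.perceive_sheep_leaving()

    steps = 0
    while agent.should_keep_searching():  # evening: search until "empty"
        steps += 1
        if sheep_in_pasture > 0 and random.random() < 0.5:
            sheep_in_pasture -= 1         # found one and brought it back
            agent.perceive_sheep_returning()

    assert sheep_in_pasture == 0          # no sheep left behind
    return steps


if __name__ == "__main__":
    print("search steps needed:", run_day(num_sheep=5))
```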
In actuality this correspondence is not the primitive and local quality it feels like; it’s maintained by the meeting of hypotheses and reality in sense data. A third party or reflecting agent would be able to see the globally maintained correspondence by simultaneously tracing back actual causes of sense data and hypothesized causes of sense data, but this is a chain property involving real lattices of causal links and hypothetical lattices of causal links meeting in sense data, not an intrinsic quality of a single node in the lattice considered in isolation from the senses and the hypotheses linking it to the senses.
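A toy sketch of that "chain property" (again with invented names, and assuming for simplicity a perfectly reliable cloth flap): a third party checks the map-territory correspondence not by inspecting the belief in isolation, but by tracing the actual causal chain and the hypothesized causal chain down to where they meet in the sense data.

```python
def real_chain(sheep_out: int) -> int:
    """Actual causes: sheep -> disturbed cloth -> pebbles in bucket."""
    disturbed_flaps = sheep_out          # each sheep brushes the cloth
    pebbles = disturbed_flaps            # each flap releases one pebble
    return pebbles                       # <- the sense data

def hypothesized_chain(believed_sheep_out: int) -> int:
    """Hypothesized causes: believed sheep -> predicted flaps -> predicted pebbles."""
    predicted_flaps = believed_sheep_out
    predicted_pebbles = predicted_flaps
    return predicted_pebbles             # <- prediction about the sense data

def third_party_check(sheep_out: int, believed_sheep_out: int) -> bool:
    """Correspondence is a property of the two chains meeting in the
    pebbles, not an intrinsic property of the belief node alone."""
    return real_chain(sheep_out) == hypothesized_chain(believed_sheep_out)

assert third_party_check(5, 5)           # map matches territory
assert not third_party_check(6, 5)       # a missed sheep shows up only here
```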
So far as I can tell, there’s nothing left to explain.
--
“At exactly which point in the process does the pebble become magic?” says Mark.
“It… um…” Now I’m starting to get confused. I shake my head to clear away cobwebs. This all seemed simple enough when I woke up this morning, and the pebble-and-bucket system hasn’t gotten any more complicated since then. “This is a lot easier to understand if you remember that the point of the system is to keep track of sheep.”
I agree with all of this… I would personally ask one question, though, as I’m quite confused here… I think (pardon me if I’m putting words in anyone’s mouth) that the epiphenomenalist should agree that it’s all related causally, and that when the decision comes to say “I’ve noticed that I’ve noticed that I’m aware of a chair”, or something, it comes from causal relations. But that hasn’t located the… “subjective” or “first person” “experience” (whatever any of those words ‘mean’).
I observe (through photons and my eyes and all the rest) the five sheep going through the gate, even though I miss a sixth, and I believe that the world is how I think it is, and I believe my vision is an intrinsic property of me in the world, mistakenly of course. Actually, when I say I’ve seen five sheep go through the gate, loads of processes below the level that the conscious/speaking me is aware of are working away, just making the top-level stuff available: the stuff that evolution has decided it would be beneficial for me to be able to talk about.

That doesn’t mean I’m not conscious of the sheep, just that I’m mistaken about what my consciousness is, and what exactly it’s telling me.

Where does the ‘aware’ bit come in? The ‘feeling’? The ‘subjective’?
(My apologies if I’ve confused a well argued discussion)