Meaning doesn’t seem to be a thing in the way that atoms and qualia are, so I’m doubtful that the causal criterion properly applies to it (similarly for normative properties).
(Note that it would seem rather self-defeating to claim that ‘meaning’ is meaningless.)
What exactly do you mean by “mean”?
I couldn’t help one who lacked the concept. But assuming that you possess the concept, and just need some help in situating it in relation to your other concepts, perhaps the following might help...
Our thoughts (and, derivatively, our assertions) have subject-matters. They are about things. We might make claims about these things, e.g. claiming that certain properties go together (or not). When I write, “Grass is green”, I mean that grass is green. I conjure in my mind’s eye a mental image of blades of grass, and their colour, in the image, is green. So, I think to myself, the world is like that.
Could a zombie do all this? They would go “through the motions”, so to speak, but they wouldn’t actually see any mental image of green grass in their mind’s eye, so they could not really intend that their words convey that the world is “like that”. Insofar as there are no “lights on inside”, it would seem that they don’t really intend anything; they do not have minds.
If you can understand the above two paragraphs, then it seems that you have a conception of meaning as a distinctively mental relation (e.g. that holds between thoughts and worldly objects or states of affairs), not reducible to any of the purely physical/functional states that are shared by our zombie twins.
(From “The Simple Truth”, a parable about using pebbles in a bucket to keep count of the sheep in a pasture.)
“My pebbles represent the sheep!” Autrey says triumphantly. “Your pebbles don’t have the representativeness property, so they won’t work. They are empty of meaning. Just look at them. There’s no aura of semantic content; they are merely pebbles. You need a bucket with special causal powers.”
“Ah!” Mark says. “Special causal powers, instead of magic.”
“Exactly,” says Autrey. “I’m not superstitious. Postulating magic, in this day and age, would be unacceptable to the international shepherding community. We have found that postulating magic simply doesn’t work as an explanation for shepherding phenomena. So when I see something I don’t understand, and I want to explain it using a model with no internal detail that makes no predictions even in retrospect, I postulate special causal powers. If that doesn’t work, I’ll move on to calling it an emergent phenomenon.”
“What kind of special powers does the bucket have?” asks Mark.
“Hm,” says Autrey. “Maybe this bucket is imbued with an about-ness relation to the pastures. That would explain why it worked – when the bucket is empty, it means the pastures are empty.”
“Where did you find this bucket?” says Mark. “And how did you realize it had an about-ness relation to the pastures?”
“It’s an ordinary bucket,” I say. “I used to climb trees with it… I don’t think this question needs to be difficult.”
“I’m talking to Autrey,” says Mark.
“You have to bind the bucket to the pastures, and the pebbles to the sheep, using a magical ritual – pardon me, an emergent process with special causal powers – that my master discovered,” Autrey explains.
Autrey then attempts to describe the ritual, with Mark nodding along in sage comprehension.
“And this ritual,” says Mark, “it binds the pebbles to the sheep by the magical laws of Sympathy and Contagion, like a voodoo doll.”
Autrey winces and looks around. “Please! Don’t call it Sympathy and Contagion. We shepherds are an anti-superstitious folk. Use the word ‘intentionality’, or something like that.”
“Can I look at a pebble?” says Mark.
“Sure,” I say. I take one of the pebbles out of the bucket, and toss it to Mark. Then I reach to the ground, pick up another pebble, and drop it into the bucket.
Autrey looks at me, puzzled. “Didn’t you just mess it up?”
I shrug. “I don’t think so. We’ll know I messed it up if there’s a dead sheep next morning, or if we search for a few hours and don’t find any sheep.”
“But—” Autrey says.
“I taught you everything you know, but I haven’t taught you everything I know,” I say.
Mark is examining the pebble, staring at it intently. He holds his hand over the pebble and mutters a few words, then shakes his head. “I don’t sense any magical power,” he says. “Pardon me. I don’t sense any intentionality.”
“A pebble only has intentionality if it’s inside a ma- an emergent bucket,” says Autrey. “Otherwise it’s just a mere pebble.”
“Not a problem,” I say. I take a pebble out of the bucket, and toss it away. Then I walk over to where Mark stands, tap his hand holding a pebble, and say: “I declare this hand to be part of the magic bucket!” Then I resume my post at the gates.
Autrey laughs. “Now you’re just being gratuitously evil.”
I nod, for this is indeed the case.
“Is that really going to work, though?” says Autrey.
I nod again, hoping that I’m right. I’ve done this before with two buckets, and in principle, there should be no difference between Mark’s hand and a bucket. Even if Mark’s hand is imbued with the elan vital that distinguishes live matter from dead matter, the trick should work as well as if Mark were a marble statue.
(The moral: In this sequence, I explained how words come to ‘mean’ things in a lawful, causal, mathematical universe with no mystical subterritory. If you think meaning has a special power and special nature beyond that, then (a) it seems to me that there is nothing left to explain and hence no motivation for the theory, and (b) I should like you to say what this extra nature is, exactly, and how you know about it—your lips moving in this, our causal and lawful universe, the while.)
It’s a nice parable and all, but it doesn’t seem particularly responsive to my concerns. I agree that we can use any old external items as tokens to model other things, and that there doesn’t have to be anything “special” about the items we make use of in this way, except that we intend to so use them. Such “derivative intentionality” is not particularly difficult to explain (nor is the weak form of “natural intentionality” in which smoke “means” fire, tree rings “signify” age, etc.). The big question is whether you can account for the fully-fledged “original intentionality” of (e.g.) our thoughts and intentions.
In particular, I don’t see anything in the above excerpt that addresses intuitive doubts about whether zombies would really have meaningful thoughts in the sense familiar to us from introspection.
“I toss in a pebble whenever a sheep passes,” I point out.
“When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?”
“It’s an interaction between the sheep and the pebbles,” I reply.
“No, it’s an interaction between the pebbles and you,” Mark says. “The magic doesn’t come from the sheep, it comes from you. Mere sheep are obviously nonmagical. The magic has to come from somewhere, on the way to the bucket.”
I point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that – it doesn’t work reliably – but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.”
Mark furrows his brow. “I don’t quite follow you… is the cloth magical?”
I shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket.”
And this responds to what I said… how?
I can build an agent that tracks how many sheep are in the pasture using an internal mental bucket, and keeps looking for sheep until they’re all returned. From an outside standpoint, this agent’s mental bucket is meaningful because there’s a causal process that correlates it to the sheep, and this correlation is made use of to steer the world into futures where all sheep are retrieved. And then the mysterious sensation of about-ness is just what it feels like from the inside to be that agent, with a side order of explicitly modeling both yourself and the world so that you can imagine that your map corresponds to the territory, with a side-side order of your brain making the simplifying assumption that (your map of) the map has a primitive intrinsic correspondence to (your map of) the territory.
In actuality this correspondence is not the primitive and local quality it feels like; it’s maintained by the meeting of hypotheses and reality in sense data. A third party or reflecting agent would be able to see the globally maintained correspondence by simultaneously tracing back actual causes of sense data and hypothesized causes of sense data, but this is a chain property involving real lattices of causal links and hypothetical lattices of causal links meeting in sense data, not an intrinsic quality of a single node in the lattice considered in isolation from the senses and the hypotheses linking it to the senses.
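Read concretely, the agent described above is just a counter causally coupled to the gate plus a search loop conditioned on that counter. Here is a minimal sketch, with all names (Shepherd, evening_search, fetch_one_sheep) invented purely for illustration:

```python
# Minimal sketch of the agent described above (illustrative only). The internal
# counter is "about" the sheep only in virtue of (a) the causal process that
# updates it when sheep pass the gate and (b) its use in steering behaviour
# toward futures where every sheep is retrieved.

class Shepherd:
    def __init__(self) -> None:
        self.mental_bucket = 0          # sheep currently out in the pasture

    def sheep_went_out(self) -> None:
        self.mental_bucket += 1         # causal link: an outgoing sheep bumps the counter

    def sheep_came_back(self) -> None:
        self.mental_bucket -= 1         # causal link: a returning sheep lowers it

    def should_keep_searching(self) -> bool:
        return self.mental_bucket > 0   # the counter is used to drive behaviour


def evening_search(shepherd: Shepherd, fetch_one_sheep) -> None:
    """Keep searching until the internal bucket says the pasture is empty."""
    while shepherd.should_keep_searching():
        if fetch_one_sheep():           # returns True whenever a sheep is found
            shepherd.sheep_came_back()
```

Nothing in the sketch is mysterious: the “about-ness” of mental_bucket is exhausted by the update rules and the loop that uses it.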
So far as I can tell, there’s nothing left to explain.
--
“At exactly which point in the process does the pebble become magic?” says Mark.
“It… um…” Now I’m starting to get confused. I shake my head to clear away cobwebs. This all seemed simple enough when I woke up this morning, and the pebble-and-bucket system hasn’t gotten any more complicated since then. “This is a lot easier to understand if you remember that the point of the system is to keep track of sheep.”
I agree with all of this… though I would ask one question, as I’m quite confused here. I think (pardon me if I’m putting words in anyone’s mouth) that the epiphenomenalist should agree that it’s all related causally, and that when the decision comes to say “I’ve noticed that I’ve noticed that I’m aware of a chair”, or something like that, it comes from causal relations. But that hasn’t located the… “subjective” or “first-person” “experience” (whatever any of those words ‘mean’).
I observe (through photons and my eyes and all the rest) the five sheep going through the gate, even though I miss a sixth, and I believe that the world is how I think it is, and I believe, mistakenly of course, that my vision is an intrinsic property of me in the world. Actually, when I say I’ve seen five sheep go through the gate, loads of processes below the level the conscious/speaking me is aware of are working away, just making the top-level stuff available—the stuff that evolution has decided it would be beneficial for me to be able to talk about.
That doesn’t mean I’m not conscious of the sheep, just that I’m mistaken about what my consciousness is, and what exactly it’s telling me.
Where does the ‘aware’ bit come in? The ‘feeling’? The ‘subjective’?
(My apologies if I’ve confused a well-argued discussion.)
How, precisely, does one formalize the concept of “the bucket of pebbles represents the number of sheep, but it is doing so inaccurately”? That is, that it’s a model of the number of sheep rather than of something else, but a bad/inaccurate model?
I’ve fiddled around a bit with that, and I find myself passing a recursive buck when I try to precisely reduce that one.
The best I can come up with is something like “I have correct models in my head for the bucket, pebbles, sheep, etc., individually, except that I also have some causal paths linking them that don’t match the links that exist in reality.”
See this thread for a discussion. A less buck-passing model is: “This bucket represents the sheep … plus an error term resulting from this here specific error process.”
For instance, if I systematically count two sheep exiting together as one sheep, then the bucket represents the number of sheep minus the number of sheep-pairs erroneously detected as one sheep. It’s not enough to say the sheep-detector is buggy; to have an accurate model of what it does (and thus, what its representations mean) you need to know what the bug is.
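A toy numerical check may make the error-term model concrete (the numbers and variable names here are invented for illustration, not taken from the thread):

```python
# Toy illustration of "bucket = sheep minus an error term from a specific error
# process" (all numbers invented). If the detector systematically merges pairs
# of sheep exiting together into a single count, then knowing *what* the bug is
# lets you invert it and recover the true count from the bucket.

sheep = 12                         # actual sheep that passed the gate
merged_pairs = 3                   # times two sheep exited together, counted as one

bucket = sheep - merged_pairs      # what the buggy counter records: 9
recovered = bucket + merged_pairs  # the bucket plus a model of the bug: 12

assert recovered == sheep
```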
I’m trying to figure out what work “meaning” is doing. Eliezer says brains are “thinking” meaningless gibberish. You dispute this by saying,
… mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts …
But what are brains thinking, if not thoughts?
And then
But the fact that the squiggles are about consciousness (or indeed anything at all) depends crucially upon the epiphenomenal aspects of our minds, in addition.
This implies that “about”-ness and “meaning” have roughly the same set of properties. But I don’t understand why anyone believes anything about “meaning” (in this sense). If it doesn’t appear in the causal diagram, how could we tell that we’re not living in a totally meaningless universe? Let’s play the Monday-Tuesday game: on Monday, our thoughts are meaningful; on Tuesday, they’re not. What’s different?
Right, according to epiphenomenalists, brains aren’t thinking (they may be computing, but syntax is not semantics).
If it doesn’t appear in the causal diagram, how could we tell that we’re not living in a totally meaningless universe?
Our thoughts are (like qualia) what we are most directly acquainted with. If we didn’t have them, there would be no “we” to “tell” anything. We only need causal connections to put us in contact with the world beyond our minds.
So if we taboo “thinking” and “computing”, what is it that brains are not doing?
You can probably give a functionalist analysis of computation. I doubt we can reductively analyse “thinking” (at least if you taboo away all related mentalistic terms), so this strikes me as a bedrock case (again, like “qualia”) where tabooing away the term (and its cognates) simply leaves you unable to talk about the phenomenon in question.
It sounds like “thinking” and “qualia” are getting the special privilege of being irreducible, even though there have been plenty of attempts to reduce them, and these attempts have had at least some success. Why can’t I pick any concept and declare it a bedrock case? Is my cat fuzzy? Well, you could talk about how she is covered with soft fur, but it’s possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it’s possible to imagine these things, clearly fuzziness must be non-physical. It’s maybe harder to imagine a non-fuzzy cat than to imagine a non-thinking person, but that’s just because fuzziness doesn’t have the same aura of the mysterious that thinking and experiencing do.
I don’t believe anyone has regarded thinking as causally irreducible for at least a century. Could you cite a partially successful reduction of qualia?
Read the parent of the comment you’re responding to.
Dennett: Consciousness Explained.
That was elimination.
Yes, that sometimes happens when you reduce something; it turns out that there’s nothing left. Nobody would say that there is no reductionist account of phlogiston.
That may be so (though I agree with Peter, that reduction and elimination are different), but regardless Dennett’s actual argument is not a reduction of qualia to more simple terms. He argued (mostly on conceptual grounds) that the idea of qualia is incoherent. Even if elimination (in the manner of phlogiston) were reduction, Dennett’s argument wouldn’t be a case of either.
OK, I think I agree with this view of Dennett. I hadn’t read the book in a while, and I conflated his reduction of consciousness (which is, I think, a genuine reduction) with his explanation of qualia.
I would. Reduction and elimination are clearly different. Heat was reduced, phlogiston was eliminated. There is heat. There is no phlogiston.
So in this case, in your view, subjective experiences would be reduced, while qualia would be eliminated?
I am not saying that all posits are doomed to elimination, only that what is eliminated tends to be a posit rather than a prima facie phenomenon. How could you say that there is no heat? I also don’t agree that qualia are posits... but Dennett of course needs to portray them that way in order to eliminate them.
I don’t think I understand what you think is and isn’t a “posit”. “Cold” is a prima facie phenomenon as well, but it has been subsumed entirely into the concept of “heat”.
The prima-facie phenomenon of “cold” (as in “your hands feel cold”) has been subsumed under the scientific theory of heat-as-random-molecular-motion. That’s reduction. It was never eliminated in favour of the prima-facie phenomenon of heat, as in “This soup is hot”.
Only minorly. We could just as well still talk about phlogiston, which is just negative oxygen. The difference between reduction and elimination is just that in the latter, we do not think the concept is useful anymore. If there are different “we”s involved, you might have the same analysis result in both.
Not very meaningfully. What does that mean in terms of modern physics? Negatively ionised oxygen? Anti-oxygen? Negatively massive oxygen?
The difference between reduction and elimination is just that in the latter, we do not think the concept is useful anymore.
Well, that’s a difference.
Only minorly.
Is it minority opinion that reductive materialism and eliminative materialism are different positions?
“The reductive materialist contrasts the eliminativist more strongly, arguing that a mental state is well defined, and that further research will result in a more detailed, but not different understanding.[3]”—WP
Heat was reduced, phlogiston was eliminated. There is heat. There is no phlogiston.
That is the reductionist account of phlogiston. The grandparent didn’t claim that everyone would agree that there is a reduction of phlogiston that makes sense. The result of reduction is that phlogiston was eliminated. Which sometimes happens when you try to reduce things.
This is what the grandparent was saying. You were in agreement already.
It’s an elimination. If it were a reduction, there would still be phlogiston, as there is still heat. The reductive explanation of combustion did not need phlogiston as a posit, so it was eliminated. Note the difference between phlogiston, a posit, and heat/combustion, which are prima-facie phenomena. Nobody was trying to reductively explain phlogiston; they were trying to explain heat with it.
I disagree.
Please, just read this.
It depends on what you mean by ‘thinking’, but I think the view is pretty widespread that rational relations (like the relation of justification between premises and a conclusion) are not reducible to any physical relation in such a way that explains or even preserves the rational relation.
I’m thinking of Donald Davidson’s ‘Mental Events’ as an example at the moment, just to illustrate the point. He would say that while every token mental state is identical to a token physical state, and every token mental causal relation (like a relation of inference or justification) is identical to a token physical causal relation...
...nevertheless, types of mental states, like the thought that it is raining, and types of mental causal relations, like the inference that if it is raining, and I don’t want to get wet, then I should bring an umbrella, are not identical to types of physical states or types of physical causal relations.
This has the result that 1) we can be assured that the mind supervenes on the brain in some way, and that there’s nothing immaterial or non-physical going on, but 2) there are in principle no explanations of brain states and relations which suffice as explanations of anything like thinking, reasoning, inferring, etc.
Davidson’s views are widely known rather than widely accepted, I think. I don’t recall seeing them being used for a serious argument for epiphenomenalism, though I can see how they could be, if you tie causality to laws. OTOH, I can see how you could argue in the opposite direction: if mental events are identical to physical events, then, by Leibniz’s law, they have the same causal powers as physical events.
Well, you could talk about how she is covered with soft fur, but it’s possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it’s possible to imagine these things, clearly fuzziness must be non-physical.
Erm, this is just poor reasoning. The conclusion that follows from your premises is that the properties of fuzziness and being-covered-in-fur are distinct, but that doesn’t yet make fuzziness non-physical, since there are obviously other physical properties besides being-covered-in-fur that it might reduce to. The simple proof: you can’t hold ALL the other physical facts fixed and yet change the fuzziness facts. Any world physically identical to ours is a world in which your cat is still fuzzy. (There are no fuzz-zombies.) This is an obvious conceptual truth.
So, in short, the reason why you can’t just “pick any concept and declare it a bedrock case” is that competent conceptual analysis would soon expose it to be a mistake.
No, I’m saying that you could hold all of the physical facts fixed and my cat might still not be fuzzy. This is somewhat absurd, but I have a tremendously good imagination; if I can imagine zombies, I can imagine fuzz-zombies.
More than that, it’s obviously incoherent. I assume your point is that the same should be said of zombies? Probably reaching diminishing returns in this discussion, so I’ll just note that the general consensus of the experts in conceptual analysis (namely, philosophers) disagrees with you here. Even those who want to deny that zombies are metaphysically possible generally concede that the concept is logically coherent.
More than that, it’s obviously incoherent. I assume your point is that the same should be said of zombies?
On reflection, I think that’s right. I’m capable of imagining incoherent things.
I’ll just note that the general consensus of the experts in conceptual analysis (namely, philosophers) disagrees with you here.
I guess I’m somewhat skeptical that anyone can be an expert in which non-existent things are more or less possible. How could you tell if someone was ever correct—let alone an expert? Wouldn’t there be a relentless treadmill of acceptance of increasingly absurd claims, because nobody wants to admit that their powers of conception are weak and they can’t imagine something?
If we can’t even get a start on that, how did we get a start on building AI?
I’m not sure I follow you. Why would you need to analyse “thinking” in order to “get a start on building AI”? Presumably it’s enough to systematize the various computational algorithms that lead to the behavioural/functional outputs associated with intelligent thought. Whether it’s really thought, or mere computation, that occurs inside the black box is presumably not any concern of computer scientists!
Because thought is essential to intelligence. Why would you need to analyse intelligence to get a start on building artificial intelligence? Because you would have no idea what you were trying to do if you didn’t.
Presumably it’s enough to systematize the various computational algorithms that lead to the behavioural/functional outputs associated with intelligent thought.
I fail to see how that is not just a long-winded way of saying “analysing thought”.