Projecting the ontology of your (flawed) internal representations onto reality is a bad idea. “Doing a Dennett” is also not dealt with, except by incredulity.
It’s a fact that the individual shades of color exist, however it is that we group them—and your ontology must contain them, if it pretends to completeness.
This is simply not the case. The fact that we can compare two stimuli more accurately than we can identify a single stimulus merely means that internally we represent reality with less fidelity than our senses could theoretically achieve. On a reductionist view, at most you’ve established that “greater than” and “round to nearest” are implemented in neurons. You do not need to have colour.
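A toy simulation of that point (the model, noise level, and numbers are my own assumptions, purely illustrative, not a claim about real psychophysics): a single noisy internal code supports pairwise comparison far more reliably than absolute identification.

```python
# Toy sketch only: a 1-D "stimulus" is stored internally with Gaussian noise.
# The same noisy code answers "which of these two is greater?" more reliably
# than "which of the K categories is this?". All numbers are arbitrary.
import random

NOISE = 0.6                      # std dev of internal noise (assumed)
CATEGORIES = list(range(10))     # ten evenly spaced identifiable "shades"
TRIALS = 10_000

def represent(x):
    """Internal, lossy representation of a stimulus."""
    return x + random.gauss(0, NOISE)

def identify(x):
    """'Round to nearest': map one noisy code back to a category."""
    r = represent(x)
    return min(CATEGORIES, key=lambda c: abs(r - c))

def compare(x, y):
    """'Greater than': compare two noisy codes directly."""
    return represent(x) > represent(y)

id_correct = sum(identify(c) == c for c in random.choices(CATEGORIES, k=TRIALS))
cmp_correct = 0
for _ in range(TRIALS):
    a, b = random.sample(CATEGORIES, 2)
    cmp_correct += compare(a, b) == (a > b)

print(f"identification accuracy: {id_correct / TRIALS:.2f}")
print(f"comparison accuracy:     {cmp_correct / TRIALS:.2f}")
```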
Let’s unpack “blueness”. It’s a property we ascribe to objects, yet it’s trivial to “conceive of” blueness independent of an object. Neurologically, we process colour, motion, edge finding and so on in parallel; the linking of them together occurs at a higher level. Furthermore, the brain fakes much of the data, giving the perception of colour vision, for example, in regions of the visual field where no ability to discriminate colour exists, and there are cases of blindness with continued conscious perception of colour.
Brains compress input extensively; it would be wasteful to worry about the motion of every spot on a leopard separately—block them up as a single leopard. Assuming that the world fits our hallucination of reality lets you see things that are only marginally visible, and get by with far worse sensory apparatus than would otherwise be needed. Cue optical illusions: this, this and this, for example. Individual shades don’t exist as you want them to.
It is absurdly clear that the map your brain makes does not correspond to either the territory of your direct sense perception (at the retina) or to reality. On precisely what basis do you presume to project from the ontology of a bad map onto the territory?
“Blue” refers to properties of internal representations, and that reference is translatable across multiple instances of primate brains. You say “X is blue”, and I can check my internal representation of X to see whether I would categorise it as “blue”. This does not require “blue” to be fundamental in ontology. There isn’t a “blue thing” in physics, nor should there be. “Blue” existing means simply that there are things which this block of wetware puts in some equivalence class.
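As a minimal sketch of the equivalence-class idea (the wavelength band and observer names are invented for illustration): nothing in the physical description is itself “blue”; the label only exists as the class each classifier computes, and two classifiers can agree often enough for the word to be shared.

```python
# Toy sketch: "blue" as an equivalence class imposed by an observer, not a
# property found in the physics. The "physics" below is just a wavelength in
# nanometres; the band boundaries are invented for illustration.

def observer(blue_band):
    """Build a classifier whose notion of 'blue' is the given wavelength band."""
    lo, hi = blue_band
    return lambda wavelength_nm: "blue" if lo <= wavelength_nm <= hi else "not blue"

alice = observer((450, 495))   # two primate brains with slightly
bob = observer((445, 500))     # different category boundaries

for wavelength in (460, 480, 497, 530, 620):
    a, b = alice(wavelength), bob(wavelength)
    verdict = "agree" if a == b else "disagree"
    print(f"{wavelength} nm -> Alice: {a:8s} Bob: {b:8s} ({verdict})")

# Nothing in the number 460 is itself "blue"; the label exists only as the
# equivalence class each classifier computes, and the classifiers agree often
# enough for the word to be translatable between them.
```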
Let’s move on to computation:
But if the “computational state” of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?
Again, you seem to project from an internal map of your own brain to the territory. Simply because I can look at a computer at multiple levels, say: starting Excel, API calls, machine instructions, microcode, functional units on the CPU, adders/multipliers/whatever on the CPU, logic gates, transistors, current flows or probability masses in the field of electrons, does not in principle invalidate any of the above views as correct views of an operating computer. The observer dependence isn’t an issue if (modulo translation/equivalence classes for abstraction between languages) they all give the same function or behaviour. You can block things up as many low-level behaviours or a smaller number of high-level ones; this doesn’t invalidate a computational view. What is the computation implemented by starting Excel? What details do you care about? It doesn’t matter to a functionalist, as the computations are equivalent, albeit in different languages or formalisms.
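To make the point concrete, here is a minimal sketch (the implementations are mine and purely illustrative): the same addition described at a high level and at a bit-twiddling level, with neither description invalidating the other.

```python
# The same addition described at two "levels". Neither description is more
# correct than the other; they are related by a translation (here, the binary
# encoding of non-negative integers).

def add_high_level(a, b):
    """High-level view: 'add the two numbers'."""
    return a + b

def add_low_level(a, b):
    """Lower-level view: ripple the carries through with bitwise operations,
    roughly what an adder circuit does."""
    while b:
        carry = a & b
        a = a ^ b
        b = carry << 1
    return a

for a, b in [(2, 3), (7, 9), (1024, 4096)]:
    assert add_high_level(a, b) == add_low_level(a, b)
print("Both descriptions compute the same function.")
```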
The critique of aboutness is similar to your issues over colour. You perceive “X is about Y” and thus assume it to be ontologically fundamental. Semantic content is a compressed and inaccurate rendition of low-level states: useful for communicating and processing if you don’t care about the details. Indeed, the only reason we care about this kind of semantics is that our own wetware implements theory-of-mind directly. That’s a good idea for predicting cognitive agents; not necessarily a true statement about the world. The “Y” that “X” is “about” is another compression—an inferred property of a model.

“Time” is as flexible as your neural architecture wants it to be. Causality is a good idea, for Darwinian reasons, but people’s perception of the flow of time is adjustable. I will point out that your senses strongly imply that the world is a 2D surface. Have you ever been able to see behind an object without moving your head? I haven’t either; therefore clearly this 3D stuff is bunkum—the world is a flat plane and I directly perceive part of one side of it. Ditto time. Causality limits the state of a cognitive thing to depend on its previous states and its light cone at this point in space-time; you perceive time to flow because you can remember previous brain states, and depending on them (compressed somewhat) is good for survival.

And now for unity of consciousness. It isn’t unitary. Multiple personality, dissociative disorders, blindsight, sleepwalking, alien hand syndrome, need I go on? I perceive my own representation of reality to be unitary; I know for a fact that it’s half made up. You claim that the individual issues “just can’t” be the whole story. Why? Personal incredulity isn’t an argument. The brain in the skull you call yours isn’t just running a single cognitive entity. You move before even realising “you” were going to; you are unconscious of breathing until you decide to be. Why is a unitary consciousness fundamental? Why isn’t it just a shortcut used to approximate “you” and others in planning the future and figuring out the present?

Re. blueness: Mitchell is talking about qualia. Google the hard problem of consciousness.
Furthermore the brain fakes much of the data, giving the perception of colour vision, for example, in regions of the visual field where no ability to discriminate colour exists,
Just a note—I don’t disagree with your point; but the claim that we can’t discriminate color in our peripheral vision is simply false. I’ve done some informal experiments with this, because I was puzzled that textbooks say that our peripheral vision is primarily due to rods, which can’t detect color; yet I see color in my peripheral vision.
If I stand with my nose and forehead pressed against the wall, holding a stack of shuffled yellow and red sheets of origami paper behind my back, close my eyes, and then hold one sheet up in each of my outstretched arms, and open my eyes, so that the sheets are each 90 degrees out from my central vision and I see them both at the same time, I can distinguish the two colors 100% of the time.
There’s a serious problem with resolution, but color doesn’t seem to be affected in any way that I can detect by central vs. peripheral vision.
Are the two sheets of the same apparent intensity to a rod? If they’re not, you’ll guess correctly based on apparent brightness, and your brain fills in the colour based on memory of which colours of paper are around.
There are cones at low densities out to the periphery, but at levels too low to be reliable sources. For example, this notes that some monochromatic light is misidentified peripherally but not foveally, and that frequency discrimination drops by a factor of 50 or so.

Would be interesting to see you do this on video with a second person shuffling and displaying the cards.
Noting Jonathan_Lee’s remarks, a suggestion for an experiment: place a monitor in the peripheral vision of the experimental subject which, at regular intervals, shows a random RGB color. The subject is to press a key indicating the perceived color (e.g. [R]ed, [Y]ellow, [B]lue, [O]range, [G]reen, [P]urple, [W]hite, [B]lack) each time the color changes (perhaps with an audio cue?). Compare results to the same experiment with the monitor directly in front.
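A rough sketch of that experiment using only the Python standard library; the key mapping, the three-second interval, and the logging format are my own assumptions rather than part of the suggestion above, and “black” is keyed as K to avoid colliding with “blue”.

```python
# Rough sketch of the suggested setup, standard library only (tkinter).
# Key mapping, interval, and log format are assumptions on my part.
import random
import time
import tkinter as tk

KEYS = {"r": "red", "y": "yellow", "b": "blue", "o": "orange",
        "g": "green", "p": "purple", "w": "white", "k": "black"}
INTERVAL_MS = 3000

class ColorTrial:
    def __init__(self, root):
        self.root = root
        self.shown = None          # hex colour currently displayed
        self.shown_at = None
        self.log = []              # (shown_hex, keyed_name, seconds_to_answer)
        root.bind("<Key>", self.on_key)
        self.next_color()

    def next_color(self):
        r, g, b = (random.randint(0, 255) for _ in range(3))
        self.shown = f"#{r:02x}{g:02x}{b:02x}"
        self.root.configure(bg=self.shown)
        self.root.bell()           # audio cue on each change, as suggested
        self.shown_at = time.time()
        self.root.after(INTERVAL_MS, self.next_color)

    def on_key(self, event):
        name = KEYS.get(event.char.lower())
        if name and self.shown_at is not None:
            self.log.append((self.shown, name, time.time() - self.shown_at))

root = tk.Tk()
root.geometry("500x500")
trial = ColorTrial(root)
root.mainloop()
# Run once with the monitor in peripheral vision, once directly in front,
# and compare the accuracy of trial.log entries between the two runs.
```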
It seems you agree that colors, the flow of time, meanings, and the unity of experience all appear to be there. The general import of your remarks is that reality isn’t actually like that, it only appears to be like that. You state how things are, and then you state what’s happening in the brain in order to create certain appearances. Color and time are external appearances, meaning and unity are internal appearances.
Some of what you say about the imperfections of conscious representations is not an issue for me. The fidelity of the mapping between external states and conscious states only has an incidental bearing on the nature of the conscious states themselves. Whether color is sometimes hallucinated is not the issue. Whether a color is nothing but an equivalence class is the issue.
In this regard I have observed a number of positions taken. Some people are at the stage of saying, color is a neural classification and I don’t see any further problem. Some people say that color is how it “feels” to make such a classification. Since Dennett takes care to deny that there is anything that is actually colored, even in the mind, and that there are only words, dispositions to classify, and so forth, he arguably wishes to deny even that there is a “color feeling”, though it’s hard to be sure.
My position is very simple. All these things (color, time, meaning, unity) exist in consciousness; which means that they exist in at least one part of reality. The elements and the modes of combination offered by today’s scientific ontology do not suffice to generate them. Therefore, today’s scientific ontology is wrong, incomplete, however you want to put it.
So if we are to have a discussion, you need to say less about the imperfections of consciousness as a medium of representation, and more about the medium itself. Do you agree that color, time, meaning, unity exist in consciousness? If so, can you identify the physical or computational property which supposedly corresponds to, or is identical to, “appearing to experience” each of these various phenomena?
Since Dennett takes care to deny that there is anything that is actually colored, even in the mind, and that there are only words, dispositions to classify, and so forth, he arguably wishes to deny even that there is a “color feeling”, though it’s hard to be sure.
Citation or he didn’t say it. Daniel Dennett coined the phrase “greedy reductionism”—partially to emphasize that he does not deny the existence of color, consciousness, etc. Unless you know of some place where he reversed his position, I would argue that you have misinterpreted his remarks. My understanding is that his position is that color is an idiosyncratic property of the human visual perception system with no simple referent in physics, not no referent at all.
(I normally wouldn’t make such a big deal of it, but Dennett is one of the major figures on the physicalist side of this debate, and a mischaracterization of his views impedes the ability of bystanders to perform a fair comparison.)

In Consciousness Explained, chapter 2, p. 28, Dennett says there is no purple in the brain when we see purple. That may be what he means.

I also heard Dennett quoted as saying there is no such thing as qualia, allegedly in “The Taboo of Subjectivity”, p. 139, which I don’t have.
Here is a full quote that makes clear in exactly what sense he doesn’t believe in qualia:
So when we look one last time at our original characterization of qualia, as ineffable, intrinsic, private, directly apprehensible properties of experience, we find that there is nothing to fill the bill. In their place are relatively or practically ineffable public properties we can refer to indirectly via reference to our private property-detectors — private only in the sense of idiosyncratic. And insofar as we wish to cling to our subjective authority about the occurrence within us of states of certain types or with certain properties, we can have some authority — not infallibility or incorrigibility, but something better than sheer guessing — but only if we restrict ourselves to relational, extrinsic properties like the power of certain internal states of ours to provoke acts of apparent re-identification. So contrary to what seems obvious at first blush, there simply are no qualia at all.
Originally in Quining Qualia, 1988, by Dennett, and quoted on Multiple-Drafts Model.
The Taboo of Subjectivity is a book by B. Alan Wallace. It appears that Dennett wrote a review for that work, but I couldn’t find it online. Are you referring to that review, or to something else?
I see what you meant now. Dennett was quoted in Wallace’s book, on p.139. Sorry for the misunderstanding.
The quote, with some context, is:
Paul Churchland, one of the most prominent advocates of this view [eliminative materialism], declares that commonsense experience is probably irreducible to, and therefore incommensurable with, neuroscience; and for this reason familiar mental states should be regarded as nonexistent or at most as “false and misleading”^18. For similar reasons, philosopher Daniel Dennett bluntly asserts: “[t]here simply are no qualia at all.”^19
18 . Paul M. Churchland, 1990, Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind, p. 41 & 48.
19 . Daniel C. Dennett, 1991, Consciousness Explained, p. 74.
Belatedly: I think the reference on p. 28 is pointing out that the brain doesn’t turn purple (and a purple brain wouldn’t help anyway, as there are no eyes in the brain to see the purple). The remainder of the page extends the example to further elaborate the problem of subjective experience.
I cannot find the qualia quote cited in The Taboo of Subjectivity at all—p. 74 is before Dennett even defines qualia, and p. 374 does not have those exact words—only the conclusion of a thought experiment illuminating his rejection of the concept.

Thanks for the page number—I’ll see if I can find it in my copy when I get home.
Upvoted. I searched Google for about 15 seconds looking for the quote and didn’t find it, but I remember seeing or hearing Dennett say once how flabbergasted he is at being thought of as “oh yeah, that guy who thinks we don’t see color.”
He does not use the expression “color feeling”, but here’s a direct quote from Consciousness Explained, chapter 12, part 4:
You seem to be referring to a private, ineffable something-or-other in your mind’s eye, a private shade of homogeneous pink, but this is just how it seems to you, not how it is… what it turns out to be in the real world in your brain is just a complex of dispositions.
He explicitly denies that there is any such thing as a “private shade of homogeneous pink”—which I would consider a reasonably apt description of the phenomenological reality. He also says there is something real, a “complex of dispositions”. And, he also says that when we refer to color, we think we’re referring to the former, but we’re really referring to the latter. So, subjective color does not exist, but references to color do exist.
That still leaves room for there to be “appearances of pink”. No actual pink, but also more than a mere belief in pink; some actual phenomenon, appearance, component of experience, which people mistakenly think is pink. But I see no trace of this. The thing which he is prepared to call real, the “complex of dispositions”, is entirely cognitive (in the previous paragraph he refers to “innate and learned associations and reactive dispositions”). There is no reference to appearance, experience, or any other aspect of subjectivity.
Therefore, I conclude that not only does Dennett deny the existence of color (yes, I know he still uses the word, but he explicitly defines it to refer to something else), he denies that there is even an appearance of color, a “color feeling”. In his account of color phenomenology, there are just beliefs about nonexistent things, and that’s it.
So, subjective color does not exist, but references to color do exist.
The references to red together definitely form a physical network in my brain, right? I have a list of 10,000 things in my memory that are vividly red, some more vivid than others, and they’re all potentially connected under this label ‘red’. When that entire network is stimulated (say, by my seeing something red or imagining what “red” is), might I not also give that a label? I could call the stimulation of the entire network the “essence of red” or “redness” and have a subjective feeling about it.
I’m certain this particular theory about what “redness” is occurs frequently. My question is, what’s missing in this explanation from the dualist point of view? Why can’t the subjective experience of red just be the whole network of red associations being simultaneously excited as an entity? (A toy sketch of what I mean is included below.)
Above you wrote
Some people are at the stage of saying, color is a neural classification and I don’t see any further problem.
So I guess I’m just asking, what’s the further problem? (If you’ve already answered, would you please link to it?)
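For what it’s worth, here is a toy spreading-activation sketch of the “network of red associations” idea from the comment above; the node names and weights are invented, and the sketch only illustrates the proposal rather than arguing for it.

```python
# Toy spreading-activation sketch: "redness" as the joint excitation of a
# network of red associations. Node names and weights are invented.
RED_ASSOCIATIONS = {
    "fire truck": 0.9, "ripe tomato": 0.8, "stop sign": 0.9,
    "blood": 0.7, "sunset": 0.5,
}

def activate(cue, strength=1.0):
    """Seeing one red thing excites the 'red' label, which in turn partially
    excites every other stored red association."""
    label_activation = strength * RED_ASSOCIATIONS.get(cue, 0.0)
    network = {item: label_activation * weight
               for item, weight in RED_ASSOCIATIONS.items() if item != cue}
    return label_activation, network

label, network = activate("ripe tomato")
print(f"'red' label activation: {label:.2f}")
print("co-activated associations:", network)
# On the proposal above, the subjective "essence of red" would just be a
# further tag attached to this whole pattern of co-activation.
```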
What are in those ellipses? In what you quote, I see that he’s denying that it’s “a private, ineffable something-or-other in your mind’s eye”. From what else I’ve read of Dennett, I’m sure that he has a problem with the “private” and “ineffable” part. Is it so clear that he has a problem with the “component of experience” part?
In the book, a character called Otto advocates the position that qualia exist. The full passage is Dennett making his case to Otto once again:
What qualia are, Otto, are just those complexes of dispositions. When you say “This is my quale”, what you are singling out, or referring to, whether you realize it or not, is your idiosyncratic complex of dispositions. You seem to be referring to a private, ineffable something-or-other in your mind’s eye, a private shade of homogeneous pink, but this is just how it seems to you, not how it is. That “quale” of yours is a character in good standing in the fictional world of your heterophenomenology, but what it turns out to be in the real world in your brain is just a complex of dispositions.
And how would you answer that passage of Dennett’s?

“Dear Dan—the shade of pink is real. In denying its existence, you are getting things backwards. The important methodological maxim to remember is that appearances are real. This does not mean that every time there is an appearance of an apple, there is an apple. It just means that every time there is an appearance of an apple, there is an appearance of an apple. It also does not mean that every time someone thinks there is an appearance of an apple, there is one. People can be mistaken in their auto-phenomenology—but not as mistaken as you would have us believe.”
Husserl, who was only concerned with getting phenomenology right and not with any underlying ontology, had a “principle of principles” which expresses the first half of what I mean by “appearances are real”:
everything originarily offered to us in “intuition” is to be accepted simply as what it is presented as being, but also only within the limits in which it is presented there.
In Husserl, every mode of awareness is a form of intuition, including sense perception. He’s saying that every appearance has an element of certainty, but only an element.
Appealing to Husserl may be overkill, but the point is, there is a limit to the degree one can plausibly deny appearance. Denying the existence of color in the way Dennett appears to be doing is like saying that 0 = 1 or that nothing exists—it’s only worth doing as an exercise in cognitive extremism; try believing something impossible and see what happens.
However, people do end up believing weird things out of apparent philosophical necessity. I think this is what is going on with Dennett; he does understand that there is nothing like that shade of pink in standard physical ontology, so rather than engage in a spurious identification of pinkness with some neural property, he just says there is no pink. It’s just a word. It’s there to denote a bundle of cognitive and behavioral dispositions. But there is no pink as such, outside or inside the head.
He’s willing to take this drastic step because the truth of physics seems so nailed down, so indisputable. However, there is a sense in which we do not know what physics is about. It’s a black-box causal structure, whose inputs and outputs show up in our experience looking a certain way (looking like objects distributed in space). But that doesn’t tell us how they are in themselves.
If you take the Husserlian principle ontologically—conscious experience is offering us a glimpse of the genuine nature of one small sliver of reality, namely, what happens in consciousness—and combine it with a general commitment to the causal structure of physics, you get what I’m now calling Reverse Monism. Reverse, because it’s the reverse of the usual reductionism. The usual reductionism says this appearance, this part of consciousness, is actually atoms in space doing something. Reverse monism says instead: this appearance must be what some part of physics (some part of the physical brain) actually is.
If the usual reductionistic accounts of conscious experience were plausible as identities, reverse monism wouldn’t introduce anything new; it would just be looking at the same identity from the other end of the equation. However, the only thing these alleged identities have going for them, generally, is a common causal role. The thing which is supposed to be the neural correlate of blueness is in the right position to be caused by blue light and to get a person talking about blueness. But the thing in itself (e.g. cortical neurons firing) is nothing like blueness as such.
Now as it happens, all these theories about the neural correlates of consciousness (such as Drescher’s gensyms) are speculative in the extreme. We’re not talking about anything as well-founded as the Krebs cycle or the inverse square law; these are speculations about how the truth might be. So we are not under any obligation to consider the mismatch between subjective ontology and neural ontology which occurs in these theories as itself an established fact, that we just have to learn to live with. We are free to look for other theories in which an ontologically plausible identity, and not just a causally adequate identity, is posited. That’s what I’m on about.
Husserl couldn’t know what Dennett knows about the biology, psychology and evolutionary history of color perception.

Time and again you sweep aside the “bundle of cognitive and behavioral dispositions” Dennett refers to in his reply to Otto, in your appeal to the primacy of “redness” or “pinkness”.
This has some intuitive appeal, because “red” and “pink” are short words and refer to something we experience as simple. Your position would be much harder to defend if you were looking for “the private, ineffable feeling of reading Lesswrong.com” as one commenter suggested: people would have an easier time denying the existence of that.
Yet—even though I’m not entirely sure that’s what this commenter had in mind—I would say there is only a difference of degree, not of kind, between “the feeling of redness” and “the feeling of reading Lesswrong”. The feeling of seeing the color red really is a complex of dispositions, something cobbled together from many parts over our long evolutionary history. The more we learn about color, the more complex it turns out to be. It only feels simple because it’s a human universal.
The “feeling of reading LessWrong” can be analysed in great detail. There’s a classic work of phenomenology, Roman Ingarden’s The Literary Work of Art, which goes into the multiple “strata” of meaning which turn the examination of small black shapes on white paper into the imagination of a possible world. Participating in a discussion like this involves a stream of complex intentional experiences against a steady background of embodied sensation.
Color experience is certainly not beyond further analysis, even at the phenomenological level. The three-dimensional model of hue, saturation, and intensity is a statement about the nature of subjective color. The idea that experiences are ineffable is just wrong. We’re all describing them every day.
No amount of intricate new knowledge about the way that color perception varies or the functions that it performs can actually abolish the phenomenon. And most materialists don’t try to abolish it, they try to identify it with something material. I think Dennett is trying to abolish phenomena as realities, in favor of a cognitive behaviorism, but that is really a topic for Dennett interpreters.
Instead, I want to know about your phenomenology of color. I assume that in fact you have it. But I’m curious to know, first, whether you’ll admit to having it, or whether you prefer to talk about your experience in some other way; and second, how you describe it. Do you look at color and think “I’m seeing a bundle of dispositions”? Do you tell yourself “I’m not actually seeing it, I’m just associating the perceptual object with a certain abstract class”?
I’m not sure I ever “look at color” in isolation. There are colors and arrangements of color that I like and that I’ll go out of my way to experience; I’m looking forward to an exhibition of Soulages’ work in Paris, for instance.
When I look at a Soulages painting my inner narrative is probably something like “Wow, this is black… a luminous black which emphasizes straight, purposive brushstrokes in a way that’s quite different from any other painter’s use of color I’ve seen; how puzzling and delightful.” It’s different from the reflective black of my coffee cup nearby, the matte black of my phone handset or the black I see when I close my eyes. When I see my coffee cup I’m mostly seeing the reflections, when I see the handset it’s the texture that stands out, when I close my eyes the black is a background to a dance of random splotches and blobs.
When I think about my perception of black in all the above instances I am certainly thinking in terms of dispositions and of abstract tags. There isn’t a unitary “feeling of black” that persists after these various experiences of things I now call black.
External only in that the wetware is modelling something outside of the skull, rather than its internal state. The intent was to say that merely because you perceive reality along certain ontological lines does not imply that reality has the same ontology.
This should be particularly obvious when your internal sense fails to correspond to reality; if conscious states are an imperfect guide to external states then why should the apparent ontology of consciousness be accurate?
In this regard I have observed a number of positions taken.
None of which you refute here or in the OP, especially those who deny that “blueness” is a veridical property of reality.
All these things (color, time, meaning, unity) exist in consciousness; which means that they exist in at least one part of reality.
No; it means that something referencing them exists in some part of reality (your skull). An equivalence relation; an internal tag that this object is blue.
To counter the realism, consider mathematicians, who consciously deal in infinite sets, or all theorems provable under some axioms (model theory). Just because something appears plainly to you does not mean it exists. Kant says it better than I can.
Do you agree that color, time, meaning, unity exist in consciousness?
Not if you mean more than perception by consciousness. Even in perception, they’re just the ontology imposed by our neurology, and have neural correlates that suffice.
Consciousness isn’t prior to perception or action; it’s after it. There isn’t a homunculus in there for experience to “appear to”. If anything, there’s a compressed model of your own behaviour into which experience is fed; that’s the “you” in the primate—a model of that same primate for planning and counterfactual reasoning.
Do you agree that color, time, meaning, unity exist in consciousness?
Not if you mean more than perception by consciousness. Even in perception, they’re just the ontology imposed by our neurology, and have neural correlates that suffice.
Let’s suppose I have a hallucinatory perception of a banana. So, there’s no yellow object outside my skull—we can both agree on that. It seems we also agree that I’m having a yellow perception.
But we part ways on the meaning of that. Apparently you think that even my hallucination isn’t really yellow. Instead, there’s some neural thing happening which has been tagged as yellow—whatever that means.
I really wonder about how you interpret your own experience. I suppose you experience colors just like I do, but (when you think about it) you tell yourself that what naively seems to be a matter of seeing a yellow object is actually experiencing what it’s like to have a perception tagged as yellow. But how does that translate, subjectively? When you see yellow, do you tell yourself you’re seeing the tag? Do you just semi-visualize a bunch of neurons firing in a certain way?
I suppose you experience colors just like I do, but (when you think about it) you tell yourself that what naively seems to be a matter of seeing a yellow object is actually experiencing what it’s like to have a perception tagged as yellow. But how does that translate, subjectively? When you see yellow, do you tell yourself you’re seeing the tag? Do you just semi-visualize a bunch of neurons firing in a certain way?
We went over this issue a bit in the previous discussion. My response (following Drescher) was: “To experience [yellow] is to feel your cognitive architecture assigning a label to sensory data.”
As I elaborated:
… the phenomenal experience of blue is what it is like to be a program that has classified incoming data as being a certain kind of light, under the constraint of having to coherently represent all of its other data (other colors, other visual qualities, other senses, other combined extrapolations from multiple senses, etc) but with limited comparison abilities.
The point being: I can’t give a complete answer now, but I can tell you what the solution will look like. It will involve describing how a cognitive architecture works, then looking at the distinctions it has to make, then looking at what constraints these distinctions operate under (e.g. color being orthogonal to sound [unless you have synaesthesia], etc.), then identifying what parts of the process can access each other.
Out of all of that, only certain data representations are possible, and one of these (perhaps, hopefully, the only one) is the one with the same qualities as our perception of color. You know you’re at the solution, when you say, Aha! If I had to express what information I receive, under all those constraints, that is what qualities it would need to have.
Though you object to the comparison, this is the same kind of error as demanding that there be a fundamental “chess thing” in Deep Blue. There is no fundamental color, just as there is no fundamental chess. There is only a regularity the system follows, compressible by reference to the concept of color or chess.
I suppose you experience colors just like I do, but (when you think about it) you tell yourself that what naively seems to be a matter of seeing a yellow object is actually experiencing what it’s like to have a perception tagged as yellow.
I am intrigued by your wording, here. I suppose I experience colors just like you do, but—when I think about it—I tell myself that what is, in fact, seeing a yellow object is, in fact, the same thing as experiencing what it’s like to have a perception tagged as yellow. I believe these descriptions to be equivalent in the same sense that “breaking of hydrogen bonds between dihydrogen monoxide molecules, leading to those molecules traveling in near-independent trajectories outside the crystalline structure” is equivalent to “ice sublimating”.
But we part ways on the meaning of that. Apparently you think that even my hallucination isn’t really yellow. Instead, there’s some neural thing happening which has been tagged as yellow—whatever that means.
The relevant part of the visual cortex which fires on yellow objects has fired; the rest of your brain behaves as if there were a yellow banana out in front of it. “Tagging” seemed like the best high-level term for it. A collection of stimuli is being bound together as an atomic thing. There’s a neural thing happening, and part of that neural thing is normally caused by yellow things in the visual field.
The most obvious point where it has subjective import is when things change[1]. I probably experience colours as you do; when I introspect on colour, or time, I cannot find good cause to distinguish it from “visualising” an infinite set or a function. The only apparent difference is that reality isn’t under conscious control. I don’t assume that the naive ontology presented to me is a true ontology.

[1] There are a pair of coloured mugs (blue and purple) that I can’t distinguish in my peripheral vision, for example. When I see one in my peripheral vision, it is coloured (blue, say); when I look at it directly, there is a period in which it is both blue and purple, as best I can describe, before it definitively becomes purple. Head MRIs do this too.

Edit: The problem is that there isn’t an easy way to introspect on the processes leading to perceptions; they are presented ex nihilo. As best I can tell, there’s no good way to distinguish my senses from “experiencing what it’s like to have a perception tagged as yellow”.