There are two traditional problems associated with colors. One is the sort that pseudo-philosophical douchebags take to: “Dude, what if no one really sees the same colors?” The other was very popular in the heyday of classical analytic philosophy: how can we say that Red is Not-Blue analytically if they are empirical & presumably a posteriori data?
Let’s assume for the sake of getting to the real argument that consciousness arises from physical matter in a manner uncontroversial for the materialist. Granting this, why do we all see the same colors, if we do?
The short answer is that we probably don’t. I don’t even see with the same level of clarity that someone with 20⁄20 vision does, at least not without the help of my glasses, which themselves introduce a level of optical distortion not significant to my brain’s processing but certainly significant in a [small] geometric sense.
A quicker way to get at the fact that we probably don’t see quite the same way is to point out that dogs’ eyes aren’t responsive to certain colors which most human eyes can distinguish quite easily. This leads directly to the point that there is probably enough biological variation (& physical deterioration over someone’s lifetime) that we don’t end up with quite the same picture of the world, even though it’s evidently close enough that we all get along all right.
This also leads to the strongest argument (for empirical scientists anyhow) that we do all see roughly the same thing: we’ve got pretty much the same sensory organs & brains to process what is roughly the same data. It seems reasonable to expect that most members of a given species should experience roughly the same picture of the world.
So much for the first problem, at least in brief & from a pragmatic point of view. The skeptical philosopher must admit that this is a silly problem to demand a decisive answer to.
As for the problem of distinguishing between colors analytically, of determining a priori the truth of empirical statements, a mathematical concept is quite helpful, particularly if we’re willing to grant that colors are induced by a spectrum of wavelengths which the eye can perceive. But even if we don’t grant that last fact, introducing the notion of a partition suffices to distinguish the perceived colors (or qualia) inasmuch as it also divides up the spectrum of wavelengths which induce those colors.
Note that this doesn’t help us escape the fact that we require experience to learn of the various colors & the fact that they form a partition, but that isn’t the crux of the problem to begin with. In the same way that we can learn what a round table is & deduce that it is a table analytically, once we become acquainted with the colors & their structure—that is, once we understand the abstract rules governing partitions—we can make analytic claims based only on that structure, without requiring any further empirical data, or really even the empirical components of the original data.
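To make the partition idea concrete, here is a minimal Python sketch. The band boundaries are arbitrary illustrative choices, not claims about perception; the only property doing any work is that the bands are pairwise disjoint and jointly cover the range.

```python
# A toy partition of the visible spectrum (wavelengths in nanometres).
# The exact boundaries are arbitrary illustrative assumptions; what matters
# is only that the bands are pairwise disjoint and jointly cover the range.
BANDS = {
    "violet": range(380, 450),
    "blue":   range(450, 495),
    "green":  range(495, 570),
    "yellow": range(570, 590),
    "orange": range(590, 620),
    "red":    range(620, 751),
}

def color_of(wavelength_nm: int) -> str:
    """Map a wavelength to the unique band containing it."""
    for name, band in BANDS.items():
        if wavelength_nm in band:
            return name
    raise ValueError("wavelength outside the visible range")

# Because the bands form a partition, membership in one band excludes
# membership in any other, with no further empirical data consulted.
assert all(
    set(a).isdisjoint(b)
    for i, a in enumerate(BANDS.values())
    for j, b in enumerate(BANDS.values())
    if i != j
)
assert color_of(650) == "red" and color_of(650) != "blue"
```

Run as-is, both assertions pass; the second one is the “red is not blue” claim derived purely from the partition structure.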
I can quickly and easily prove that some people see colours in a different way to the way that I do.
To my eyes, red and green are visibly and obviously distinct. I cannot look at one and consider it to be the other. Yet, red-green colour blindness is the most common version of colourblindness; these people must see either red, or green, or both in some way differently to the way that I see these colours.
I think you are confusing the word “color” that identifies a certain type of visual experience, with the word “color” that identifies a certain set of light-frequencies. This is much like confusing the word “sound” which means “auditory experience”, with the word “sound” which means “acoustic vibrations”.
You see certain frequencies in a different way than people with red-green colour blindness; in short these frequencies lead to different qualia, different visual experiences. That’s rather obvious and rather useless in discussing the deeper philosophical point.
But to say that you experience certain visual experiences differently than others experience them, may even be a contradiction in terms—unless it’s meant that the atomic qualia trigger in turn different qualia (e.g. different memories or feelings) in each person. Which is probably also trivially true...
Apologies for the confusion.
Your second paragraph encapsulates the point I intended to convey; that given frequencies of light create in my mind qualia that differ from the qualia created by the same frequency of light in the mind of a red-green colourblind person.
On the common sense view that qualia are the kolors generated by our minds, which do so based on sensory input about the colors in the world, it makes sense that color-to-kolor conversion (if you will) should be imperfect even among people with properly functioning sight.
It’s possible my writing wasn’t clear enough to convey this point (or that you were objecting to CCC, not me), but I was getting at the idea that we probably do experience slightly different kolors. It was never my intention to be philosophically “rigorous” about that, just to raise the point.
You’ll notice that the next few sentences of my post address this same idea for fully functional members of different species. But it doesn’t technically refute the claim for qualia, only that we’re not all equally responsive to the same stimuli.
It is, for example, technically possible (in the broadest sense) that color-blind people experience the same qualia we do, but they are unable to act on them, much in the same way that a friend with ADD might experience the same auditory stimuli I do, but then is too distracted to actually notice or make sense of it.
I note, however, that the physical differences in color-blindness (or different species’ eyes) are enough reason to lend little credibility to this idea.
I’m not sure what the problem of distinguishing colours analytically is supposed to relate to. The classic modern argument, Mary’s Room, attempts to demonstrate that the subjective sensation of colour is a problem for materialism, because one can conceivably know everything about the neuroscience of colour perception without knowing anything about how colours look. That could sort-of be re-expressed by saying Mary can’t analytically deduce colour sensations from the information she has. And it is sort-of true that once you have a certain amount of experiential knowledge of colour space, you could guess the nature of colours you haven’t personally seen. But that isn’t very relevant to M’s R, because she is stipulated as not having seen any colours. So, overall, I don’t see what you are getting at.
You can also know all relevant facts about physics but still not “know” how to ride a bicycle. “Knowing” what red looks like (or being able to imagine redness) requires your brain to have the ability to produce a certain neural pattern, i.e. execute a certain neural “program”. You can’t learn how to imagine red the same way you learn facts like 2+2=4 for the same reason you can’t learn how to ride a bike by learning physics. It’s a different type of “knowledge”, not sure if we should even call it that.
Edit (further explanation): To learn how to ride a bike you need to practice doing it, which implements a “neural program” that allows you to do it (via e.g. “muscle memory” and whatnot). Same for producing a redness sensation (imagining red), a.k.a “knowing what red looks like”.
Maybe. But, if true, that doesn’t mean that red is know-how. It means that something like know-how is necessary to get knowledge-by-acquaintance with Red. So it still doesn’t show that Red is know-how in itself. (What does it enable you to do?)
Talking about “red in itself” is a bit like talking about “the-number-1 in itself”. What does it mean? We can talk about the “redness sensation” that a person experiences, or “the experience of red”. From an anatomical point of view, experiencing red(ness) is a process that occurs in the brain. When you’re looking at something red (or imagining redness), certain neural pathways are constantly firing. No brain activity → no redness experience.
Let’s compare this to factual knowledge. How are facts stored in the brain? From what we understand about the brain, they’re likely encoded in neuronal/synaptic connections. You could in principle extract them by analyzing the brain. And where is the (knowledge of) red(ness) stored in the brain? Well, there is no ‘redness’ stored in the brain; what is stored (again in synaptic connections) are instructions that activate the color-pathways of the visual cortex that produce the experience of red. See how the ‘knowledge of color’ is not quite like factual knowledge, but rather looks like an ability?
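Since the comment already leans on a program metaphor (“neural program”, “execute”), here is a rough Python restatement of the fact-versus-ability distinction. The names and the toy “experience” string are purely illustrative assumptions, not anything from the thread.

```python
# Declarative knowledge: a retrievable fact, readable directly as stored data.
facts = {"2 + 2": 4}

# "Knowing what red looks like", on this view, is more like a stored
# procedure: what is kept is not the experience itself but instructions
# that (re)produce it when executed.
def imagine_red():
    # Stand-in for "activate the colour pathways of the visual cortex".
    return "redness (exists only while this is running)"

print(facts["2 + 2"])        # reading a fact out directly
print(imagine_red.__code__)  # inspecting the stored instructions...
print(imagine_red())         # ...is not the same as executing them
```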
An ability to do what?
You argue as if involving neuronal activation is sufficient evidence that something is an ability. But inabilities are as neuronal as abilities. If someone becomes incapably drunk, that is as much a matter of neuronal activity as anything else. But in common sense terms, it is a loss of ability, not the acquisition of an ability.
In any case, there are plenty of other objections to the Ability Hypothesis.
Both riding a bike and seeing red involve the brain performing I/O, i.e., interacting with the outside world, whereas learning that 2+2=4 can be done without such interaction.
One might imagine so, but I expect there are no examples of it ever happening.
There are plenty of examples of less basic a priori truths being figured out once the basics are in place.
Mary’s room is an interesting one. I think there’s a valid rebuttal to it, though it takes quite a bit of explanation, so hold onto your hats, ladies and gentlemen, and if you’re not interested then feel free to ignore. I should stress that this is an argument of my own formulation, although it is informed by my readings of a bunch of other philosophers, and that therefore it is entirely possible that people who share my conclusions might disagree with my premises or form of argument. I’m not trying very hard to convince anyone with this post, just putting the argument out there for your inspection. <-- (EDIT: left the word “not” out of this sentence the first time. Whoops!)
The hard-materialist, anti-qualian, functionalist argument is that sensation ≡ brain state. That is, “for one’s brain to be in the brain-state which is produced when red light hits one’s retina is to experience redness”. Once you’ve experienced redness a few times, it is possible to intentionally assume that “red” brain-state, so it is possible to remember what it is like to see red without actually having to be exposed to red light. We call this “knowing what red is like”.
Mary, unfortunately, has grown up in a colour-free environment, so she has never experienced the brain-state that is “seeing red”, and even if her brain had drifted through that state accidentally, she wouldn’t have known that what she was experiencing was redness. She can’t find her way to the state of redness because she has never been there before. When she starts researching in an attempt to figure out what it is like to see red, her descriptive knowledge of the state will increase—she’ll know which sets of neurons are involved, the order and frequency of their firings, etc—but of course this won’t be much help in actually attaining a red brain-state. Hearing that Paris is at 48.8742° N, 2.3470° E doesn’t help you get there unless you know where you are right now.
Mary’s next step might be to investigate the patterns that instantiate sensations with which she is familiar. She might learn about how the smell of cinnamon is instantiated in the brain, or the feeling of heat, etc, etc, and then attempt to “locate” the sensation of red by analogy to these sensations. If you know where you are relative to Brisbane, and you know where Brisbane is relative to Paris, then you can figure out where you are relative to Paris.
This workaround would be effective if she were trying to find her way to a physical place, because on Earth you only need 3 dimensions to specify any given location, and it’s the same 3 dimensions every time. Unfortunately, the brain is more complicated. There are some patterns of neural behaviour which are only active in the perception of colour, so while analogy to the other senses might allow Mary to zero in a little closer to knowing what red is like, it wouldn’t be nearly enough to solve her problem.
Luckily, Mary is a scientist, and where scientists can’t walk they generally invent a way to fly. Mary knows which neurons are activated when people see red, and she knows the manner of their activation. She can scan her head and point to the region of her brain that red light would stimulate. So why does she need red light? Synesthetes regularly report colour experiences being induced by apparently non-coloured stimuli, and epileptics often experience phantom colours before fits. Ramachandran and Hubbard even offer a report of a colour-blind synesthete who experiences what he calls “Martian colours”—colours which he has never experienced in the real world and which therefore appear alien to him (PRSL, 2001). So, Mary CAN know red, she just has to induce the brain state associated with redness in herself. Maybe she uses transcranial electrostimulation. Maybe she has to resort to wireheading (http://wiki.lesswrong.com/wiki/Wireheading). Maybe all she needs to do is watch a real-time brain scan while she meditates, so she can learn to guide herself into the right state the same way that people who already “know” red get to it. Point is, if Mary is at all dedicated, she’s going to end up understanding red.
Of course, some qualians might argue that this misses the point—if Mary induces an experience of redness then she’s still exposing herself to the quale of red, whether or not there was any red light involved, so Mary hasn’t come to her knowledge solely by physical means. I think that skirts dangerously close to begging the question, though. As I’ve mentioned above, the functionalist view of colour holds that to know what it is like to see red is just “to know how to bring about the brain-state associated with redness in oneself”. It seems unfair to say that Mary has to possess that knowledge but never use it in order for functionalists to be proved right—you might as well request that she know what an elephant looks like without ever picturing one in her mind. Regardless, the Mary’s Room thought experiment presupposes that Mary can’t experience the quale of red in her colourless environment. If qualians want to argue that inducing the brain state of red exposes Mary to the quale of red, then the thought experiment doesn’t do what it was supposed to, and therefore can’t prove what it was designed to prove.
Anyway, I’d say that was my two cents but looking at how much I’ve typed it’s probably more like fifteen dollars...
It’s just another cool problem about colors.
As far as Mary’s Room goes, you might similarly argue that you could have all of the data belonging to Pixar’s next movie, which you haven’t seen yet, without having any knowledge of what it looks like or what it’s about. Or that you can’t understand a program without compiling it & running it.
I’m not entirely sure how much credibility I lend to that. There are some very abstract things (fairly simple, yes) which I can intuit without prior experience, and there are many complicated things which I can predict due to a great deal of prior experience (eg landscapes described in novels).
But I mostly raised it as another interesting problem with a proposed [partial] solution.
I don’t see how you could fail to be able to deduce what it is about, given Mary’s superscientific powers.
Ordinary mortals can understand a programme without running it in simple cases, and Mary presumably can in any case.
You’re not a superscientist. Can I recommend reading the linked material?
It’s possible I already had & that you’re misunderstanding what my examples are about: the difference between the physical/digital/abstract structure underlying something & the actual experience it produces (eg qualia for perceptions of physical things, or pictures for geometric definitions, etc).
I maintain that the difference between code & a running program (or at least our experience of a running program) is almost exactly analogous to the difference between physical matter & our perception of it. The underlying structure is digital, not physical, and has physical means of delivery to our senses, but the major differences end there.
How about telling me whether you actually had?
I don’t see where you are going with that. If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code. But M’s R proposes that there is something you can get from seeing a colour yourself. The analogy doesn’t seem to be there. Unless you disagree with the intended conclusion of M’s R.
This seems trivially false. See also the incomputability of pure Solomonoff induction.
Likewise, I see no reason to expect that a mathematical process could look at a symbolic description of itself and recognize it with intuitive certainty. We have some reason to think the opposite. So why expect to recognize “qualia” from their descriptions?
As orthonormal points out at length, we know that humans have unconscious processing of the sort you might expect from this line of reasoning. We can explain how this would likely give rise to confusion about Mary’s Room.
The implicit assumption I inferred from the claim made it: “If you are a superscientist, there is nothing you can learn from running a programme [for some given non-infinite time] that you cannot get from examining the code [for a commensurate period of subjective time, including allowance for some computational overhead in those special cases where abstract analysis of the program provides no compression over just emulating it].”
That makes it trivially true. The trivially false seems to apply only when the ‘run the program’ alternative gets to do infinite computation but the ‘be a superscientist and examine the program’ doesn’t.
My thoughts exactly.
‘If the program you are looking at stops in less than T seconds, go into an infinite loop. Otherwise, stop.’ In order to avoid a contradiction the examiner program can’t reach a decision in less than T seconds (minus any time added by those instructions). Running a program for at most T seconds can trivially give you more info if you can’t wait any longer. I don’t know how much this matters in practice, but the “infinite” part at least seems wrong.
And again, the fact that the problem involves self-knowledge seems very relevant to this layman. (typo fixed)
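For concreteness, here is a minimal Python sketch of the “stop in less than T seconds” construction above. The names are hypothetical; `claims_halt_within` stands in for whatever finite-time analysis a code examiner could perform, and its stub body is only there so the sketch runs.

```python
import time

T = 1.0  # the time budget from the argument above

def claims_halt_within(source: str, t_seconds: float) -> bool:
    """Hypothetical stand-in for the examiner: returns True iff it
    predicts that the program given by `source` halts within t_seconds.
    Any concrete, fast implementation is what the construction defeats."""
    return True  # placeholder so this file runs

# Imagine CONTRARY_SOURCE holds the source text of `contrary` itself
# (the usual quine trick); it is left as a placeholder here.
CONTRARY_SOURCE = "<source of contrary()>"

def contrary():
    """Do the opposite of whatever the examiner predicts about this program."""
    if claims_halt_within(CONTRARY_SOURCE, T):
        while True:          # predicted to halt within T -> loop forever
            time.sleep(1)
    else:
        return               # predicted not to halt within T -> halt at once

# If the examiner answers about `contrary` in well under T seconds, its
# answer is wrong; to avoid being wrong about this one program it must
# spend at least roughly T seconds itself. So running a program for at most
# T seconds can tell you something a faster-than-T inspection cannot.
```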
I don’t see anything particularly troubling for a superscientist in the above.
More info than what? Are you assuming that inspection is equivalent to one programme cycle, or something?
More info than inspecting the code for at most T seconds. Finite examination time seems like a reasonable assumption.
I get the impression you’re reading more than I’m saying. If you want to get into the original topic we should probably forget the OP and discuss orthonormal’s mini-sequence.
More info than who or what inspecting the code? We are talking about superscientists here.
I no longer have any clue what we’re talking about. Are superscientists computable? Do they seem likely to die in less than the lifespan of our (visible) universe? If not, why do we care about them?
The point is that you can’t say a person of unknown intelligence inspecting code for T seconds will necessarily conclude less than a computer of unknown power running the code for T seconds. You are comparing two unknowns.
Why expect an inability to figure out some things about your internal state to put on a technicolor display? Blind spots don’t look like anything. Not even perceivable gaps in the visual field.
What.
(Internal state seems a little misleading. At the risk of getting away from the real discussion again, Peano arithmetic is looking at a coded representation of itself when it fails to see certain facts about its proofs. But it needs some such symbols in order to have any self-awareness at all. And there exists a limit to what any arithmetical system or Turing machine can learn by this method. Oh, and the process that fills my blind spots puts on colorful displays all the time.)
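One textbook way to make that limit precise, stated here only as a general reminder and not as anything specific to this thread, is Gödel’s second incompleteness theorem: a consistent theory as strong as Peano arithmetic cannot prove its own consistency, even though the consistency statement is expressed in the theory’s own coding of its proofs.

$$\text{If } \mathrm{PA} \text{ is consistent, then } \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}),$$

where $\mathrm{Con}(\mathrm{PA})$ abbreviates the arithmetized statement that no PA-proof of $0 = 1$ exists.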
There is no evidence that PA is self aware.
So your blind spot is filled in by other blind spots?
If you believe that a superscientist can learn nothing from running a programme that she could not get from examining the code, then you must similarly think that Mary will learn nothing about the qualia associated with colors if she already understands everything about the physics underlying them.
In case I haven’t driven the point home with enough clarity (for example, I did read the link the first time you posted it), I am claiming that there is something to experiencing the program/novel/world inasmuch as there is something to experiencing colors in the world. Whether that something is a subset of the code/words/physics or something additional is the whole point of the problem of qualia.
And no, I don’t have a clear idea what a satisfying answer might look like.
That doesn’t follow. Figuring out the behaviour of a programme is just an exercise in logical deduction. It can be done by non-superscientists in easy cases, so it is just an extension of the same idea that a superscientist can handle difficult cases. However, there is no “easy case” of deducing a perceived quality from objective information.
Beyond that, if all you are saying is that the problem of colours is part of a larger problem of qualia, which itself is part of a larger issue of experience, I can answer with a wholehearted “maybe”. That might make colour seem less exceptional and therefore less annihilation-worthy, but I otherwise don’t see where you are going.
I’m not just talking about behavior. Experiencing a program involves subjective qualities, like whether Counter-Strike is more fun than Day of Defeat, which maybe can’t be learned just from reading the code.
It’s possible the analogy is actually flawed, and one is contained in its underlying components while the other is not, but I don’t understand how they differ if they do, or why they should.
To my disappointment, David Papineau concluded the same (that members of a species with much the same sensory organs and brains should see roughly the same picture of the world), but we can’t compare differences in pictures of the world to differences in brain structure or function, because we can have only a single example of a “picture of the world.” “Pretty much the same sensory organs & brains” is useless because of its vagueness.
To the contrary, the qualia problem is exactly the sort of problem to which philosophy can provide a decisive answer. For example, that we can’t frame the qualitative differences between persons conceptually should lead philosophers to doubt the coherence of the qualia concept.
Does perhaps the notion that innate concepts might be incoherent create confusion?