Given the tenor of your further comments, I misunderstood you. You are claiming that, given materialism, qualia
probably vary with slight variations in brain structure. Although the conclusion really follows from something like a supervenience principle, not just from the materiality of all things. And although qualia only probably vary, there could still be a “same” red. And although we don’t have a theory of how qualia depend on brain states—which is, in fact, the Hard Problem. And the Hard Problem remains unaddressed by an assumption
of materialism, so materialism does not clear up “all hard problems”.
And although we don’t have a theory of *how* qualia depend on brain states—which is, in fact, the Hard Problem
In my response, I was trying to say that “qualia” are brain states. I put the word “qualia” in quotes because, as far as I understand, this word implies something like, “a property or entity that all beings who see this particular shade of red share”, but I explicitly deny that such a thing exists.
Everyone’s brains are different, and not everyone experiences the same red, or does so in the same way. The fact that our experiences of “red” are similar enough that we can discuss them is an artifact of our shared biology, as well as the fact that we were all brought up in the same environment.
Anyway, if “qualia” are brain states, then the question “how do qualia depend on brain states” is trivially answered.
In my response, I was trying to say that “qualia” are brain states
My use of “depend” was not meant to exclude identity. I had in mind the supervenience principle, which is trivially fulfilled by identity.
“a property or entity that all beings who see this particular shade of red share”
I am not sure where you got that from. C. I. Lewis defined qualia as a “sort of universal”, but I don’t think there was
any implication that everyone sees 600nm radiation identically. OTOH, one’s personal qualia must recur to
a good degree of accuracy or one would be able to make no sense of one’s sensory input.
Anyway, if “qualia” are brain states, then the question “how do qualia depend on brain states” is trivially answered.
Interestingly, that is completely false. Knowing that a bat-on-LSD’s qualia are identical to its brain states
tells me nothing about what they are (which is to say, what they seem like to the bat in question… which
is to say what they are, since qualia are by definition seemings. [If you think there are two or three meanings
of “are” going on there, you might be right.])
OTOH, one’s personal qualia must recur to a good degree of accuracy or one would be able to make no sense of one’s sensory input.
Agreed. I was just making sure that we aren’t talking about some sort of Platonic-realm qualia, or mysterious quantum-entanglement qualia, etc. That’s why I personally dislike the word “qualia”; it’s too overloaded.
Knowing that a bat-on-LSD’s qualia are identical to its brain states tells me nothing about what they are (which is to say what they seem like to the bat in question…
If I am correct, then you personally could never know exactly what another being experiences when it looks at the same red object that you’re looking at. You may only emulate this knowledge approximately, by looking at how its brain states correlate with yours. Since another human’s brain states are pretty similar to yours, your emulation will be fairly accurate. A bat’s brain is quite different from yours, and thus your emulation will not be nearly as accurate.
However, this is not the same thing as saying, “bats don’t experience the color red (*)”. They just experience it differently from humans. I don’t see this as a problem that needs solving, though I could be missing something.
(*) Assuming that bats have color receptors in their eyes; I forgot whether they do or not.
Agreed. I was just making sure that we aren’t talking about some sort of Platonic-realm qualia,
I don’t think anyone has raised that except you.
If I am correct, then you personally could never know exactly what another being experiences when it looks at the same red object that you’re looking at.
Although, under many circumstances, I could know approximately.
However, this is not the same thing as saying, “bats don’t experience the color red”.
Bats have a sense that humans don’t have, sonar, and if they have qualia, they presumably have some
kind of radically unfamiliar-to-humans qualia to go with it. That is an issue of a different order from not
knowing exactly what someone else’s Red is like. And, again, it is not a problem solved by positing
the identity of the bat’s brain state and its qualia. Identity theory doesn’t explain qualia in the sense
of explaining how variations in qualia relate to variations in brain state.
Although, under many circumstances, I could know approximately.
Agreed.
Bats have a sense that humans don’t have, sonar, and if they have qualia, they presumably have some kind of radically unfamiliar-to-humans qualia to go with it.
I wasn’t talking about sonar, but about good old-fashioned color perception. A bat’s brain is very different from a human’s. Thus, while you can approximate another human’s perception fairly well, your approximation of a bat’s perception would be quite inexact.
Identity theory doesn’t explain qualia in the sense of explaining how variations in qualia relate to variations in brain state.
I’m not sure I understand what you mean. If we could scan a bat’s brain, and understand more or less how it worked (which, today, we can’t do), then we could trace the changes in its states that would propagate throughout the bat when red photons hit its eyes. We could say, “aha, at this point, the bat will likely experience something vaguely similar to what we do, when red photons hit our eyes”. And we could predict the changes in the bat’s model of the world that will occur as the result. For example, if the bat is conditioned to fear the color red for some reason, we could say, “the bat will identify this area of its environment as dangerous, and will seek to avoid it”, etc.
If the above is true, then what is there left to explain?
If the above is true, then what is there left to explain?
Radically unfamiliar-to-humans qualia. You have picked an easy case, I have picked a difficult one.
If we want to know what the world sonars like to a bat on LSD, identity theory doesn’t tell us.
You have picked an easy case, I have picked a difficult one. If we want to know what the world sonars like to a bat on LSD, identity theory doesn’t tell us.
Well, in point of fact, I’ve personally never done LSD, so I don’t know what color perception is like for another human on LSD, either. I could make an educated guess, though.
In the case of bat sonar, the answer is even simpler, IMO: we lack the capacity to experience what the world sonars like to a bat, except in the vaguest terms. Again, I don’t see this as a problem. Bats have sonars, we don’t.
Note that this is very different from saying something like “we can’t know whether bats experience anything at all through their sonar”, or “even if we have scanned the bat’s brain, we can’t predict what changes it would undergo in response to a particular sonar signal”, etc. All I’m saying is, “we cannot create a sufficiently accurate mapping between our brain states and the bat’s, as far as sonaring is concerned”.
Again, I’m not entirely sure I understand what additional things we need to explain w.r.t qualia.
Well, in point of fact, I’ve personally never done LSD, so I don’t know what color perception is like for another human on LSD, either. I could make an educated guess, though.
Normally I’d assume that I know what you meant and move on, but since this involves LSD… You don’t know what it’s like? Or you do, but it’s an educated guess? What?
I’ve never done LSD myself, but I’ve talked to people who did, and I’ve read similar accounts in books, online, etc. Thus, I can make a guess as to what LSD would feel like, assuming my brain is close to the average.
In the case of bat sonar, the answer is even simpler, IMO: we lack the capacity to experience what the world sonars like to a bat, except in the vaguest terms. Again, I don’t see this as a problem.
I see that as a problem for the claim that mind-brain identity theory explains qualia. It does not enable us
to understand the bat’s qualia, or to predict what they would be like. However, other explanations do lead
to understanding and predicting.
Again, I’m not entirely sure I understand what additional things we need to explain w.r.t qualia.
I guess I’m not entirely sure what you mean by “understanding” and “predicting”. As I said, if we could scan the bat’s brain and figure out how all of its subsystems influence each other, we would know with a very high degree of certainty what happens to it when the bat receives a sonar signal. We could identify the changes in the bat’s model of the world that would result from the sonar signal, and we could predict them ahead of time.
Thus, for example, we could say, “if the bat is in mid-flight, and hungry, and detects its sonar reflecting from a small object A of size B and shape C etc., then it would alter its model of the world to include a probable moth at the object’s approximate location (*). It would then alter course to intercept the moth, by sending out signals to its wing muscles as follows: blah blah”.
Are predictions of this sort insufficient? If so, what additional predictions could be made by those other explanations you mentioned?
(*) Disclaimer: I don’t really know much about the hunting habits of real-life bats.
We can’t figure out the former from the latter. If we want to know what such-and-such an experience is like, a description of a brain state won’t tell us. They might still be identical in some way we can’t understand… but
then we can’t understand it. So it remains the case that m/b identity theory doesn’t constitute an explanation.
The map is not the territory. Just because descriptions of our brain states won’t help us figure out what subjective experiences are like (either currently or in the foreseeable future), doesn’t mean that those experiences aren’t a part of the physical world somehow. Reductionism has been a very successful paradigm in our description of the physical world, but we can’t state with any confidence that it has captured what the ontologically basic, “ground” level of physics is really like.
The map is not the territory. Just because descriptions of our brain states won’t help us figure out what subjective experiences are like (either currently or in the foreseeable future), doesn’t mean that those experiences aren’t a part of the physical world somehow
OK. I am not arguing for dualism. I am arguing against the claim that adopting reductionism, or materialism, or m/b identity constitutes a resolution of any Hard Problem. What you are saying is that m/b identity might be true as unintelligible brute fact. What I am saying is that brute facts aren’t explanations.
Is your paraphrase actually a fair translation of my comment? Are “mappings” things that tell people what such-and-such an experience is like, as if they had had it themselves? What, concretely, is a mapping?
Our goal is to estimate what someone else will experience, “from the inside”, in response to some stimulus—given that we know what we’d experience in response to that stimulus. One way to do it is to observe our own brains in action, and compare them to the other brain under similar conditions. This way, we can directly relate specific functions of our brain to the target brain. To use a rather crude and totally inadequate example, we could say,
“Every time I feel afraid, area X of my brain lights up. And every time this bat acts in a way that’s consistent with being afraid, area Y of its brain lights up. Given this, plus what we know about biology/evolution/etc., I can say that Y performs the same function as X, with 95% confidence.”
That’s a rather crude example because brains can’t be always subdivided into neat parts like that, and because we don’t know a lot about how they work, etc. etc. Still, if we could relate the functioning of one brain to another under a variety of circumstances with some degree of certainty, we’d have a “mapping”.
When you say, “I think if another human saw this piece of paper, he’d believe it was red”, you’re referencing the “mapping” that you made between your brain and the other human’s. Sure, you probably created this mapping based on instinct or intuition, rather than based on some sort of scientific analysis, but it still works; in fact, it works so well you don’t even need to think about it.
In the case of bat sonar, we’d have to analytically match up as many of our mental functions to the bat’s, and then infer where the sonar would fit in—since we humans don’t have one of those. Thus, while we could make an educated guess, our degree of confidence in it would be low.
Agreed; but then, what is your goal? If you are trying to answer the question, “how would it feel to have sonar”, one possible answer is, “you can’t experience it directly, but you’d be able to sort of see intermittently in the dark, except with your ears instead of eyes; here’s a detailed probabilistic model”. Is that not enough? If not, what else are you looking for, and why do you believe that it’s achievable at all?
Some humans do seem to have managed to experience echolocation, and you could presumably ask them about it—not that that’s terribly relevant to the broader question of experience.
Discussing whether “reductionism is true”, or what a “reductionistic explanation” is, feels to me like discussing whether “French cuisine is true”; it’s not apparent what particular query or method of explanation you are talking about. I think it’s best to taboo “reductionism” in discussions such as this one.
I’m still not seeing what it is that you’re trying to explain. I think you are confusing the two statements: a). “bats experience sonar”, and b). “we can experience sonar vicariously through bats, somehow”.
I’m not claiming to be able to explain anything. Some people have claimed that accepting materialism, or reductionism, or something, solves the hard problem. I am pointing out that it doesn’t. The HP is the problem
of explaining how experiential states relate in a detailed way to brain states, and materialists are no clearer about that than anyone else.
I suppose I’m as confused as the average materialist, because I don’t see what the “hard problem” even is. As far as I understand, materialism explains it away.
To put it another way, I don’t think the fact that we can’t directly experience what it’s like to be a bat is a philosophical problem that needs solving. I agree that “how experiential states relate in a detailed way to brain states” is a question worth asking, but so are many other questions, such as “how does genetic code relate in a detailed way to expressed phenotypes”. People are working on it, though—just check out Nornagest’s link on this thread.
To put it another way, I don’t think the fact that we can’t directly experience what it’s like to be a bat is a philosophical problem that needs solving.
Philosophers don’t suppose that either.
“The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colors and tastes.”—WP
People are working on it, though
Maybe, but you have clearly expressed why it is difficult: you can’t predict novel qualia, or check your predictions. If you can’t state qualia verbally (mathematically, etc.), then it is hard to see how you could have
an explanation of qualia.
How novel are we talking? If I have a functional model of the brain (which we currently do not, just as we don’t have a model of the entire proteome), I can predict how people and other beings will feel in response to stimuli similar to the ones they’d been exposed to in the past. I can check these predictions by asking them how they feel on one hand, and scanning their brains on the other.
I can also issue such predictions for new stimuli, of course; in fact, artists and advertisers implicitly do this every day. As for things like, “what would it feel like to have sonar”, I could issue predictions as well, though they’d be less certain.
If you can’t state qualia verbally (mathematically, etc.)...
I thought we were stating them verbally already, f.ex. “this font is red”. As for “mathematically”, there are all kinds of MRI studies, psychological studies, etc. out there, that are making a good attempt at it.
Thus, I’m still not sure what remains to be explained in principle. I get the feeling that maybe you’re looking for some sort of “theory of qualia” that is independent of brains, or possibly one that’s only dependent on sensory mechanisms and nothing else. I don’t think it makes sense to request such a theory, however; it’d be like asking for a “theory of falling” that excludes gravity.
They wouldn’t be novel. I don’t mean further instances of the same kind.
What do you mean, then? I’m still rather confused. Sure, it’s interesting to imagine what it’d feel like to have bat sonar (although some people apparently don’t have to imagine), but, well, we don’t have a sonar at the moment. Once we do, we can start talking about its qualia, and see if our predictions were right.
why there is phenomenal experience at all
That’s kind of a broad question. Why do we have eyes at all? The answer takes a few billion years...
why we see colours and smell smells—how and why qualia match up to sensory modalities.
Again, to me this sounds like, “why do our brain states change in response to stimuli received by our sensory organs (which are plugged into the brains); how and why do brain states match up to brain states”. Perhaps you mean something special by “sensory modalities”?
I mean something like the standard meaning of “novel prediction”. Like black holes are a novel prediction of GR.
Sure, “why is there experience at all” is a broad question. Particularly since you wouldn’t expect to find irreducible subjectivity in a physical universe. And it’s another question that isn’t addressed by Accepting Materialism.
how and why do brain states match up to brain states”
Yes, but you can’t make that work in practice. You can’t describe a quale by describing the related brain state.
For us, given our ignorance, brain states and qualia are informationally and semantically independent, even
if they are ontologically the same thing. Which is another way of saying that identity theory doesn’t explain much.
Perhaps you mean something special by “sensory modalities”?
Particularly since you wouldn’t expect to find irreducible subjectivity in a physical universe.
People keep asserting that and it’s not obvious. Why would you not expect a being in a “physical” (Q1. what does this mean?) universe, to have “subjective experience” (Q2. what does that mean?)? (Q3 is the question itself)
If “physical” is cashed out as “understandable by the methods of the physical sciences”, then it follows that
“everything is physical” means “everything is understandable from an external, objective perspective”. If that is the
case, the only kind of subjectivity that could exist is a kind that can be reduced to physics, a kind which is ultimately objective, in the way that the “mental”, for physicalists, is a subset of the physical.
That “irreducible” part is bothering me. What does it mean? I can see that it could take us out of what “materialism” would predict, but I can’t see it doing that without also taking us out of the set of phenomena we actually observe. (the meanings of irreducible that materialism prohibits are also not actually observed, AFAICT).
Anyways, getting downvoted, going to tap out now, I’ve made my case with the program and whatnot, no one wants to read the rest of this. Apologies for the bandwidth and time.
Irreducible as in reducible as in reductionism. How can you spend any time on LW and not know what reductionism is? Reducibility is not observed except in the form of explanations published in journals and given in classrooms. Irreducibility is likewise not observed.
I mean something like the standard meaning of “novel prediction”. Like black holes are a novel prediction of GR.
I don’t know enough neurobiology to offer up any novel predictions off the top of my head; here are some random links off of Google that look somewhat interesting (disclaimer: I haven’t read them yet). In general, though, the reduction of qualia directly to brain states has already yielded some useful applications in the fields of color theory (apparently, color perception is affected by culture, f.ex. Russians can discern more colors than Americans), audio compression (f.ex. ye olde MP3), and synthetic senses (people embedding magnets under their skin to sense magnetic fields).
And it’s another question that isn’t addressed by Accepting Materialism.
Why not? I do not believe that subjectivity is “irreducible”.
For us, given our ignorance, brain states and qualia are informationally and semantically independent, even if they are ontologically the same thing.
I’m not sure what this means. I mean, yes, given our ignorance, the Moon is a small, dim light source high up in the sky; but today we know better.
I mean sight is one modality, hearing another.
How is this different from saying, “sight and sound are captured by different organs and processed by different sub-structures in the brain, thus leading to distinct experiences”?
Believing in materialism does not reduce subjectivity, and neither does believing in the reducibility of subjectivity.
I have no idea what this means. Believing or disbelieving in things generally doesn’t poof them in or out of existence, but seeing as neither of us here are omniscient, I’m not sure why you’d bring it up.
Do you believe that subjective experiences are “irreducible”? If so, you are making a very strong existential claim, and you need to provide more evidence than you’ve done so far.
That kind of depends on what the question is, and you still haven’t told me. If the question is, “who makes the most delicious cupcakes”, then Materialism is probably not the answer. If the question is, “how do you account for the irreducibility of subjective experience”, then Materialism is not the answer either, since you have not convinced me that subjective experience is irreducible, and thus the answer is “mu”.
I haven’t told you because they haven’t told me. Which is not surprising, since thinking about what the questions are tends to reveal that materialism doesn’t answer most of them.
Ok, so there are some questions that materialism doesn’t answer, but you don’t know what those questions are, or why it doesn’t answer them? Why are we still talking about this, then?
I know what the questions materialism doesn’t answer are. I’ve mentioned them repeatedly. I don’t know what the questions materialism does answer are, because the true Believers won’t say.
Anyway, if “qualia” are brain states, then the question “how do qualia depend on brain states” is trivially answered.
It still makes sense to ask what these “brain states” actually are, physically. Since we seem to have direct experiential access to them as part of our subjective phenomenology, this suggests on Occamian grounds that they should not be as physically or ontologically complex as neurophysical brain states. The alternative would be for biological brains to be mysteriously endowed with ontologically basic properties (as if they had tiny XML tags attached to them) which makes no sense at all.
It still makes sense to ask what these “brain states” actually are, physically
I would agree that it makes sense to ask what sorts of brain states are associated with what sorts of subjective experiences, and how changes in brain states cause and are caused by those experiences, and what sorts of physical structures are capable of entering into those states and what the mechanism is whereby they do so. Indeed, a lot of genuinely exciting work is being done in these areas by neurologists, neurobiologists, and similar specialists as we speak.
Indeed, a lot of genuinely exciting work is being done in these areas by neurologists, neurobiologists, and similar specialists as we speak.
I agree, and I would add that a lot of interesting work has also been done by transcendental phenomenologists—the folks who study the subjective experience phenomenon from its, well, “subjective” side. The open question is whether these two strands of work will be able to meet in the middle and come up with a mutually consistent account.
Except that there is, since there are plenty of subjects which have been studied from both sides. The
natures of space, time and causality, for a start.
The natures of space, time and causality for a start.
Having studied these subjects from the physics side, I find that there is little useful input into the matter from the philosophy types, except for some vague motivations.
Something concrete, please. What is this nature? What is the philosophical position and what is the physical position? Where is that middle?
The standard example is Einstein’s invocation of Mach’s principle, which is actually a bad example. GR shows that, contrary to Mach, acceleration is absolute, not relative. One can potentially argue that the frame dragging effect is sort of in this vein, but this effect is weak and was discovered after GR was already constructed, and not by Einstein.
If I can jump in… It’s useful to distinguish between phenomenology in general, as the study of consciousness from “within” consciousness; various schools of phenomenological thought, distinguished by their methods and conclusions; and then all those attempts to explain the relationship between consciousness and the material world. These days the word “phenomenology” is used quite frequently in the latter context, and often just to designate what it is that one is trying to “correlate” with the neurons.
It’s part of the general pattern of usage whereby an “-ology” comes to designate its subject matter, so that “biology” means life and not the study of life—“we share the same biology” doesn’t mean our biology classes are in agreement—“psychology” means mind and not the study of mind, and “sociology” means social processes and not the study of them. That’s an odd little trend and I don’t know what to make of it, but in any case, “phenomenology” is often used as a synonym for the phenomena of consciousness, rather than to refer to the study of those phenomena or to a genuine theory of subjectivity.
Thus people talk about “naturalizing phenomenology”, but they don’t mean taking a specific theory of subjective consciousness and embedding it within natural science, they just mean embedding consciousness within natural science. Consciousness is treated in a very imprecise way, compared to e.g. neuroscience. Such precision as exists is usually in the domain of philosophical definition of concepts. But you don’t see people talking about methods for precise introspection or for precise description of a state of consciousness, or methods for precise arbitration of epistemological disputes about consciousness.
Phenomenology as a discipline includes such methodological issues. But this is a discipline which exists more as an unknown ideal and as an object of historical study. Today we have some analytic precision in the definition of phenomenological concepts, and total imprecision in all other aspects, and even a lack of awareness that precision might be possible or desirable in those other aspects.
Historically, phenomenology is identified with a particular movement within philosophy, one which attached especial significance to consciousness as a starting point of knowledge and as an object of study. It could be argued that this is another sign of intellectual underdevelopment, in the discipline of philosophy as a whole—that phenomenology is regarded as a school of thought, rather than as a specific branch of philosophy like epistemology or ethics. It’s as if people spoke about “the biological school of scientific thought”, to refer to an obscure movement of scientists who stood out because they thought “life” should be studied scientifically.
So to recap, there is a movement to “naturalize phenomenology” but really it means the movement to “naturalize consciousness”, i.e. place consciousness within natural science. And anyone trying to do that implicitly has a personal theory of consciousness—they must have some concept of what it is. But not many of those people are self-consciously adherents to any of the theories of consciousness which historically are known as phenomenological. And of those who are, I think there would be considerably more enthusiasm for “existential phenomenology” than for “transcendental phenomenology”.
This distinction goes back to the divide between Husserl and his student Heidegger. Husserl was a rationalist in an older, subjective sense and by temperament—he was interested in analytical thought and in the analytical study of analytical thought; the phenomenology of propositional thinking, for example. Heidegger was his best student, but he became obsessed with the phenomenology of “Being”, which became a gateway for the study of angst, dread, the meaning of life, and a lot of other things that were a lot more popular and exciting than the intentional structure of the perception of an apple. The later Heidegger even thought that the best phenomenology is found in the poetic use of language, which makes some sense—such language evokes, it gets people to employ complex integrated systems of concepts which aren’t so easy to specify in detail.
Meanwhile, Husserl’s more rationalistic tendencies led towards transcendental phenomenology, which even among philosophers was widely regarded as misguided, the pursuit of a phantasmal “transcendental ego” that was (according to the criticism) an artefact produced by language or by religious metaphysics. Husserl literally fled Nazi Germany in order to continue his work (while Heidegger tried to accommodate himself to the sturm und drang of the regime) and died with only a few loyalists developing the last phase of his ideas. After the war, Heidegger was excoriated for his politics, but existential phenomenology remained culturally victorious.
If we come closer to the present and the age of cognitive science, there are now many people who are appreciative of Husserl’s earlier analyses, but transcendental phenomenology is still mostly regarded as misguided and metaphysical. Existential phenomenology is also a somewhat exotic affiliation among scientists, but it does get some recognition among people who are into the importance of “embodiment” in cognitive science and consciousness studies. Husserl’s phenomenology is so verbal and verbalizing, whereas existential phenomenology, in its attention to “raw existence”, can lead (among other destinations) to a 1960s-style rediscovery of the senses, e.g. in Merleau-Ponty’s phenomenology, and from there to the embodied cognition of Rodney Brooks et al.
So in the contemporary world, transcendental phenomenology is very obscure and mostly it’s a subject of historical research. You could make the analogy between Husserl and Einstein, with transcendental phenomenology as Husserl’s unified field theory. Einstein was regarded as a founder of modern physics but his later interests were regarded as misguided, and it’s much the same with Husserl. But fifty years after Einstein’s death, unified theories are a standard interest, it’s just that they’re quantum rather than classical. Similarly, it’s likely that the spirit of transcendental phenomenology will be revived eventually.
Since we seem to have direct experiential access to them as part of our subjective phenomenology, this suggests on Occamian grounds that they should not be as physically or ontologically complex as neurophysical brain states.
How so ? I don’t follow your reasoning, and I’m not sure what you mean by “neurophysical brain states”—are there any other kinds ? Ultimately, every human brain is made of neurons...
Given the tenor of your further comments, I miunderstood you. You are claiming that given materialism, qualia probably vary with slight variations in brain structure. Although the conclusion really follows from something like a supervenience principle, not just from the materiality of all things. And although qualia only probabl& vary. There could still be a “same” red. An althouh we don’t have a theory of how qualia depen on brain states—which is, in fact, the* Hard Problem. And the Hard Problem remains unaddressed by an assumption of materialism, so materialism does not clear up “all hard problems”.
In my response, I was trying to say that “qualia” are brain states. I put the word “qualia” in quotes because, as far as I understand, this word implies something like, “a property or entity that all beings who see this particular shade of red share”, but I explicitly deny that such a thing exists.
Everyone’s brains are different, and not everyone experiences the same red, or does so in the same way. The fact that our experiences of “red” are similar enough to the point where we can discuss them is an artifact of our shared biology, as well as the fact that we were all brought up in the same environment.
Anyway, if “qualia” are brain states, then the question “how do qualia depend on brain states” is trivially answered.
My use of “depend” was not meant to exlude identity. I had in mind the supervenience principle, which is trivially fulfilled by identity.
I am not sure where you got that from. C I Lewis defined qualia as a “sort of universal”, but I don’t think there was any implication that everyone sees 600nm radiation identicallty. OTOH, ones personal qualia must recur to a good degree of accuracy or one would be able to make no sense of ones sensory input.
Interestingly, that is completely false. Knowing that a bat-on-LSD’s qualia are identical to its brain states tells me nohting about what they are (which is to say what they seem like to the bat in question..which is to say what they are, since qualia are by definition seemings.[If you think there are two or three meanings of “are” going on there, you might be right]).
Agreed. I was just making sure that we aren’t talking about some sort of Platonic-realm qualia, or mysterious quantum-entanglement qualia, etc. That’s why I personally dislike the word “qualia”; it’s too overloaded.
If I am correct, then you personally could never know exactly what another being experiences when it looks at the same red object that you’re looking at. You may only emulate this knowledge approximately, by looking at how its brain states correlate with yours. Since another human’s brain states are pretty similar to yours, your emulation will be fairly accurate. A bat’s brain is quite different from yours, and thus your emulation will not be nearly as accurate.
However, this is not the same thing as saying, “bats don’t experience the color red (*)”. They just experience it differently from humans. I don’t see this as a problem that needs solving, though I could be missing something.
(*) Assuming that bats have color receptors in their eyes; I forgot whether they do or not.
I don’t think anyone has raised that except you.
Alhough, under may circumstances, I could know approximately.
Bats have a sense that humans don’t have, sonar, and if they have qualia, they presumably have some kind of radically unfamiliar-to-humans qualia to go with it. That is an issue of a different order to not knowing exactly what someone else’s Red is like. And, again, it is not a problem solved by positing the identity of the the bat’s brain state and its qualia. Identity theory does’t explain qualia in the sense of explaining how variations in qualia relate to varations in brain state.
Agreed.
I wasn’t talking about sonar, but about good old-fashioned color perception. A bat’s brain is very different from a human’s. Thus, while you can approximate another human’s perception fairly well, your approximation of a bat’s perception would be quite inexact.
I’m not sure I understand what you mean. If we could scan a bat’s brain, and understand more or less how it worked (which, today, we can’t do), then we could trace the changes in its states that would propagate throughout the bat when red photons hit its eyes. We could say, “aha, at this point, the bat will likely experience something vaguely similar to what we do, when red photons hit our eyes”. And we could predict the changes in the bat’s model of the world that will occur as the result. For example, if the bat is conditioned to fear the color red for some reason, we could say, “the bat will identify this area of its environment as dangerous, and will seek to avoid it”, etc.
If the above is true, then what is there left to explain ?
Radically unfamiliar-to-humans qualia. You have picked an easy case, I have picked a difficult one. If we wan’t to know what the world sonars like to a bat on LSD, identity theory doens’t tell us.
Well, in point of fact, I’ve personally never done LSD, so I don’t know what color perception is like for another human on LSD, either. I could make an educated guess, though.
In case of the bat sonar, the answer is even simpler, IMO: we lack the capacity to experience what the world sonars like to a bat, except in the vaguest terms. Again, I don’t see this is a problem. Bats have sonars, we don’t.
Note that this is very different from saying something like “we can’t know whether bats experience anything at all through their sonar”, or “even if we have scanned the bat’s brain, we can’t predict what changes it would undergo in response to a particular sonar signal”, etc. All I’m saying is, “we cannot create a sufficiently accurate mapping between our brain states and the bat’s, as far as sonaring is concerned”.
Again, I’m not entirely sure I understand what additional things we need to explain w.r.t qualia.
Normally I’d assume that I know what you meant and move on, but since this involves LSD… You don’t know what it’s like? Or you do, but it’s an educated guess? What?
I’ve never done LSD myself, but I’ve talked to people who did, and I’ve read similar accounts in books, online, etc. Thus, I can make a guess as to what LSD would feel like, assuming my brain is close to the average.
I see that as a problem for the claim that mind-brain identity theory explains qualia. It does not enable us to undestand the bat’s qualia, or to predict what they would be like. However, other explanations do lead to understanding and predicting.
Understanding and predicting.
I guess I’m not entirely sure what you mean by “understanding” and “predicting”. As I said, if we could scan the bat’s brain and figure out how all of its subsystems influence each other, we would know with a very high degree of certainty what happens to it when the bat receives a sonar signal. We could identify the changes in the bat’s model of the world that would result from the sonar signal, and we could predict them ahead of time.
Thus, for example, we could say, “if the bat is in mid-flight, and hungry, and detects its sonar reflecting from a small object A of size B and shape C etc., then it would alter its model of the world to include a probable moth at the object’s approximate location (*). It would then alter course to intercept the moth, by sending out signals to its wing muscles as follows: blah blah”.
Are predictions of this sort insufficient ? If so, what additional predictions could be made by those other explanations you mentioned ?
(*) Disclaimer: I don’t really know much about the hunting habits of real-life bats.
More irrelevant. None of them are actualy about qualia, about how things seem to experiencing subjects. You have Substituted an Easier Problem.
Is “how things seem to experiencing subjects” somehow different from “things happening to the brains of experiencing subjects” ? If so, how ?
We can’t figure out the former from the latter. If we want to know what such-and-such and experience is like, a description of a brain state won’t tell us. They might still be identical in some way we can;t understand… but then we can’t undestand it. So it remains the case that m/b identity theory doesn’t constitute an explanation.
The map is not the territory. Just because descriptions of our brain states won’t help us figure out what subjective experiences are like (either currently or in the foreseeable future), doesn’t mean that those experiences aren’t a part of the physical world somehow. Reductionism has been a very successful paradigm in our description of the physical world, but we can’t state with any confidence that it has captured what the ontologically basic, “ground” level of physics is really like.
OK. I am not arguing for duaism. I am arguing against the claim tha adopting reductionism, or materialism, or m/b identity constitutes a resolution of any of any Hard Problem. What you are saying is that m/b identity might be true as unintelligible brute fact. What I am saying is that brute facts aren’t explanations.
I read this sentence as,
“If we want to build an approximate mapping between someone else’s brain states and ours, a description of a brain state won’t help us”.
That sounds contradictory to me.
is you parpahrase actually a fair translation of my comment? Are “mappings” things that tell people what such-and-such an experience is like, as if they had had it themselves? What, concretely, is a mapping?
Our goal is to estimate what someone else will experience, “from the inside”, in response to some stimulus—given that we know what we’d experience in response to that stimulus. One way to do it is observe our own brains in action, and compare them to the other brain under similar conditions. This way, we can directly relate specific functions of our brain to the target brain. To use a rather crude and totally inadequate example, we could say,
“Every time I feel afraid, area X of my brain lights up. And every time this bat acts in a way that’s consistent with being afraid, area Y of its brain lights up. Given this, plus what we know about biology/evolution/etc., I can say that Y performs the same function as X, with 95% confidence.”
That’s a rather crude example because brains can’t be always subdivided into neat parts like that, and because we don’t know a lot about how they work, etc. etc. Still, if we could relate the functioning of one brain to another under a variety of circumstances with some degree of certainty, we’d have a “mapping”.
When you say, “I think if another human saw this piece of paper, he’d believe it was red”, you’re referencing the “mapping” that you made between your brain and the other human’s. Sure, you probably created this mapping based on instinct or intuition, rather than based on some sort of scientific analysis, but it still works; in fact, it works so well you don’t even need to think about it.
In the case of bat sonar, we’d have to analytically match up as many of our mental functions to the bat’s, and then infer where the sonar would fit in—since we humans don’t have one of those. Thus, while we could make an educated guess, our degree of confidence in it would be low.
OK. The cases where confidence is low are the cases where a dexcription of a brain state won’t help.
Agreed; but then, what is your goal ? If you are trying to answer the question, “how would it feel to have sonar”, one possible answer is, “you can’t experience it directly, but you’d be able to sort of see intermittently in the dark, except with your ears instead of eyes; here’s a detailed probabilistic model”. Is that not enough ? If not, what else are you looking for, and why do you believe that it’s achievable at all ?
Some humans do seem to have managed to experience echolocation, and you could presumably ask them about it—not that that’s terribly relevant to the broader question of experience.
If reductionism is true, I would expect a reductive explanation, and I’m not getting one.
Discussing whether “reductionism is true” or what is a “reductionistic explanation” feels to me like discussing whether “French cuisine is true”, it’s not apparent what particular query or method of explanation you are talking about. I think it’s best to taboo “reductionism” in discussions such as this one.
Don’t tell me, tell EY..while I’m at a safe distance, please.
I’m still not seeing what it is that you’re trying to explain. I think you are confusing the two statements: a). “bats experience sonar”, and b). “we can experience sonar vicariously through bats, somehow”.
I’m not claiming to be able to explain anything. Some people have claimed that accepting materialism, or reductioinism, or something, solves the hard problem. I am pointing out that it doens’t. The HP is the problem of explaining how experiential states relate in a detailed way to brain states, and materialists are no clearer about that than anyone else.
I suppose I’m as confused as the average materialist, because I don’t see what the “hard problem” even is. As far as I understand, materialism explains it away.
To put it another way, I don’t think the fact that we can’t directly experience what it’s like to be a bat is a philosophical problem that needs solving. I agree that “how experiential states relate in a detailed way to brain states” is a question worth asking, but so are many other questions, such as “how does genetic code relate in a detail way to expressed phenotypes”. People are working on it, though—just check out Nornagest’s link on this thread.
Philosophers don’t suppose that either.
“The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colors and tastes.”—WP
Maybe but you have clearly expressed why it is difficult: you can’t predict novel qualia, or check your predictions. If you can’t state quala verbally (mathematically, etc), then it is hard to see how you could have an explanation of qualia.
How novel are we talking ? If I have a functional model of the brain (which we currently do not, just as we don’t have a model of the entire proteome), I can predict how people and other beings will feel in response to stimuli similar to the ones they’d been exposed to in the past. I can check these predictions by asking them how they feel on one hand, and scanning their brains on the other.
I can also issue such predictions for new stimuli, of course; in fact, artists and advertisers implicitly do this every day. As for things like, “what would it feel like to have sonar”, I could issue predictions as well, though they’d be less certain.
I thought we were stating them verbally already, f.ex. “this font is red”. As for “mathematically”, there are all kinds of MRI studies, psychological studies, etc. out there, that are making a good attempt at it.
Thus, I’m still not sure what remains to be explained in principle. I get the feeling that maybe you’re looking for some sort of “theory of qualia” that is independent of brains, or possibly one that’s only dependent on sensory mechanisms and nothing else. I don’t think it makes sense to request such a theory, however; it’d be like asking for a “theory of falling” that excludes gravity.
They wouldn’t be novel. I don’t mean further instances of the same kind.
Do they? Surely they make arrangements of existing qualia types.
That’s no good for novel qualia.
why there is phenomenal experience at all
why we see colours and smell smells—how and why quaia match up to sensory modalities.
anything to do with quala we don’t have
Nope.
What do you mean, then ? I’m still rather confused. Sure, it’s interesting to imagine what it’d feel like to have bat sonar (although some people apparently don’t have to imagine), but, well, we don’t have a sonar at the moment. Once we do, we can start talking about its qualia, and see if our predictions were right.
That’s kind of a broad question. Why do we have eyes at all ? The answer takes a few billion years...
Again, to me this sounds like, “why do our brain states change in response to stimuli received by our sensory organs (which are plugged into the brains); how and why do brain states match up to brain states”. Perhaps you mean something special by “sensory modalities” ?
See above.
I mean something like the standard meaning of ” novel prediction”. Like black holes are a novel prediction of GR
Sure “why is there experience at all” a broad question. Particularly since you wouldn’t expect to find irreducible subjectivity in a physical universe. And its another question that isn’t adressed by Accpeting Materialism.
Yes, but you can’t make that work in practice. You can;t describe a quale by describig the related brain state. For us, given our igonrance, brains states and qualia are informationally and semantically independent, even if they are ontologically the same thing. WHich is anothe way of saying that identity theory doens’t explain much..
I mean sight is one modality hearing another.
People keep asserting that and it’s not obvious. Why would you not expect a being in a “physical” (Q1. what does this mean?) universe, to have “subjective experience” (Q2. what does that mean?)? (Q3 is the question itself)
Please respond
If “physcical” is cashed out as “understandable by the methods of the physcal sciences”, then it follows that “everything is physical” means “everything is understandable from an extenal, objective perspective”. If that is the case, the only kind of subjectivity that could exist is a kind that can be reduced to physics, a kind whch is ultimately objective, in the way that the “mental”, for physicalists, is a subset of the physical.
Ok.
What does such a statement predict wrt subjective experience?
please respond
I have said it predicts that there is no irreducible subjective experience.
That “irreducible” part is bothering me. What does it mean? I can see that it could take us out of what “materialism” would predict, but I can’t see it doing that without also taking us out of the set of phenomena we actually observe. (the meanings of irreducible that materialism prohibits are also not actually observed, AFAICT).
Anyways, getting downvoted, going to tap out now, I’ve made my case with the program and whatnot, no one wants to read the rest of this. Apologies for the bandwidth and time.
Irreducile as in reducible as in reductionism. How can you spend any time on LW and not know what reductionism is? Reducibility is not observed except the form of explanations pubished in journals and gi vn in classrooms. Irreducibility is likewise not observed.
I don’t know enough neurobiology to offer up any novel predictions off the top of my head; here are some random links off of Google that look somewhat interesting (disclaimer: I haven’t read them yet). In general, though, the reduction of qualia directly to brain states has already yielded some useful applications in the fields of color theory (apparently, color perception is affected by culture, f.ex. Russians can discern more colors than Americans), audio compression (f.ex. ye olde MP3), and synthetic senses (people embedding magnets under their skin to sense magnetic fields).
Why not ? I do not believe that subjectivity is “irreducible”.
I’m not sure what this means. I mean, yes, given our ignorance, the Moon is a small, dim light source high up in the sky; but today we know better.
How is this different from saying, “sight and sound are captured by different organs and processed by different sub-structures in the brain, thus leading to distinct experiences” ?
Bear in mind that what is important here is the prediction of experience.
Believeing in materialism does not reduce subjectviity, and neither does believing in the reducibility of subjectivity.
Yep. Explanation first, then identitfication.
I have no idea what this means. Believing or disbelieving in things generally doesn’t poof them in or out of existence, but seeing as neither of us here are omniscient, I’m not sure why you’d bring it up.
Do you believe that subjective experiences are “irreducible” ? If so, you are making a very strong existential claim, and you need to provide more evidence than you’ve done so far.
People keep telling me that Accpeting Materialism is The Answer. You don’t beleive that, don’t. But people keep tellig me.
That kind of depends on what the question is, and you still haven’t told me. If the question is, “who makes the most delicious cupcakes”, then Materialism is probably not the answer. If the question is, “how do you account for the irreducibility of subjective experience”, then Materialism is not the answer either, since you have not convinced me that subjective experience is irreducible, and thus the answer is “mu”.
I haven’t told you because they haven’t told me. Which is not surprising, since thinking about what the questions are tends to reveal that materaiism doens’t answer most of them.
Ok, so there are some questions that materialism doesn’t answer, but you don’t know what those questions are, or why it doesn’t answer them ? Why are we still talking about this, then ?
I know what the questions materialism doesn’t answer are. I’ve mentioned them repeatedly. I don’t know what the questions materialism does answer are, ebcause the true Believers wont say.
It still makes sense to ask what these “brain states” actually are, physically. Since we seem to have direct experiential access to them as part of our subjective phenomenology, this suggests on Occamian grounds that they should not be as physically or ontologically complex as neurophysical brain states. The alternative would be for biological brains to be mysteriously endowed with ontologically basic properties (as if they had tiny XML tags attached to them) which makes no sense at all.
I would agree that it makes sense to ask what sorts of brain states are associated with what sorts of subjective experiences, and how changes in brain states cause and are caused by those experiences, and what sorts of physical structures are capable of entering into those states and what the mechanism is whereby they do so. Indeed, a lot of genuinely exciting work is being done in these areas by neurologists, neurobiologists, and similar specialists as we speak.
I agree, and I would add that a lot of interesting work has also been done by transcendental phenomenologists—the folks who study the subjective experience phenomenon from its, well, “subjective” side. The open question is whether these two strands of work will be able to meet in the middle and come up with a mutually consistent account.
“transcendental phenomenology” is not a natural science but philosophy, so there is no middle to meet in.
Except that there is, since there are plenty of subjects which have been studied from both sides. The natures of space, time and causality for a start.
Having studied these subjects from the physics side, I find that there is little useful input into the matter from the philosophy types, except for some vague motivations.
You may not like the Middle, but it is there.
Feel free to give an example.
The natures of space, time and causality for a start.
Something concrete, please. What is this nature? What is the philosophical position and what is the physical position? Where is that middle?
The standard example is Einstein’s invocation of the Mach’s principle, which is actually a bad example. GR shows that, contrary to Mach, acceleration is absolute, not relative. One can potentially argue that the frame dragging effect is sort of in this vein, but this effect is weak and was discovered after GR was already constructed, and not by Einstein.
It’s not a question of positions. The point is both philosophy and science study these questions.
You claimed that there is a middle. Point one out, concretely.
http://en.wikipedia.org/wiki/Leibniz%E2%80%93Clarke_correspondence. The point is both philosophy and science study these questions.
You say “has been done”… is that to suggest that there is no active work currently being done in transcendental phenomenology?
If I can jump in… It’s useful to distinguish between phenomenology in general, as the study of consciousness from “within” consciousness; various schools of phenomenological thought, distinguished by their methods and conclusions; and then all those attempts to explain the relationship between consciousness and the material world. These days the word “phenomenology” is used quite frequently in the latter context, and often just to designate what it is that one is trying to “correlate” with the neurons.
It’s part of the general pattern of usage whereby an “-ology” comes to designate its subject matter, so that “biology” means life and not the study of life—“we share the same biology” doesn’t mean our biology classes are in agreement—“psychology” means mind and not the study of mind, and “sociology” means social processes and not the study of them. That’s an odd little trend and I don’t know what to make of it, but in any case, “phenomenology” is often used as a synonym for the phenomena of consciousness, rather than to refer to the study of those phenomena or to a genuine theory of subjectivity.
Thus people talk about “naturalizing phenomenology”, but they don’t mean taking a specific theory of subjective consciousness and embedding it within natural science, they just mean embedding consciousness within natural science. Consciousness is treated in a very imprecise way, compared to e.g. neuroscience. Such precision as exists is usually in the domain of philosophical definition of concepts. But you don’t see people talking about methods for precise introspection or for precise description of a state of consciousness, or methods for precise arbitration of epistemological disputes about consciousness.
Phenomenology as a discipline includes such methodological issues. But this is a discipline which exists more as an unknown ideal and as an object of historical study. Today we have some analytic precision in the definition of phenomenological concepts, and total imprecision in all other aspects, and even a lack of awareness that precision might be possible or desirable in those other aspects.
Historically, phenomenology is identified with a particular movement within philosophy, one which attached especial significance to consciousness as a starting point of knowledge and as an object of study. It could be argued that this is another sign of intellectual underdevelopment, in the discipline of philosophy as a whole—that phenomenology is regarded as a school of thought, rather than as a specific branch of philosophy like epistemology or ethics. It’s as if people spoke about “the biological school of scientific thought”, to refer to an obscure movement of scientists who stood out because they thought “life” should be studied scientifically.
So to recap, there is a movement to “naturalize phenomenology” but really it means the movement to “naturalize consciousness”, i.e. place consciousness within natural science. And anyone trying to do that implicitly has a personal theory of consciousness—they must have some concept of what it is. But not many of those people are self-consciously adherents to any of the theories of consciousness which historically are known as phenomenological. And of those who are, I think there would be considerably more enthusiasm for “existential phenomenology” than for “transcendental phenomenology”.
This distinction goes back to the divide between Husserl and his student Heidegger. Husserl was a rationalist in an older, subjective sense and by temperament—he was interested in analytical thought and in the analytical study of analytical thought; the phenomenology of propositional thinking, for example. Heidegger was his best student, but he became obsessed with the phenomenology of “Being”, which became a gateway for the study of angst, dread, the meaning of life, and a lot of other things that were a lot more popular and exciting than the intentional structure of the perception of an apple. The later Heidegger even thought that the best phenomenology is found in the poetic use of language, which makes some sense—such language evokes, it gets people to employ complex integrated systems of concepts which aren’t so easy to specify in detail.
Meanwhile, Husserl’s more rationalistc tendencies led towards transcendental phenomenology, which even among philosophers was widely regarded as misguided, the pursuit of a phantasmal “transcendental ego” that was (according to the criticism) an artefact produced by language or by religious metaphysics. Husserl literally fled Nazi Germany in order to continue his work (while Heidegger tried to accommodate himself to the sturm und drang of the regime) and died with only a few loyalists developing the last phase of his ideas. After the war, Heidegger was excoriated for his politics, but existential phenomenology remained culturally victorious.
If we come closer to the present and the age of cognitive science, there are now many people who are appreciative of Husserl’s earlier analyses, but transcendental phenomenology is still mostly regarded as misguided and metaphysical. Existential phenomenology is also a somewhat exotic affiliation among scientists, but it does get some recognition among people who are into the importance of “embodiment” in cognitive science and consciousness studies. Husserl’s phenomenology is so verbal and verbalizing, whereas existential phenomenology, in its attention to “raw existence”, can lead (among other destinations) to a 1960s-style rediscovery of the senses, e.g. in Merleau-Ponty’s phenomenology, and from there to the embodied cognition of Rodney Brooks et al.
So in the contemporary world, transcendental phenomenology is very obscure and mostly it’s a subject of historical research. You could make the analogy between Husserl and Einstein, with transcendental phenomenology as Husserl’s unified field theory. Einstein was regarded as a founder of modern physics but his later interests were regarded as misguided, and it’s much the same with Husserl. But fifty years after Einstein’s death, unified theories are a standard interest, it’s just that they’re quantum rather than classical. Similarly, it’s likely that the spirit of transcendental phenomenology will be revived eventually.
How so ? I don’t follow your reasoning, and I’m not sure what you mean by “neurophysical brain states”—are there any other kinds ? Ultimately, every human brain is made of neurons...
I didn’t understand that either.
Not exclusively. There are glial cells, for example.
Good point. I should’ve said, “made of neurons or other physical substances” :-)