I agree that eventually we should be able to find an answer that sounds as reduced as an answer to “How does blood flow work?” does. But from where we currently stand, they seem to be really, incredibly fundamentally different questions...
Ok, that makes sense. I understand now that this is what you believe, but I still don’t see why. You say:
But from what we can currently tell, there doesn’t seem to be even an in-principle plausible mechanism for adding qualia to a computer’s way of processing things. A computer receives input, does some well-defined manipulations, and offers output. Where do qualia come into play?
This, to me, sounds like a circular argument at worst, and a circular analogy (if there is such a thing) at best. You are trying to illustrate your belief that qualia are categorically different from visual perception (just f.ex.), by introducing a computer which possesses visual perception but not qualia, because, due to the qualia being so different from visual perception, there is no way to grant qualia to the computer even in principle. So, “qualia are hard because qualia are hard”, which is a tautology. Your next paragraph makes a lot more sense to me:
I guess the categorical difference is that when asking about blood flow, there’s someone who experiences the question and the data and the subsequent answer; but when asking about consciousness, it’s the very process of being able to understand the question in the first place that we’re asking about.
I think that, if you go this route, you arrive at a kind of solipsism. You know for a fact that you personally have a consciousness, but you don’t know this about anyone else, myself included. You can only infer that other beings are conscious based on their behavior. Ok, to be fair, the fact that they are biologically human and therefore possess the same kind of brain that you do can count as supporting evidence; but I don’t know if you want to go that route (Searle does, AFAIK). Anyway, let’s assume that your main criterion for judging whether anyone else besides yourself is conscious is their behavior (if that’s not the case, I can offer some arguments for why it should be), and that you reject the solipsistic proposition that you are the only conscious being around (ditto). In this case, a perfect sleepwalker or a qualia-less computer that perfectly simulates having qualia, etc., is actually less parsimonious than the alternative, and therefore the concept of qualia buys you nothing (assuming that dualism is false, as always). And then, the “hard question” becomes one of those “mysterious questions” to which you could give a “mysterious answer”, as per the Sequences.
You might find it helpful to read the Wikipedia page on the hard problem.
I’d actually read that page earlier, and it (along with associated links) seemed to imply that either dualism offers the best answer to the “hard question”, or the “hard question” is meaningless as per Dennett—which is why I took the time to slam dualism in my previous posts.
Again, I think that was Yvain.
Darn, again, I’m sorry. But nevertheless, I think it’s a good thought experiment.
This, to me, sounds like a circular argument at worst, and a circular analogy (if there is such a thing) at best.
Mmm. Yes, I think you’re right. As I’ve chewed on this, I’ve come to wonder if that’s part of where I’ve been getting the impression that there’s a hard problem in the first place. As I’ve tried to reduce the question enough to notice where reduction seems to fail or at least get a bit lost, my confusion confuses me. I don’t know if that’s progress, but at least it’s different!
I guess the categorical difference is that when asking about blood flow, there’s someone who experiences the question and the data and the subsequent answer; but when asking about consciousness, it’s the very process of being able to understand the question in the first place that we’re asking about.
I think that, if you go this route, you arrive at a kind of solipsism.
I’m afraid I’m a bit slow on the uptake here. Why does this require solipsism? I agree that you can go there with a discussion of consciousness, but I’m not sure how it’s necessarily tied into the fact that consciousness is how you know there’s a question in the first place. Could you explain that a bit more?
Anyway, let’s assume that your main criterion for judging whether anyone else besides yourself is conscious is their behavior (if that’s not the case, I can offer some arguments for why it should be), and that you reject the solipsistic proposition that you are the only conscious being around (ditto).
Well… Yes, I think I agree in spirit. The term “behavior” is a bit fuzzy in an important way, because a lot of the impression I have that others are conscious comes from a perception that, as far as I can tell, is every bit as basic as my ability to identify a chair by sight. I don’t see a crying person and consciously deduce sadness; the sadness seems self-evident to me. Similarly, I sometimes just get a “feel” for what someone’s emotional state is without really being able to pinpoint why I get that impression. But as long as we’re talking about a generalized sense of “behavior” that includes cues that go unnoticed by the conscious mind, then sure!
In this case, a perfect sleepwalker or a qualia-less computer that perfectly simulates having qualia, etc., is actually less parsimonious than the alternative, and therefore the concept of qualia buys you nothing
It’s not a matter of what qualia buy you. The oddity is that they’re there at all, in anything. I think you’re pointing out that it’d be very odd to have a quale-free but otherwise perfect simulation of a human mind. I agree, that would be odd. But what’s even more odd is that even though we can be extremely confident that there’s some mechanism that goes from firing neurons to qualia, we have no clue what it could be. Not just that we don’t yet know what it is, but as far as I know we don’t know what could possibly play the role of such a mechanism.
It’s almost as though we’re in the position of early 19th century natural philosophers who are trying to make sense of magnetism: “Surely, objects can’t act at a distance without a medium, so there must be some kind of stuff going on between the magnets to pull them toward one another.” Sure, that’s close enough, but if you focus on building more and more powerful microscopes to try to find that medium, you’ll be SOL. The problem in this context is that there are some hidden assumptions that are being brought to bear on the question of what magnetism is that keep us from asking the right questions.
Mind you, I don’t know if understanding consciousness will actually turn out to yield that much of a shift in our understanding of the human mind. But it does seem to be slippery in much the same way that magnetism from a billiard-balls-colliding perspective was, as I understand it. I suspect in the end consciousness will turn out to be no more mysterious than magnetism, and we’ll be quite capable of building conscious machines someday.
In case this adds some clarity: My personal best proto-guess is that consciousness is a fuzzy term that applies to both (a) the coordination of various parts of the mind, including sensory input and our sense of social relationships; and (b) the internal narrative that accompanies (a). If this fuzzily stated guess is in the right ballpark, then the reason consciousness seems like such a hard problem is that we can’t ever pin down a part of the brain that is the “seat of consciousness”, nor can we ever say exactly when a signal from the optic nerve turns into vision. Similarly, we can’t just “remove consciousness”, although we can remove parts of it (e.g., cutting out the narrator or messing with the coordination, as in meditation or alcohol).
I wouldn’t be at all surprised if this guess were totally bollocks. But hopefully that gives you some idea of what I’m guessing the end result of solving the consciousness riddle might look like.
I’m afraid I’m a bit slow on the uptake here. Why does this require solipsism?
Well, there’s exactly one being in existence that you know for sure is conscious and experiences qualia: yourself. You suspect that other beings (such as myself) are conscious as well, based on available evidence, though you can’t be sure. This, by itself, is not a problem. What evidence could you use, though? Here are some options.
You could say, “I think other humans are conscious because they have the same kind of brains that I do”, but then you’d have to exclude other potentially conscious beings, such as aliens, uploaded humans, etc., and I’m not sure if you want to go that route (let me know if you do). In addition, it’s still possible that any given human is not a human at all, but one of those perfect emulator-androids, so this doesn’t buy you much.
You could put the human under a brain scanner, and demonstrate that his brain states are similar to your own brain states, which you have identified as contributing to consciousness. If you could do that, though, then you would’ve reduced consciousness down to physical brain states, and the problem would be solved, and we wouldn’t be having this conversation (though you’d still have a problem with aliens and uploaded humans and such).
You could also observe the human’s behavior, and say, “this person behaves exactly as though he were conscious, therefore I’m going to assume that he is, until proven otherwise”. However, since you postulate the existence of androids/zombies/etc. that emulate consciousness perfectly without experiencing anything, you can’t rely on behavior, either.
Basically, try as I might, I can’t think of any piece of evidence that would let you distinguish between a being—other than yourself—who is conscious and experiences qualia, and a being who pretends to be conscious with perfect fidelity, but does not in fact experience qualia. I don’t think that such evidence could even exist, given the existence of perfect zombies (since they would be imperfect if such evidence existed). Thus, you are forced to conclude that the only being who is conscious is yourself, which is a kind of solipsism (though not the classic, existential kind).
Similarly, I sometimes just get a “feel” for what someone’s emotional state is without really being able to pinpoint why I get that impression. But as long as we’re talking about a generalized sense of “behavior” that includes cues that go unnoticed by the conscious mind, then sure!
It seems like we agree on this point, then—yay! Of course, I would go one step further, and argue that there’s nothing special about our subconscious mind. We know how some parts of it work, we have mapped them down to physical areas of the brain, and our maps are getting better every day.
I think you’re pointing out that it’d be very odd to have a quale-free but otherwise perfect simulation of a human mind. I agree, that would be odd.
I don’t just think it would be odd, I think it would be logically inconsistent, as long as you’re willing to assume that people other than yourself are, in fact, conscious. If you’re not willing to assume that, then you arrive at a kind of solipsism, which has its own problems.
But what’s even more odd is that even though we can be extremely confident that there’s some mechanism that goes from firing neurons to qualia, we have no clue what it could be.
Right, which is why I reject the existence of qualia as an independent entity altogether. As per your magnetism analogy:
“Surely, objects can’t act at a distance without a medium, so there must be some kind of stuff going on between the magnets to pull them toward one another.” Sure, that’s close enough, but if you focus on building more and more powerful microscopes to try to find that medium, you’ll be SOL.
Right, and the problem here is not that your microscopes aren’t powerful enough, but that your very idea of a magnetic attraction medium is flawed. In reality, there are (probably) no such things as “magnets” at all; there are just collections of waveforms of various kinds (again, probably). You choose to call some of them “magnets” and some others “apples”, but those words are just grossly simplified abstractions that you have created in order to talk about the world—because if you had to describe every single quark of it, you’d never get anywhere.
Similarly, “qualia” and “consciousness” are just abstractions that you’ve created in order to talk about human brains—including your own brain. I understand that you can observe your own consciousness “from the inside”, which is not true of magnets, but I don’t see this as an especially interesting fact. After all, you can observe gravity “from the inside”, as well (your body is heavy, and tends to fall down a lot), but that doesn’t mean that your own gravity is somehow different from my gravity, or a rock’s gravity, because as far as gravity is concerned, you aren’t special.
If this fuzzily stated guess is in the right ballpark, then the reason consciousness seems like such a hard problem is that we can’t ever pin down a part of the brain that is the “seat of consciousness”, nor can we ever say exactly when a signal from the optic nerve turns into vision.
I don’t think that we need to necessarily pin down a single part of the brain that is the “seat of consciousness”. We can’t pin down a single part that constitutes the “seat of vision”, either, but human vision is nonetheless fairly well understood by now. The signal from the optic nerve is just part of the larger mechanism which includes the retina, the optic nerve, the visual cortex, and ultimately a large portion of the brain. There’s no point at which electrochemical signals turn into vision, because these signals are a part of vision. Similarly, there isn’t a single “seat of blood flow” within the human body, but blood flow is likewise fairly well understood.
Similarly, we can’t just “remove consciousness”, although we can remove parts of it (e.g., cutting out the narrator or messing with the coordination, as in meditation or alcohol).
I’m not sure I follow your reasoning here. What do you mean by “removing consciousness” and “cutting out the narrator”, and why is it important? Drunk (or meditating) people are still conscious, after a fashion.
Basically, try as I might, I can’t think of any piece of evidence that would let you distinguish between a being—other than yourself—who is conscious and experiences qualia, and a being who pretends to be conscious with perfect fidelity, but does not in fact experience qualia. I don’t think that such evidence could even exist, given the existence of perfect zombies (since they would be imperfect if such evidence existed). Thus, you are forced to conclude that the only being who is conscious is yourself, which is a kind of solipsism (though not the classic, existential kind).
Ah! Okay. Three points:
I think you’re arguing for something I agree with anyway. I don’t think of qualia as being inherently independent of everything else. I think of qualia as self-evident. I don’t think my experience of green can be entirely separated from the physical process of perceiving light of a certain wavelength, but I do think it’s fair to say that I’m conscious of the green color of the “Help” link below this text box.
Even if I did think qualia were divisible from the physical processes involved in perception (which I think would force dualism), I wouldn’t be able to conclude that I’m the only one who is conscious. I would have to conclude that as far as I currently know, I have no way of knowing who else is or isn’t conscious. So solipsism would then be a possibility, but not a logical necessity.
I’m not arguing that p-zombies can exist. I seriously doubt they can. If this is a point you’ve been trying to argue me into agreeing, please note that we started out agreeing in the first place!
It seems like we agree on this point, then—yay! Of course, I would go one step further, and argue that there’s nothing special about our subconscious mind.
Er… Except that we’re not conscious of it! I’d say that’s pretty special—as long as we agree that “special” means “different” rather than “mysterious”.
I don’t just think it would be odd, I think it would be logically inconsistent, as long as you’re willing to assume that people other than yourself are, in fact, conscious.
Sorry, I meant “odd” in the artistically understated sense. We agree on this.
I reject the existence of qualia as an independent entity altogether.
So here, I think, is a source of our miscommunication. I also reject qualia as being independent.
I think part of the problem we’re running into here is that by naming qualia as nouns and talking about whether it’s possible to add or remove them, we’ve inadvertently employed our parietal cortices to make sense of conscious experience. It’s like how people talk about “government” as though it’s a person when, really, they’re just reifying complex social behavior (and as a result often hiding a lot of complexity from themselves).
“Quale” is a name that has been, sadly, agreed upon to capture the experience of blueness, or the sense of a melody, or what-have-you. We needed some kind of word to distinguish these components of conscious experience from the physical mechanisms of perception because there is a difference, just like there’s a difference between a software program and the physical processes that result in the program running. Yes, as far as the universe is concerned, it’s just quarks quarking about. But just like it’s helpful to talk about chairs and doors, it’s helpful to talk about qualia in order to understand what our experience consists of.
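To put the program-versus-process distinction concretely, here’s a toy Python sketch (my own illustration, not anything from our exchange): the same abstract behavior can be realized by entirely different low-level mechanisms, which is the sense in which “the program” isn’t identical to any one physical process that runs it.

```python
# Two completely different realizations of the same abstract behavior
# ("sort a list"). The observable input-output behavior is identical,
# but the low-level "events" (adjacent swaps vs. splitting and merging)
# have almost nothing in common.

def bubble_sort(xs):
    """One realization: repeatedly swap adjacent out-of-order elements."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs):
    """A different realization: split in half, sort halves, merge."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

data = [3, 1, 2]
# Identical behavior, radically different underlying processes:
assert bubble_sort(data) == merge_sort(data) == [1, 2, 3]
```

The analogy is loose, of course, but it illustrates why pointing at the physical substrate alone doesn’t exhaust what we mean when we talk about the higher-level description.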
I suspect in the future we’ll be able to agree that “qualia” was actually a really bad term to use, with the benefit of hindsight. I suspect consciousness will turn out to be a reification, and thus talking about its components as though they’re things just throws us off the track and creates confusion in the guise of a mystery. But even if we dump the term “qualia”, we’re still stuck with the fact that we experience, and there’s a qualitative sense in which experience doesn’t seem like it’s even in-principle describable in terms of firing neurons. If you told me that it was discovered that there’s actually a region of the brain that’s responsible for adding qualia to vision (pardoning the horrid implicit metaphor), I would feel like hardly anything had been explained. So you found circuitry that, when monkeyed with, makes all yellow vanish from my conscious awareness. But how did yellow appear in the first place, as opposed to being just neuronal signals bouncing around? Pointing to a region of the brain and saying “That does it” still leaves me baffled as to how. I don’t see how explaining the circuitry of that brain region in perfect synapse-level detail could answer that question.
However, I could totally see consciousness turning out to have this “hard problem” because it’s like trying to describe where Mario is in terms of the transistors in a game console.
Similarly, “qualia” and “consciousness” are just abstractions that you’ve created in order to talk about human brains—including your own brain. I understand that you can observe your own consciousness “from the inside”, which is not true of magnets, but I don’t see this as an especially interesting fact.
On this point, I think we might just be frozen in disagreement. You seem to be taking as practically axiomatic that there’s nothing significantly different about consciousness as compared to anything else, like gravity. To me, that view of consciousness is internally incoherent. You can make sense of gravity as an outside observer, but you can’t make sense of your own consciousness as an outside observer. That’s hugely relevant for any attempt to approach consciousness with the same empirical eye as used on gravity, or magnetism, or any other physical phenomenon. We can look at those phenomena from a position that largely doesn’t interact with them in a relevant way, but I cannot fathom a comparable place to stand in order to be conscious of consciousness while not interacting with it.
This is not to say that consciousness is intrinsically more mysterious than gravity. I’m just utterly dumbfounded that you can think that your ability to be aware of anything is somehow no more interesting than any other random phenomenon in the universe.
I don’t think that we need to necessarily pin down a single part of the brain that is the “seat of consciousness”.
I don’t think so either.
We can’t pin down a single part that constitutes the “seat of vision”, either, but human vision is nonetheless fairly well understood by now.
...
We seem to keep doing this. I agree, because that’s part of the point I was making.
Similarly, we can’t just “remove consciousness”, although we can remove parts of it (e.g., cutting out the narrator or messing with the coordination, as in meditation or alcohol).
I’m not sure I follow your reasoning here. What do you mean by “removing consciousness” and “cutting out the narrator”, and why is it important? Drunk (or meditating) people are still conscious, after a fashion.
Removing consciousness is exactly the process that would turn a person into a p-zombie, yes? So what I’ve suggested as a general direction to consider for how consciousness appears passes the sanity test of not allowing p-zombies.
As for the narrator… Well, you know how there’s a kind of running commentary going on in your mind? It’s possible to stop that narration, and if you do so it changes the quality of consciousness by quite a lot.
Meditation, alcohol, and quite a number of other things can all monkey with the way parts of the mind coordinate and also get the narrator to stop narrating (or at least not become an implicit center of attention anymore). And I’m not claiming that doing these things removes consciousness. Quite the opposite, I’m pointing out that drunk and meditating people have a different kind of conscious experience.
I would have to conclude that as far as I currently know, I have no way of knowing who else is or isn’t conscious. So solipsism would then be a possibility, but not a logical necessity.
True, but you can carry the reasoning one step further. The claim “other people are conscious” is a positive claim. As such, it requires positive evidence (unless it’s logically necessary, which in this case it’s not). If your concept of qualia/consciousness precludes the possibility of evidence, you’d be justified in rejecting the claim.
Er… Except that we’re not conscious of it! I’d say that’s pretty special—as long as we agree that “special” means “different” rather than “mysterious”.
Fair enough.
We needed some kind of word to distinguish these components of conscious experience from the physical mechanisms of perception because there is a difference...
Well, it depends on what you mean by “perception”. If you mean, for example, “light hitting my retina and producing a signal in my optic nerve”, then yes, experience is different—because the aforementioned process is a component of it. The overall process of experience involves your visual cortex, and ultimately your entire brain, and there’s a lot more stuff that goes on in there.
...just like there’s a difference between a software program and the physical processes that result in the program running.
Hmm, I don’t know, is there such a difference? As far as I understand, when Firefox is running, we can (plus or minus some engineering constraints) reduce its functionality down to the individual electrons inside the integrated circuits of my computer (plus or minus some quantum physics constraints). Where does the difference come in?
...and there’s a qualitative sense in which experience doesn’t seem like it’s even in-principle describable in terms of firing neurons.
I lack this sense, apparently :-(
If you told me that it was discovered that there’s actually a region of the brain that’s responsible for adding qualia to vision (pardoning the horrid implicit metaphor), I would feel like hardly anything had been explained.
As it happens, there’s a real neurological phenomenon called “blindsight” which is similar to what you’re describing. It’s relatively well understood (AFAIK), and, in this specific case, we can indeed point to a specific region of the brain that causes it. So, at least in case of vision, we can actually map the presence or absence of conscious visual experience to a specific area of the brain. I suspect that there are scientists who are even now busily pursuing further explanations.
You seem to be taking as practically axiomatic that there’s nothing significantly different about consciousness as compared to anything else, like gravity.
“Axiomatic” is perhaps too strong a word. I just don’t think that it’s possible to treat consciousness as being categorically different from other phenomena, such as gravity, while still maintaining a logically and epistemically (if that’s a word) consistent, non-solipsistic worldview.
You can make sense of gravity as an outside observer, but you can’t make sense of your own consciousness as an outside observer. [emphasis mine]
Ok, let me temporarily grant you this premise. What about the consciousness of other people? Can I make sense of those consciousnesses as an outside observer? If the answer is “no”, then consciousness becomes totally mysterious, because I can only observe other people’s consciousness from the outside. If the answer is “yes”, then you end up saying, “my own consciousness is categorically different from anyone else’s”, which seems unlikely to be true, since you’re just a regular human like the rest of us.
but I cannot fathom a comparable place to stand in order to be conscious of consciousness while not interacting with it.
I agree, but I don’t think this means that you can’t “make sense” of your consciousness regardless. In a way, this entire site is a toolkit for making sense of your own consciousness—specifically, its biases—and for using this understanding to alter it.
Removing consciousness is exactly the process that would turn a person into a p-zombie, yes? … Quite the opposite, I’m pointing out that drunk and meditating people have a different kind of conscious experience.
Ah, ok, I get it, and I agree, but I’m still not sure how this relates to the point you’re making. If anything, it offers tangential evidence against it—because the existence of a relatively simple physical mechanism (such as alcohol) that can alter your consciousness points the way to reducing your own consciousness down to a collection of strictly physical interactions.
You know, I think we’re getting lost in the little details here, and we keep communicating past one another.
First, let me emphasize that I do think we’ll eventually be able to explain consciousness in a reductionist way. I’ve tried to make that clear, but some of your arguments make me wonder if I’ve failed to convey that.
Second, remember that this whole discussion arose because you questioned the value of trying to answer the hard problem of consciousness. I now suspect what you originally meant was that you don’t think there is a hard problem, so there wasn’t anything to answer. And in an ultimate sense, I think you’re right: I think people like Thomas Nagel are trying to argue that we need a complete paradigm shift in order to explain how qualia exist, and I think they’re wrong. Eventually it almost certainly comes down to brain behavior. Even if it’s not clear what that pathway could be, that’s a description of human creativity and not of the intrinsic mysteriousness of the phenomenon.
But what you said was this:
Is the answer even relevant?
As far as I understand, there currently exists no “qualia-detector”, and building one may be impossible in principle. Thus, in the absence of any ability to detect qualia, and given the way you’d set up your thought experiment about the sleepwalker, there’s absolutely no way to tell a perfect sleepwalker from an awake person. As far as everyone—including the potential sleepwalker—is concerned, the two cases are completely functionally equivalent. Thus, it doesn’t matter who has qualia and who doesn’t, since these qualia do not affect anything that we can detect. They are kind of like souls or Russellian teapots that way.
This, to me, really sounds like you’re saying we can’t detect qualia, so we might as well assume there are no qualia, so we shouldn’t worry about how qualia arise. Maybe that wasn’t your point. But if it was, I stand in firm disagreement because I think that qualia are the only things we can care about!
For some reason I can’t seem to convey why I think that. I feel rather like I’m pointing at the sun and saying “Look! Light!” and you’re responding with “We don’t have a way of detecting the light, so we might as well assume it isn’t there.” (Please excuse the flaw in the analogy in that we can detect light. Pretend for the moment that we can’t.) All I can do is blink stupidly and point again at the sun. If I can’t get you to acknowledge that you, too, can see, then no amount of argumentation is going to get the point across.
So all I’m left with is an insistence that if my understanding of the universe is completely off and it turns out to be possible to remove conscious experience from people, I most certainly would not want that done to me—not that I could care afterwards, but I absolutely would care beforehand! So to me, the presence or absence of qualia matters a lot.
But if you cannot relate to that at all, I don’t think I’ll ever be able to convey why I feel that way. I’m completely at a loss as to how this could possibly be a topic of disagreement.
You know, I think we’re getting lost in the little details here, and we keep communicating past one another.
Sorry, you’re right, I tend to do that a lot :-(
I now suspect what you originally meant was that you don’t think there is a hard problem, so there wasn’t anything to answer.
That’s correct, I think; though obviously I’m all for acquiring a better understanding of consciousness.
Eventually it almost certainly comes down to brain behavior. Even if it’s not clear what that pathway could be...
I think it’s not entirely clear what that pathway is, but there are some very good clues regarding what that pathway could be, since certain aspects of consciousness (such as vision, f.ex.) are reasonably well understood.
This, to me, really sounds like you’re saying we can’t detect qualia, so we might as well assume there are no qualia, so we shouldn’t worry about how qualia arise.
Pretty much, but I think we should make a distinction between a person’s own qualia, as experienced by the person, and the qualia of other people, from the point of view of that same person. Let’s call the person’s own qualia “P” and everyone else’s qualia (from the point of view of the person) “Q”.
Obviously, each person individually can detect P. Until some sort of telepathy gets developed (assuming that such a thing is possible in principle), no person can detect Q (at least, not directly).
You seem to be saying—and I could be wrong about this, so I apologize in advance if that’s the case—that, in order to build a general theory of consciousness, we need to figure out a way to study P in an objective way. This is hard (I would say impossible), since P is by its nature subjective, and thus inaccessible to anyone other than yourself.
I, on the other hand, am arguing that a general theory of consciousness can be built based solely on the same kind of evidence that compels us to believe that other people experience things—i.e., that Q exists and is reducible to brain states. Let’s say that we built some sort of statistical model of consciousness. We can estimate (with a reasonably high degree of certainty) what any given person will experience in any situation, by using this model and plugging in a whole bunch of parameters (representing the person and the situation). I think you would agree that such a model can, in principle, exist (though please correct me if I’m wrong). Then, would you agree that this model can also predict what you, yourself, will experience in a given situation? If not, then why not? If yes, then how is P any different from Q?
So all I’m left with is an insistence that if my understanding of the universe is completely off and it turns out to be possible to remove conscious experience from people, I most certainly would not want that done to me...
I agree, but I believe that removing a person’s consciousness will necessarily alter his behavior; in most cases, this alteration would be quite drastic. Thus, I definitely wouldn’t want this done to me, or to anyone else, for that matter.
However, I think you are contemplating a situation where we remove a person’s consciousness, and yet his behavior (which includes talking about his consciousness) remains exactly the same. I argue that, if such a thing is possible, then consciousness is a null concept, since it has literally no effect on anything we could ever detect. As far as I understand, you agree with me with respect to Q, but disagree with respect to P. But then, you must necessarily believe that P is categorically different from Q, somehow… mustn’t you?
If you do believe this, then you must also believe that any model of consciousness that we could possibly build will work correctly for anyone other than yourself. This seems highly unlikely to me, however—what makes you such an outlier? You are a human like the rest of us, after all. And if you are not an outlier, and yet you believe that the model won’t function for you, then you must believe that such a model cannot be built in principle (i.e., it won’t function for anyone else, either), and yet I think you would deny this. As I see it, the only way to reconcile these contradictions is to reject the idea that P is categorically different from Q, and thus there’s nothing special about your own qualia, and thus the problem of consciousness isn’t any harder than the problem of, say, unifying gravity with the other fundamental forces (which is pretty hard, admittedly).
Apparently my reply is “too long”, so I’ll reply in two parts.
PART 1:
Sorry, you’re right, I tend to do that a lot :-(
Hey, apparently I do too!
That’s correct, I think.
Excellent.
I think it’s not entirely clear what that pathway is, but there are some very good clues regarding what that pathway could be, since certain aspects of consciousness (such as vision, f.ex.) are reasonably well understood.
Um… Sure, let’s go with that. There’s a nuance here that disregards the hard problem, but I don’t think we’ll get much mileage repeating the same kind of detail-focusing we’ve been doing. :-P
I think we should make a distinction between a person’s own qualia, as experienced by the person, and the qualia of other people, from the point of view of that same person. Let’s call the person’s own qualia “P” and everyone else’s qualia (from the point of view of the person) “Q”.
Sure, agreed.
I should warn you, though, that I’m not sure that this distinction is coherent. There’s some reason to suspect that our perception of others as conscious is part of how we construct our sense of self. So, it might not make sense to talk about “my” conscious experience as distinct from “your” conscious experience as though we start with a self and then grant it consciousness. It might be the other way around.
I emphasize this because explaining Q without ever touching P might not tell us much about P. If we start with conscious experience and then define the line between “my” experience and “others’” experience by the distinction between P and Q, all we do by detailing Q is explain our impression that others are conscious. We might think we’re addressing others’ P, but we never actually address our P (which, it seems, is the only P we can ever have access to—which might be because we define “me” in part by “that which has access to P” and “not me” by “that which doesn’t have access to P”).
So with that warning, I’ll just run with the intuitive distinction between P and Q that I believe you’re suggesting.
Obviously, each person individually can detect P. Until some sort of telepathy gets developed (assuming that such a thing is possible in principle), no person can detect Q (at least, not directly).
I agree, and I would go just a little bit farther: I would argue that it’s not possible even in principle to detect Q as a kind of P. If I experience another person’s experience from a first-person perspective, it’s not their experience anymore. It’s mine. Sure, we might share it, like two people watching the same movie. But the P I have access to is still my own, and the Q that I’m supposedly accessing as a kind of P is still removed: I still have to assume that the person sitting next to me is also experiencing the movie.
You seem to be saying—and I could be wrong about this, so I apologize in advance if that’s the case—that, in order to build a general theory of consciousness, we need to figure out a way to study P in an objective way. This is hard (I would say, impossible), since P is by its nature subjective, and thus inaccessible to anyone other than yourself.
Yeah, I think that’s a reasonably fair summary. :-)
I, on the other hand, am arguing that a general theory of consciousness can be built based solely on the same kind of evidence that compels us to believe that other people experience things—i.e., that Q exists and is reducible to brain states.
I agree with you on this. I just think it’s important to recognize that what we will have explained is our impression that others are conscious. That might give us insight into P, and it seems implausible that it wouldn’t, but it also doesn’t seem clear what kind of mechanism it could possibly reveal for P. At least to me!
I think you would agree that such a model can, in principle, exist (though please correct me if I’m wrong).
Yes, I agree.
Then, would you agree that this model can also predict what you, yourself, will experience in a given situation? If not, then why not? If yes, then how is P any different from Q?
I’m going to go with “maybe”, which I think requires me to answer both the “yes” and “no” branches. :-P
I think it’s certainly plausible that this model of Q could predict the behavior of P. But it needn’t do so. Why not? Because P and Q are different for precisely the reason that we gave them different names. I’m under the impression that my wife is conscious as a sort of immediate perception; surely I deduce it somehow, probably by my perception of her as a social entity with whom I could in principle interact, but that isn’t how it seems to me. I just see her as conscious. So when we explore my perception of her as conscious and we develop a thorough model of her consciousness as perceived by me (and others), what that model does is predict how our perception of her conscious experience changes.
But it requires an extra step to say that if I were her, I would be experiencing those changes as P.
Now, I suspect that this model would work out just fine. I suspect that when we determine that we’ve modeled Q, that the model of Q will predict my P. (I see this in the Enneagram all the time, in fact: it describes others’ experiences, and when I spell out their experiences they often give an “I’ve been caught!” kind of reaction. When someone does the same to me, I sure feel caught!) After all, part of the impression I get of Q comes from the fact that I know that I would react the way the other is reacting if I were to experience X, which draws me to think that they’re experiencing X. So for it to fail to model P, it seems likely that I’d have to react in a way that I would not recognize from the outside (assuming experiencing my own P as Q can be turned into a coherent idea). That seems like it’d be pretty weird.
But we’re still left with the fact that the application of the theory to Q feels tremendously different than its application to P. The fact that the model is attempting to explain in part why P and Q are different in the first place makes it difficult for me to see how an explanation of Q alone is going to do it. It feels as though its ability to capture P would be almost coincidental.
I think you are contemplating a situation where we remove a person’s consciousness, and yet his behavior (which includes talking about his consciousness) remains exactly the same. I argue that, if such a thing is possible, then consciousness is a null concept, since it has literally no effect on anything we could ever detect.
Yep. I believe that’s Eliezer’s argument (the “anti-zombie principle” I think it was called), and I agree. That’s why I prefaced it with saying that my understanding of the universe would have to be pretty far off in order for my self-zombification to even be possible. So, given the highly improbable event that p-zombies are possible, I sure wouldn’t want to become one! Ergo, my own qualia matter a great deal to me regardless of anyone else’s ability to detect them.
As far as I understand, you agree with me with respect to Q, but disagree with respect to P. But then, you must necessarily believe that P is categorically different from Q, somehow… mustn’t you?
...
I’m not sure what it would mean for me to agree in terms of Q but not P. I’m not quite sure what you’re suggesting I’m saying. So maybe you’re right, but I honestly don’t know!
If you do believe this, then you must also believe that any model of consciousness that we could possibly build will work correctly for anyone other than yourself. This seems highly unlikely to me, however—what makes you such an outlier? You are a human like the rest of us, after all.
Mmm… I’m not saying that I, personally, am special. I’m saying that an experiencing subject is special from the point of view of the experiencing subject, precisely because P is not the same as Q. It so happens that I’m an experiencing subject, so from my point of view my perspective is extremely special.
Remember that science doesn’t discover anything at all. Scientists do. Scientists explore natural phenomena and run experiments and experience the results and come to conclusions. So it’s not that exploring Q would just happen and then a model emerges from the mist. Instead, people explore Q and people develop a model that people can see predicts their impressions of Q. That’s what empiricism means!
I emphasize this because every description is always from some point of view. For most phenomena, we’ve found a way to take a point of view that doesn’t make the difference between P and Q all that relevant. A passive-voice description of gravity seems to hold from both P and Q, for instance. But when we’re trying to explore what makes P and Q different, we can’t start by modulating their difference. We have to decide what the point of view we’re taking is, and since part of what we’re studying is the phenomenon of there being points of view in the first place, that decision is going to matter a lot.
And if you are not an outlier, and yet you believe that the model won’t function for you, then you must believe that such a model cannot be built in principle (i.e., it won’t function for anyone else, either), and yet I think you would deny this.
I think that if a model of Q fails to inform us about P, then it will fail for P regardless of whose perspective we take.
However, I suspect that a good model of Q will tell us pretty much everything about P. I just can’t fathom at this point how it might do so.
As I see it, the only way to reconcile these contradictions is to reject the idea that P is categorically different from Q, and thus there’s nothing special about your own qualia, and thus the problem of consciousness isn’t any harder than the problem of, say, unifying gravity with the other fundamental forces (which is pretty hard, admittedly).
Well, part of the problem is that we know P is categorically different than Q. Or rather, I know my P is categorically different than Q, and if Q is going to have any fidelity, everyone else will be under the same impression from their own points of view.
I can guarantee that any model that claims I don’t have conscious experience is flat-out wrong. This is perhaps the only thing I’d be willing to say has a probability of 1 of being true. I might discover that I’m not experiencing what I thought I was, but the fact that I’m under the impression of seeing these words, for instance, is something for which I believe it is not possible even in principle to provide me evidence against. (Yes, I know how strong a claim that is. I suppose that since I’m open to having this perspective challenged, I should still assign a probability of less than 1 to it. But if anything deserves a probability of 1 of being true, I’d say the fact that there is P-type experience is it!)
However, I can’t make a claim like that about Q. I’m certainly under the impression that my wife is conscious, but maybe she’s not. Maybe she doesn’t have P-type experience. I don’t know how I could discover that, but if it were possible to discover it and it turned out that she were not conscious, I wouldn’t view that as a contradiction in terms. It would just accent the difference between P-type experience and my impression of Q-type experience. Getting evidence for my wife not being conscious doesn’t seem to violate what it means for something to be evidence the way “evidence” against my own consciousness would be.
I’m oversimplifying somewhat since consciousness almost certainly isn’t a “yes” or “no” thing. Buddhists often claim that P-type consciousness can be made “more conscious” through mindfulness, and that once you’ve developed somewhat in that direction you’ll be able to look back and consider your past self to not have been “truly” conscious. However, the point I’m trying to make here is that we actually start with the immediate fact that P is different than Q, and it’s upon this foundation that empiricism is built. We can’t then turn around and deny the difference from an empirical point of view!
However, in spirit I think I agree with you. I think we’ll end up understanding P through Q. I don’t see how since I don’t see how to connect the two empirically even in principle. But science has surprised philosophers for three hundred years, so why stop now? :-D
Apparently my reply is “too long”, so I’ll reply in two parts.
Bah! Curse you, machine overlords! *shakes fist*
I should warn you, though, that I’m not sure that this distinction is coherent. There’s some reason to suspect that our perception of others as conscious is part of how we construct our sense of self.
I did not mean to imply that. In fact, I agree with you in principle when you say,
So, it might not make sense to talk about “my” conscious experience as distinct from “your” conscious experience as though we start with a self and then grant it consciousness. It might be the other way around.
Sure, it might be, or something else might be the case; my P and Q categories were meant to be purely descriptive, not explanatory. Your conscious experience, of whose existence you are certain, and which you are experiencing at this very minute, is P. Other people’s conscious experience, whose existence you can never personally experience, but can only infer based on available evidence, intuition, or whatever, is Q. That’s all I meant. Thus, when you say, ”...we might think we’re addressing others’ P, but we never actually address our P”, you are confusing the terminology; there’s no such thing as “other people’s P”, there’s only P and Q. You may suspect that other people have conscious experiences, but the best you can do is lump them into Q.
You move on to say several things which, I believe, reinforce my argument (my apologies if I seem to be quote-mining you out of context; please let me know if I’ve done so by accident):
I emphasize this because explaining Q without ever touching P might not tell us much about P. … I would argue that it’s not possible even in principle to detect Q as a kind of P. … it’s important to recognize that what we will have explained is our impression that others are conscious, …but it also doesn’t seem clear what kind of mechanism it could possibly reveal for P. … But we’re still left with the fact that the application of the theory to Q feels tremendously different than its application to P. … I’m saying that an experiencing subject is special from the point of view of the experiencing subject, precisely because P is not the same as Q.
You appear to be very committed to the idea that your own experience is categorically different from anyone else’s, and that a general model of consciousness—assuming it was even possible to create such a thing—may not tell you anything about your own experience. The problem with this statement, though, is that there exists one, and only one, “experiencing subject” in this Universe: yourself. As I said above, you suspect that other people (such as your wife, for example) are experiencing things, but you aren’t sure of it; and you don’t know if they experience things the same way that you do, or whether it even makes sense to ask that latter question. There are two possible corollaries to this fact (well, there are two that I can think of):
1). Other people in this world are categorically similar to yourself, and thus a general model of consciousness can never be developed, in principle, because such a model will fail to predict P, as seen from the point of view of every person individually. Thus, consciousness is completely mysterious and inexplicable.
2). You are special. A general model of consciousness can be developed, but it will work for everyone other than yourself, specifically.
Option #2 is solipsism. Option #1 may seem attractive on the surface, but it contradicts the fact that we do have models of consciousness which work quite well—they are employed by psychologists, advertisers, political speech writers, and even computer scientists, f.ex. when they build things like HDR photo rendering or addictive Facebook games. One way to dodge this contradiction would be to say,
3). The models of consciousness that we currently possess do not actually model consciousness; they just model behavior. Consciousness is not correlated with behavior in any significant way.
Option #3, however, puts you on the road to discarding consciousness altogether as a null concept.
I can’t think of any way to resolve these contradictions, other than to posit that there’s nothing special about your own consciousness. Sure, it feels special in a truly visceral way, but there are lots of things we feel that aren’t actually true: the Earth is not flat, the stars are really huge and really hot, but very far away; choosing a different door in the Monty Hall scenario is the correct choice, etc. Thus, I disagree with you when you say,
However, the point I’m trying to make here is that we actually start with the immediate fact that P is different than Q, and it’s upon this foundation that empiricism is built.
Empiricism is based on the foundation of avoiding cognitive biases, and I am inclined to treat the (admittedly, very strong) intuition that I am very special as just another kind of a cognitive bias. And while it is true that ”...people explore Q and people develop a model that people can see predicts their impressions of Q...”, I don’t see why this is important. Why does it matter who (or what) came up with the model? Doesn’t the predictive power of the model (or lack thereof) matter much, much more?
It’s nice to see this discussion converging! I was afraid we’d get mired in confusing language forever and have to give up at some point. :-(
Bah! Curse you, machine overlords! *shakes fist*
:-D
...my P and Q categories were meant to be purely descriptive, not explanatory. Your conscious experience, of whose existence you are certain, and which you are experiencing at this very minute, is P. Other people’s conscious experience, whose existence you can never personally experience, but can only infer based on available evidence, intuition, or whatever, is Q.
Ah, okay. I thought you meant, “Given a subject, that subject’s experience is P, and others’ is Q.” The above distinction seems more coherent.
Let’s do away with possessive pronouns when referring to P and Q, then. We’ll say P is phenomenal experience (what I’m tempted to call “my experience” but am explicitly avoiding assigning to a particular subject since my sense of myself as a subject might well arise from the existence of P), and Q is the part of P that gives the impression that we describe as “Others seem to be conscious.” I think we can agree that those two phenomena are different, even if Q seems to be a part of P. (I have a hard time conceiving of a kind of experience that’s not part of P, for that matter!)
Sound good?
...you are confusing the terminology...
Sorry about that. I see what you mean.
(my apologies if I seem to be quote-mining you out of context; please let me know if I’ve done so by accident)
It doesn’t look that way to me at first brush. Thanks for the consideration, though. :-)
You appear to be very committed to the idea that your own experience is categorically different from anyone else’s, and that a general model of consciousness—assuming it was even possible to create such a thing—may not tell you anything about your own experience.
I think here is where the use of possessive pronouns betrays us. What I’m very committed to is that P is more than Q, so a priori knowing everything about Q doesn’t necessarily tell us anything about why P arises in the first place. The only reason we seem to think this is likely, as far as I know, is that Q is specifically the impression that P-like phenomena exist “in others.” (I honestly can’t think of a way to describe the relationship between P and Q without talking about Q in terms of others. I think that might be intrinsic to the definition of Q.)
What we will have explained with a full and robust theory of Q is why the impression of “others who have P-type experience” arises. (Again, I don’t know how else to phrase that.) That wouldn’t tell us why red appears as red, although it would tell us why others who are conscious (if any) would be under the impression that we experience red as red.
Or said a little differently, it seems perfectly plausible to me that my impression that others are conscious might have nothing to do with why I’m conscious. It might be based solely on the fact that I’m conscious.
Now, if it turns out that those two really don’t have anything to do with one another, that would be surprising to me because of the nature of Q: my impression is that others are conscious for the same reason I am. But my evidence for others’ consciousness is of a completely different nature than that of my own. So, if they really don’t have anything to do with one another, then solipsism seems much more likely.
But even in that solipsistic case, I wouldn’t say that there’s something special about me. I’d say there’s something special about P in that it’s the only perspective possible. It just so happens that from the only possible perspective, there is this impression of a particular identity, which is under the delusion that there are other, comparable identities “out there”. In this situation, there’s no other perspective one can don in order to say that there’s nothing special about me as compared to any other random human. I’m special because I’m the one whose identity is wrapped up in P, and in a solipsistic universe there’s no one else like that. As far as I know, that’s what solipsism means.
(Of course, because of Q, I would predict that you would make the same argument about yourself. But I know better! :-P )
I’ll say once more that I suspect that a full theory of Q would, indeed, go a long way to explaining P. But I’m also aware that I’m under this impression because of Q. This makes it extremely difficult to fathom what the connection between a Q-explanation and a P-explanation could possibly look like. After all, if such a connection did not exist, I would still have a strong suspicion that a Q-explanation would yield a P-explanation.
Sure, it feels special in a truly visceral way, but there are lots of things we feel that aren’t actually true: the Earth is not flat, the stars are really huge and really hot, but very far away; choosing a different door in the Monty Hall scenario is the correct choice, etc.
I don’t think this comparison works because of a recursive element that’s in consciousness. With those other phenomena, we can look at an aspect of P, apply a mental model, and predict what the next experience in P will be. But what is to be explained is the arising of P in the first place. It’s hard to make sense of what making predictions in that context would even mean, in part because we can’t experience P from outside of P. We can’t look at P as a whole the way we can look at our visual impression that the Earth is flat.
Empiricism is based on the foundation of avoiding cognitive biases, and I am inclined to treat the (admittedly, very strong) intuition that I am very special as just another kind of a cognitive bias.
I’m inclined to agree. The problem is that there isn’t a perspective to take from where you can say that your ability to take perspectives is an external phenomenon. It becomes very convoluted to even try to say what it would mean for the fact that your subjective experience is special to you to be a bias. Biased for whom, in what way? What’s the objective truth that this “bias” is causing you to mentally deviate from? It’s not an impression that my consciousness is different to me than my impression of others’ consciousness is; rather, it’s a fact, as objective as any fact could possibly be. I could totally believe that there’s some kind of weird cognitive illusion trick being played on me such that I’m not actually writing these words, but there is no possible way for my impression that I’m writing these words to not actually exist. What would that even mean? I’m more certain that I’m under the impression that I’m experiencing what I’m experiencing than of anything else, bar none. And I can be so certain of this because the only way to offer me evidence to the contrary is through experience.
I think part of the tangle here is this implicit idea that there’s what Nagel calls a “view from nowhere” that we can always take to describe phenomena. It is, for instance, the idea that 2+2=4 is an objective fact that is part of the universe itself rather than just a facet of how we happen to experience it. It’s true no matter who is talking, and disagreement with that fact is a form of being objectively wrong. But that model—the idea that there’s this objective truth out there independent of subjective experience—is not something we can ever even in principle get evidence for. It’s much like how you can never know for sure that you’re not dreaming: any test you can perform is a test you can dream. There’s no way out even in principle. This doesn’t mean you are dreaming, but it does mean that you can’t use the supposed fact that you’re not dreaming as part of your evidence that you’re not dreaming. In the same sort of way, there’s no way to get evidence that there’s an objective world outside of your experience. That doesn’t mean it isn’t there, but it does mean that there’s no way to get evidence for that world’s existence. Any such evidence is evidence you’d have to experience.
(Let me emphasize here that I’m not arguing against reductionistic materialism. I’m just pointing out a tangle in attempts to use reductionistic materialism to explain the ability to experience. I’m sure we can come up with a model of consciousness that works within reductionistic materialism, but it’s not at all clear how that model could possibly be true to the a priori fact that the model itself arises out of our experiences.)
So how do you get outside of experience in order to demonstrate how experience itself arises?
The question doesn’t seem even in principle answerable.
That’s why it’s called the hard problem of consciousness!
“Given a subject, that subject’s experience is P, and others’ is Q.” The above distinction seems more coherent.
Oops, actually, the latter definition is closer to what I had in mind. It seems like we need three letters:
P: Your own subjective personal experience.
Q: The personal subjective experience that you suspect other people are having, which may be similar to yours in some way; or, as you put it, the impression that “others have P-type experience”. You have no way of accessing this experience directly, and no way of experiencing it yourself.
Pq: “The part of P that gives the impression that we describe as ‘Others seem to be conscious.’” Pq is all the evidence you have for Q’s existence.
Since Pq is a part of P, as you said, I don’t want to focus too much on it. I also want to emphasize that P is your own personal experience, not any abstract “subject’s”. It’s the one that you can access directly.
Moving on, you say:
My impression is that others are conscious for the same reason I am. But my evidence for others’ consciousness is of a completely different nature than that of my own.
1). I would agree with your statement if you removed the word “completely”. Obviously, you know you are conscious, and you can experience P directly. However, you can also collect the same kind of data on yourself (or have someone, or some thing, do it for you) as you would on other people. For example, you could get your brain scanned, record your own voice and then play it back, install a sensor on your fridge that records your feeding habits, etc.; these are all real pieces of evidence that people are routinely collecting for practical purposes.
2). If you think that the above paragraph is true, then it would follow that you (probably) can collect some data on your own Q, as it would be experienced by someone else who is conscious (assuming, again, that you are not the only conscious being in the Universe, and that your own consciousness is not privileged in any cosmic way).
3). If you agree with that as well, then, assuming that we ever develop a good enough model of Q which would allow you to predict any person’s behavior with some useful degree of certainty, such a model would then be able to predict your own behavior with some useful degree of certainty. You could, for example, cover yourself with cameras and other sensors like a Christmas tree, start the model running on your home computer, then leave for work. And when you came back, you could verify that the model predicted your behavior that day more or less correctly (and if you doubt your powers of recall, which you should, then you could always play back the video).
4). If you agree that the above is possible, then we can go one step further. A good model of Q would not only predict what a person would do, but also what he would think; in fact, this model would probably have to do that anyway—since a person’s thoughts are the hidden states that influence his behavior, which the model is trying to predict in the first place. Thus, the model will be able to predict your own thoughts, as well as your actions. I think this addresses your point regarding “the arising of P in the first place”, above.
5). At this point, we have a model that can explain both your thoughts and your actions, and it does so solely based on external evidence. It seems like there’s nothing left for P to explain, since Q explains everything. Thus, P is a null concept; this is the “objective truth that this ‘bias’ is causing you to mentally deviate from”, which you asked about in your comment. That is, the “objective truth” is that P can be fully explained solely in terms of Q, even though it doesn’t feel like it could be.
I am pretty sure you disagree with the conclusion (5). Do you disagree with (1) through (4) as well, or do you disagree that (5) follows from (4), or both?
It is, for instance, the idea that 2+2=4 is an objective fact that is part of the universe itself rather than just a facet of how we happen to experience it.
Eeergh, that’s a whole other topic for a whole other thread...
It’s much like how you can never know for sure that you’re not dreaming: any test you can perform is a test you can dream. There’s no way out even in principle.
I also want to emphasize that P is your own personal experience, not any abstract “subject’s”. It’s the one that you can access directly.
Er… By “your”, do you mean to refer to me, personally? I’ll assume that’s what you meant unless you specify otherwise. Henceforth I am the subject! :-D
I would agree with your statement if you removed the word “completely”.
But that’s the crux! I know I’m conscious in a way that is so devastatingly self-evident that “evidence” to the contrary would render itself meaningless. But if some theory for P were developed that demonstrated that Q doesn’t exist, I wouldn’t view that theory as nonsensical. It’d be surprising, but not blatantly self-contradictory like a theory that says P doesn’t exist. I believe in Q for highly fallible reasons, but I believe in P for completely different reasons that don’t seem to be at all fallible to me. I deduce Q but I don’t deduce P.
(Although I wonder if we’re just spinning our wheels in the muck produced from a fuzzy word. If we both agree that P is self-evident while Q is deduced from Pq, perhaps there’s no disagreement...?)
Obviously, you know you are conscious, and you can experience P directly. However, you can also collect the same kind of data on yourself (or have someone, or some thing, do it for you) as you would on other people. For example, you could get your brain scanned, record your own voice and then play it back, install a sensor on your fridge that records your feeding habits, etc.; these are all real pieces of evidence that people are routinely collecting for practical purposes.
Agreed. Notice, though, that the only way I’m able to correlate this Q-like data with P is because I can see the results of, say, the brain scan and recognize that it pairs with a particular part of P. For instance, I can tell that a certain brain scan corresponds with when I’m mentally rehearsing a Mozart piece because I experienced the rehearsal when the brain scanning occurred. So P is still implicit in the data-collection and -interpretation process.
If you think that the above paragraph is true, then it would follow that you (probably) can collect some data on your own Q, as it would be experienced by someone else who is conscious (assuming, again, that you are not the only conscious being in the Universe, and that your own consciousness is not privileged in any cosmic way).
Mostly agreed. If others experience, then others experience. :-)
The main point at which I disagree is that P is privileged. There’s no such thing as a P-less perspective. But if we’re granting that others are actually conscious (i.e., that Q exists) and that we can switch subjects with a sort of P-transformation (i.e., we can grant that you have P and that within your P my consciousness is part of Q), then I think that might not be terribly important to your point. We can mimic strong objectivity by looking at those truths that remain invariant under such transformations.
If you agree with that as well, then, assuming that we ever develop a good enough model of Q which would allow you to predict any person’s behavior with some useful degree of certainty, such a model would then be able to predict your own behavior with some useful degree of certainty.
Hmm… “behavior” is being used in two different ways here. When we use our “theory of Q” to make predictions, what we’re doing is assuming that Q exists and is indicated by Pq, and then we make predictions about what happens to Pq under certain circumstances. On the other hand, when we look at my “behavior”, what we’re considering is my P in a wider scope going beyond just Pq. For instance, others claim that they see blue when we shine light of a wavelength of 450 nm into their functional eyes. When we shine such light into my eyes, I see blue. Those are two very different kinds of “behavior” from my perspective!
But presumably under the P-transformation mentioned earlier, other subjects actually do experience blue, too. So we’ll just go with this. :-)
If you agree that the above is possible, then we can go one step further. A good model of Q would not only predict what a person would do, but also what he would think...
I agree with what you elaborate upon after this. Since the “behavior” here is a kind of experience, I would include the experience of thinking in that. So yes, already granted.
At this point, we have a model that can explain both your thoughts and your actions, and it does so solely based on external evidence. It seems like there’s nothing left for P to explain, since Q explains everything.
I wonder if you arranged your sentence a little bit backwards...? I think you meant to say, “It seems like there’s nothing left of P to explain, since our theory of Q explains everything.” Is that what you meant?
If so, then sure. There’s a detail here I’m uneasy about, but I think it’s minor enough to ignore (rather than write three more paragraphs on!).
Thus, P is a null concept; this is the “objective truth that this ‘bias’ is causing you to mentally deviate from”, which you asked about in your comment. That is, the “objective truth” is that P can be fully explained solely in terms of Q, even though it doesn’t feel like it could be.
Hmm. You seem to be saying two different things here as though they’re the same thing. One I strongly disagree with, and the other I half-agree with.
The one I half-agree with is that based on the trajectory you describe, it seems we can describe P with the same brush we use to explain Q. The half I hesitate about is this claim that we can just equate P and Q. That’s the part that is to be explained! But perhaps something would arise in the process of elaborating on a theory of Q.
The part I totally disagree with is the claim that “P is a null concept”. Any theory that disregards P as a hallucination, or irrelevant, or a bias of any sort, is incoherent. I’ll grant that the impression that P is special could turn out to be a bias, but not P itself. And we can’t disregard the relevance of P. How would we ever gain evidence that P can be disregarded? Doesn’t that evidence have to come through P?
But I do agree:
We should be able to predict Pq with evidence that remains fixed under a P-transformation.
It seems easier and more consistent to assume that Pq points to an extant Q.
If Q exists, then under a P-transformation my experience (previously P) is part of Q.
Therefore, a full model of Pq should offer a kind of explanation of P.
But I still don’t see how this model actually connects P and Q. It just assumes that Q exists and that it’s a kind of P (i.e., that P-transformations make sense and are possible).
Eeergh, that’s a whole other topic for a whole other thread...
Fair enough!
It’s much like how you can never know for sure that you’re not dreaming: any test you can perform is a test you can dream. There’s no way out even in principle.
Why not just use Occam’s Razor ?
Because if you were dreaming, your idea of Occam’s Razor would be contained within the dream.
I’m reminded of some brilliant times I’ve tried to become lucid in my dreams. I look at an elephant standing in my living room and think, “Why is there an elephant in my living room? That’s awfully odd. Could I be dreaming? Well, if I were, this would be really strange without much of an explanation. But the elephant is here because I went to China and drank tea with a spoon. That makes sense, so clearly I’m not dreaming.”
So when you go through an analysis of whether the assumption that you’re awake yields shorter code in its description than the assumption that you’re dreaming does, how sure can you really be that you have any evidence at all that you’re not dreaming? Sure, you can resort to Bayesian analysis—but how do you know you didn’t just concoct that in your dream tonight and that it’s actually gibberish?
I think in the end it’s just not very pragmatically useful to suppose I’m dreaming, so I don’t worry too much about this most of the time (which might be part of why I’m not lucid in more of my dreams!). But if you really want to tackle the issue, you’re going to run into some pretty basic epistemic obstacles. How do you come to any conclusions at all when anything you think you know could have been completely fabricated in the last three seconds?
Er… By “your”, do you mean to refer to me, personally? I’ll assume that’s what you meant unless you specify otherwise.
Yep, that’s right. I’m just electrons in a circuit as far as you’re concerned ! :-)
I know I’m conscious in a way that is so devastatingly self-evident that “evidence” to the contrary would render itself meaningless. … If we both agree that P is self-evident while Q is deduced from Pq, perhaps there’s no disagreement...?
Sure, that makes sense, but I’m not trying to abolish P altogether. All I’m trying to do is establish that P and Q are the same thing (most likely), and thus the “Hard Problem of Consciousness” is a non-issue. Thus, I can agree with the last sentence in the quote above, but that probably isn’t worth much as far as our discussion is concerned.
For instance, I can tell that a certain brain scan corresponds with when I’m mentally rehearsing a Mozart piece because I experienced the rehearsal when the brain scanning occurred. So P is still implicit in the data-collection and -interpretation process.
I’m not sure how these two sentences are connected. Obviously, a perfect brain scan shouldn’t indicate that you’re mentally rehearsing Mozart when you are not, in fact, mentally rehearsing Mozart. But such a brain scan will work on anyone, not just you, so I’m not sure what you’re driving at.
I agree with what you elaborate upon after this. Since the “behavior” here is a kind of experience, I would include the experience of thinking in that.
When I used the word “behavior”, I actually had a much narrower definition in mind—i.e., “something that we and our instruments can observe”. So, brain scans would fit into this category, but also things like, “the subject answers ‘blue’ when we ask him what color this 450 nm light is”. I deliberately split up “what the test subject would say” from “what he will actually think and experience”. But it seems like you agree with both points, maybe:
I think you meant to say, “It seems like there’s nothing left of P to explain, since our theory of Q explains everything.” Is that what you meant?
Pretty much. What I meant was that, since our theory of Q explains everything, we gain nothing (intellectually speaking) by postulating that P and Q are different. Doing so would be similar to saying, “sure, the theory of gravity fully explains why the Earth doesn’t fall into the Sun, but there must also be invisible gnomes constantly pushing the Earth away to prevent that from happening”. Sure, the gnomes could exist, but there are lots of things that could exist...
The one I half-agree with is that based on the trajectory you describe, it seems we can describe P with the same brush we use to explain Q. The half I hesitate about is this claim that we can just equate P and Q.
If you agree with the first part, what are your reasons for disagreeing with the second ? To me, this sounds like saying, “sure, we can explain electricity with the same theory we use to explain magnetism, but that doesn’t mean that we can just equate electricity and magnetism”.
Maybe we disagree because of this:
Because if you were dreaming, your idea of Occam’s Razor would be contained within the dream.
Well, yeah, Occam’s Razor isn’t an oracle… It seems to me like we might have a fundamental disagreement about epistemology. You say “I think in the end it’s just not very pragmatically useful to suppose I’m dreaming, so I don’t worry too much about this most of the time”; I’m in total agreement there. But then, you say,
But if you really want to tackle the issue, you’re going to run into some pretty basic epistemic obstacles. How do you come to any conclusions at all when anything you think you know could have been completely fabricated in the last three seconds?
I personally don’t see any issues to tackle. Sure, I could be dreaming. I could also be insane, or a simulation, or a brain in a jar, or an infinite number of other things. But why should I care about these possibilities—not just “most of the time”, but at all ? If there’s no way, by definition, for me to tell whether I’m really, truly awake; and if I appear to be awake; then I’m going to go ahead and assume I’m awake after all. Otherwise, I might have to consider all of the alternatives simultaneously, and since there’s an infinite number of them, it would take a while.
It looks like you firmly disagree with the paragraph above, though I still can’t see why. It does, however, explain (if somewhat tangentially) why you believe that the “Hard Problem of Consciousness” is a legitimate problem, and why I do not.
You know, something clicked last night as I was falling asleep, and I realized why you’re right and where my confusion has been. But thanks for giving me something specific to work from! :-D
I think my argument can be summarized like so:
All data comes through P.
Therefore, all data about P comes through P.
All theories about P must be verified through data about P.
This means P is required to explain P.
Therefore, it doesn’t seem like there can be an explanation about P.
That last step is nuts. Here’s an analogy:
All (visual) data is seen.
Therefore, all (visual) data about how we see is seen.
All theories of vision must be verified through data about vision. (Let’s say we count only visual data. So we can use charts, but not the way an optic nerve feels to the touch.)
This means vision is required to explain vision.
Therefore, it doesn’t seem like there can be an explanation of vision.
The glaring problem is that explaining vision doesn’t render it retroactively useless for data-collection.
Thanks for giving me time to wrestle with this dumbth. Wrongness acknowledged. :-)
I’m not sure how these two sentences are connected. Obviously, a perfect brain scan shouldn’t indicate that you’re mentally rehearsing Mozart when you are not, in fact, mentally rehearsing Mozart. But such a brain scan will work on anyone, not just you, so I’m not sure what you’re driving at.
What I was driving at is that there’s no evidence that it corresponds to mentally rehearsing Mozart for anyone until I look at my own brain scan. All we can correlate the brain scans with is people’s reports of what they were doing. For instance, if my brain scan said I was rehearsing Mozart but I wasn’t, and yet I was inclined to report that I was, that would give me reason for concern.
The confusion here comes down to a point that I still think is true, but only because I think it’s tautological: From my point of view, my point of view is special. But I’m not sure what it would mean for this to be false, so I’m not sure there’s any additional information in this point—aside from maybe an emotional one (e.g., there’s a kind of emotional shift that occurs when I make the empathic shift and realize what something feels like from another person’s perspective rather than just my own).
What I meant was that, since our theory of Q explains everything, we gain nothing (intellectually speaking) by postulating that P and Q are different. Doing so would be similar to saying, “sure, the theory of gravity fully explains why the Earth doesn’t fall into the Sun, but there must also be invisible gnomes constantly pushing the Earth away to prevent that from happening”. Sure, the gnomes could exist, but there are lots of things that could exist...
Well, I do know that P exists, and I know that from my point of view P is extremely special. That’s not invisible gnomes; it’s just true. But saying “from my point of view P is extremely special” is tautological since P is my perspective. When something is a tautology, there’s nothing to explain. That’s why it’s hard to come up with an explanation for it. :-P
If you agree with the first part, what are your reasons for disagreeing with the second ? To me, this sounds like saying, “sure, we can explain electricity with the same theory we use to explain magnetism, but that doesn’t mean that we can just equate electricity and magnetism”.
I agree with you now.
Maybe we disagree because of this:
Because if you were dreaming, your idea of Occam’s Razor would be contained within the dream.
Oh, no no no! I didn’t mean to make a particularly big deal out of the possibility that we’re dreaming. I was trying to point out an analogous situation. There’s no plausible way to gather data in favor of the hypothesis that we’re not dreaming because the epistemology itself is entirely contained within the dream. I figured that might be easier to see than the point I was trying to make, which was the bit of balderdash that there’s no way to gather evidence in favor of P arising from something else because that evidence has to come through P. The arguments are somewhat analogous, only the one for dreaming works and the one for P doesn’t.
I personally don’t see any issues to tackle. Sure, I could be dreaming. I could also be insane, or a simulation, or a brain in a jar, or an infinite number of other things. But why should I care about these possibilities—not just “most of the time”, but at all ?
Two and a half points:
Again, this was meant to be an analogy. I wasn’t trying to argue that we can’t trust our data-collection process because we could be dreaming. I meant to offer a situation about dreaming that seemed analogous to the situation with consciousness. I was hoping to illustrate where the “hard” part of the hard problem of consciousness is by pointing out where the “hard” part in what I suppose we could call the “hard problem of dreaming” is.
This issue actually does become extremely pragmatic as soon as you start trying to practice lucid dreaming. The mind seems to default to assuming that whatever is being experienced is being experienced in a wakeful state, at least for most people. You have to challenge that to get to lucid dreaming. There have been many times where I’ve been totally sure I’m awake after asking myself if I’m dreaming, and have even done dream-tests like trying to read text and trying to fly, only to discover that all my testing and certainty was ultimately irrelevant because once I wake up, I can know with absurdly high probability that I was in fact dreaming.
Closely related to that second point is the fact that you know you dream regularly. In fact, there’s quite a bit of evidence to suggest that pretty much everyone dreams several times every night. Most people don’t go crazy, or discover that they’re brains in a jar, or whatever, every day. So if there’s a way that everything you know could be completely wrong, the possibility that you’re dreaming is much, much higher on the list of hypotheses than that, say, you have amnesia and are on the Star Trek holodeck. So picking out dreaming as a particular issue to be concerned about over the other possibilities isn’t really committing the fallacy of privileging the hypothesis. If we’re going to go with “You’re hallucinating everything you know,” the “You’re dreaming” hypothesis is a pretty darn good one to start with!
Again, though, I’m not trying to argue that we could be dreaming and therefore we can’t trust what we know. I was trying to point out an analogy which, upon reflection, doesn’t actually work.
All right, so it seems like we mostly agree now—cool !
I meant to offer a situation about dreaming that seemed analogous to the situation with consciousness.
Ok, I get it now, but I would still argue that we should assume we’re awake, until we have some evidence to the contrary; thus, the “hard problem of dreaming” is a non-issue. It looks like you might agree with me, somewhat:
This issue actually does become extremely pragmatic as soon as you start trying to practice lucid dreaming. The mind seems to default to assuming that whatever is being experienced is being experienced in a wakeful state, at least for most people. You have to challenge that to get to lucid dreaming.
In this situation, we assume that we’re awake a priori, and we are then deliberately trying to induce dreaming (which should be lucid, as well). So, we need a test that tells us whether we’ve succeeded or not. Thus, we need to develop some evidence-collecting techniques that work even when we’re asleep. This seems perfectly reasonable to me, but the setup is not analogous to your previous one—since we start out with the a priori assumption that we’re currently in the awake state; that we could transition to the dream state when we choose; and that there exists some evidence that will tell us which state we’re in. By contrast, the “hard problem of dreaming” scenario assumes that we don’t know which state we’re in, and that there’s no way to collect any relevant evidence at all.
All right, so it seems like we mostly agree now—cool !
Yep!
Rationality training: helping minds change since 2002. :-D
Ok, I get it now, but I would still argue that we should assume we’re awake, until we have some evidence to the contrary; thus, the “hard problem of dreaming” is a non-issue.
You’re coming at it from a philosophical angle, I think. I’m coming at it from a purely pragmatic one. Let’s say you’re dreaming right now. If you start with the assumption that you’re awake and then look for evidence to the contrary, typically the dream will accommodate your assumption and let you conclude you’re really awake. Even if your empirical tests conclusively show that you’re dreaming, dreams have a way of screwing with your reasoning process so that early assumptions don’t update on evidence.
For instance, a typical dream test is jumping up in the air and trying to stay there a bit longer than physics would allow. The goal, usually, is flight. I commonly find that if I jump into the air and then hang there for just a little itty bitty bit longer than physics would allow, I think something like, “Oh, that was barely longer than possible. So I must not be quite dreaming.” That makes absolutely no sense at all, but it’s worth bearing in mind that you typically don’t have your whole mind available to you when you’re trying to become lucid. (You might once you are lucid, but that’s not terribly useful, is it?)
In this case, you have to be really, insanely careful not to jump to the conclusion that you’re awake. If you think you’re awake, you have to pause and ask yourself, “Well, is there any way I could be mistaken?” Otherwise your stupid dreaming self will just go along with the plot and ignore the floating pink elephants passing through your living room walls. This means that when you’re working on lucid dreaming, it usually pays to recognize that you could be dreaming and can never actually prove conclusively that you’re awake.
But I agree with you in all cases where lucid dreaming isn’t of interest. :-)
You’re coming at it from a philosophical angle, I think. I’m coming at it from a purely pragmatic one.
That’s funny, I was about to say the same thing, only about yourself instead of me. But I think I see where you’re coming from:
If you start with the assumption that you’re awake and then look for evidence to the contrary, typically the dream will accommodate your assumption and let you conclude you’re really awake… it’s worth bearing in mind that you typically don’t have your whole mind available to you when you’re trying to become lucid.
So, your primary goal (in this specific case) is not to gain any new insights about epistemology or consciousness or whatever, but to develop a useful skill: lucid dreaming. In this case, yes, your assumptions make perfect sense, since you must correct for an incredibly strong built-in bias that only surfaces while you’re dreaming. That makes sense.
Basically, try as I might, I can’t think of any piece of evidence that would let you distinguish between a being—other than yourself—who is conscious and experiences qualia, and a being who pretends to be conscious with perfect fidelity, but does not in fact experience qualia.
As I discussed here—see also this comment for clarification—we should in theory be able to discover if other beings have qualia if we were to learn about their brains in such microscopic detail that we are performing approximately the same computations in our brains that their brains are running; we then “get their qualia” first-hand.
As for arguing about qualia verbally, I hold qualia to be both entirely indefinable (implying that the concept is irreducible, if it exists) and something that the vast majority of humans apprehend directly and believe very strongly to exist. There is little to be gained by arguing about whether qualia exist, because of this problem—the best that can be achieved through argument is that both of you accept the consensus regarding the existence of this indefinable thing that nonetheless needs to be given a name.
Ok, I read your article as well as your comment, and found them very confusing. More on this in a minute.
As for arguing about qualia verbally, I hold qualia to be both entirely indefinable...
How is that different from saying, “I find qualia to be a meaningless concept” ? I may as well say, “I think that human consciousness can best be explained by asdfgh, where asdfgh is an indefinable concept”. That’s not much of an explanation. In addition, this makes it impossible to discuss qualia at all (with anyone other than yourself, that is), which once again hints at a kind of solipsism.
...and something that the vast majority of humans apprehend directly and believe very strongly to exist.
This is weak evidence at best. The vast majority of humans apprehend all kinds of stuff directly (or so they believe), including gods, demons, honest politicians, etc. At least some of these things have a very low probability of existing, so how are qualia any different ? In addition, regardless of what the vast majority of people believe, I personally disagree with this “consensus regarding the existence of this indefinable thing”, so you’ll need to convince me some way other than by stating the consensus.
Note that I agree with the statement, “humans appear to act as though they believe that they experience things, just as I do”—a statement which we may reduce to something like, “humans experience things” (with the usual understanding that there’s some non-zero probability of this being false). I just don’t see why we need a special name for these experiences, and why we have to treat them any differently from anything else that humans do (or that rocks do, for that matter).
Which brings me back to your article (and comment). In it, you describe qualia as being indefinable. You then proceed to discuss them at great length, which means that you must have some sort of a definition in mind, or else your article would be meaningless (or perhaps it would be meaningless to everyone other than yourself, which isn’t much better). Your central argument appears to rest on the assumption that qualia are irreducible, but I still don’t understand why you’d assume that in the first place.
In short, qualia appear to be a “mysterious answer to a mysterious question”: they are impossible to define, irreducible, and totally inexplicable—and thus impossible to study or even discuss. They are a kind of élan vital, and therefore not terribly useful as a concept.
Ok, that makes sense. I understand now that this is what you believe, but I still don’t see why. You say:
This, to me, sounds like a circular argument at worst, and a circular analogy (if there is such a thing) at best. You are trying to illustrate your belief that qualia are categorically different from visual perception (just f.ex.), by introducing a computer which possesses visual perception but not qualia, because, due to the qualia being so different from visual perception, there is no way to grant qualia to the computer even in principle. So, “qualia are hard because qualia are hard”, which is a tautology. Your next paragraph makes a lot more sense to me:
I think that, if you go this route, you arrive at a kind of solipsism. You know for a fact that you personally have a consciousness, but you don’t know this about anyone else, myself included. You can only infer that other beings are conscious based on their behavior. Ok, to be fair, the fact that they are biologically human and therefore possess the same kind of a brain that you do can count as supporting evidence; but I don’t know if you want to go that route (Searle does, AFAIK). Anyway, let’s assume that your main criterion for judging whether anyone else besides yourself is conscious is their behavior (if that’s not the case, I can offer some arguments for why it should be), and that you reject the solipsistic proposition that you are the only conscious being around (ditto). In this case, a perfect sleepwalker or a qualia-less computer that perfectly simulates having qualia, etc., is actually less parsimonious than the alternative, and therefore the concept of qualia buys you nothing (assuming that dualism is false, as always). And then, the “hard question” becomes one of those “mysterious questions” to which you could give a “mysterious answer”, as per the Sequences.
I’d actually read that page earlier, and it (along with associated links) seemed to imply that either dualism offers the best answer to the “hard question”, or the “hard question” is meaningless as per Dennett—which is why I took the time to slam dualism in my previous posts.
Darn, again, I’m sorry. But nevertheless, I think it’s a good thought experiment.
Mmm. Yes, I think you’re right. As I’ve chewed on this, I’ve come to wonder if that’s part of where I’ve been getting the impression that there’s a hard problem in the first place. As I’ve tried to reduce the question enough to notice where reduction seems to fail or at least get a bit lost, my confusion confuses me. I don’t know if that’s progress, but at least it’s different!
I’m afraid I’m a bit slow on the uptake here. Why does this require solipsism? I agree that you can go there with a discussion of consciousness, but I’m not sure how it’s necessarily tied into the fact that consciousness is how you know there’s a question in the first place. Could you explain that a bit more?
Well… Yes, I think I agree in spirit. The term “behavior” is a bit fuzzy in an important way, because a lot of the impression I have that others are conscious comes from a perception that, as far as I can tell, is every bit as basic as my ability to identify a chair by sight. I don’t see a crying person and consciously deduce sadness; the sadness seems self-evident to me. Similarly, I sometimes just get a “feel” for what someone’s emotional state is without really being able to pinpoint why I get that impression. But as long as we’re talking about a generalized sense of “behavior” that includes cues that go unnoticed by the conscious mind, then sure!
It’s not a matter of what qualia buy you. The oddity is that they’re there at all, in anything. I think you’re pointing out that it’d be very odd to have a quale-free but otherwise perfect simulation of a human mind. I agree, that would be odd. But what’s even more odd is that even though we can be extremely confident that there’s some mechanism that goes from firing neurons to qualia, we have no clue what it could be. Not just that we don’t yet know what it is, but as far as I know we don’t know what could possibly play the role of such a mechanism.
It’s almost as though we’re in the position of early 19th century natural philosophers who are trying to make sense of magnetism: “Surely, objects can’t act at a distance without a medium, so there must be some kind of stuff going on between the magnets to pull them toward one another.” Sure, that’s close enough, but if you focus on building more and more powerful microscopes to try to find that medium, you’ll be SOL. The problem in this context is that there are some hidden assumptions that are being brought to bear on the question of what magnetism is that keep us from asking the right questions.
Mind you, I don’t know if understanding consciousness will actually turn out to yield that much of a shift in our understanding of the human mind. But it does seem to be slippery in much the same way that magnetism from a billiard-balls-colliding perspective was, as I understand it. I suspect in the end consciousness will turn out to be no more mysterious than magnetism, and we’ll be quite capable of building conscious machines someday.
In case this adds some clarity: My personal best proto-guess is that consciousness is a fuzzy term that applies to both (a) the coordination of various parts of the mind, including sensory input and our sense of social relationships; and (b) the internal narrative that accompanies (a). If this fuzzily stated guess is in the right ballpark, then the reason consciousness seems like such a hard problem is that we can’t ever pin down a part of the brain that is the “seat of consciousness”, nor can we ever say exactly when a signal from the optic nerve turns into vision. Similarly, we can’t just “remove consciousness”, although we can remove parts of it (e.g., cutting out the narrator or messing with the coordination, as in meditation or alcohol).
I wouldn’t be at all surprised if this guess were totally bollocks. But hopefully that gives you some idea of what I’m guessing the end result of solving the consciousness riddle might look like.
Well, there’s exactly one being in existence that you know for sure is conscious and experiences qualia: yourself. You suspect that other beings (such as myself) are conscious as well, based on available evidence, though you can’t be sure. This, by itself, is not a problem. What evidence could you use, though? Here are some options.
You could say, “I think other humans are conscious because they have the same kind of brains that I do”, but then you’d have to exclude other potentially conscious beings, such as aliens, uploaded humans, etc., and I’m not sure if you want to go that route (let me know if you do). In addition, it’s still possible that any given human is not a human at all, but one of those perfect emulator-androids, so this doesn’t buy you much.
You could put the human under a brain scanner, and demonstrate that his brain states are similar to your own brain states, which you have identified as contributing to consciousness. If you could do that, though, then you would’ve reduced consciousness down to physical brain states, and the problem would be solved, and we wouldn’t be having this conversation (though you’d still have a problem with aliens and uploaded humans and such).
You could also observe the human’s behavior, and say, “this person behaves exactly as though he were conscious, therefore I’m going to assume that he is, until proven otherwise”. However, since you postulate the existence of androids/zombies/etc. that emulate consciousness perfectly without experiencing anything, you can’t rely on behavior, either.
Basically, try as I might, I can’t think of any piece of evidence that would let you distinguish between a being—other than yourself—who is conscious and experiences qualia, and a being who pretends to be conscious with perfect fidelity, but does not in fact experience qualia. I don’t think that such evidence could even exist, given the existence of perfect zombies (since they would be imperfect if such evidence existed). Thus, you are forced to conclude that the only being who is conscious is yourself, which is a kind of solipsism (though not the classic, existential kind).
It seems like we agree on this point, then—yay! Of course, I would go one step further, and argue that there’s nothing special about our subconscious mind. We know how some parts of it work, we have mapped them down to physical areas of the brain, and our maps are getting better every day.
I don’t just think it would be odd, I think it would be logically inconsistent, as long as you’re willing to assume that people other than yourself are, in fact, conscious. If you’re not willing to assume that, then you arrive at a kind of solipsism, which has its own problems.
Right, which is why I reject the existence of qualia as an independent entity altogether. As per your magnetism analogy:
Right, and the problem here is not that your microscopes aren’t powerful enough, but that your very idea of a magnetic attraction medium is flawed. In reality, there are (probably) no such things as “magnets” at all; there are just collections of waveforms of various kinds (again, probably). You choose to call some of them “magnets” and some others “apples”, but those words are just grossly simplified abstractions that you have created in order to talk about the world—because if you had to describe every single quark of it, you’d never get anywhere.
Similarly, “qualia” and “consciousness” are just abstractions that you’ve created in order to talk about human brains—including your own brain. I understand that you can observe your own consciousness “from the inside”, which is not true of magnets, but I don’t see this as an especially interesting fact. After all, you can observe gravity “from the inside”, as well (your body is heavy, and tends to fall down a lot), but that doesn’t mean that your own gravity is somehow different from my gravity, or a rock’s gravity, because as far as gravity is concerned, you aren’t special.
I don’t think that we need to necessarily pin down a single part of the brain that is the “seat of consciousness”. We can’t pin down a single part that constitutes the “seat of vision”, either, but human vision is nonetheless fairly well understood by now. The signal from the optic nerve is just part of the larger mechanism which includes the retina, the optic nerve, the visual cortex, and ultimately a large portion of the brain. There’s no point at which electrochemical signals turn into vision, because these signals are a part of vision. Similarly, there isn’t a single “seat of blood flow” within the human body, but blood flow is likewise fairly well understood.
I’m not sure I follow your reasoning here. What do you mean by “removing consciousness” and “cutting out the narrator”, and why is it important? Drunk (or meditating) people are still conscious, after a fashion.
Ah! Okay. Three points:
I think you’re arguing for something I agree with anyway. I don’t think of qualia as being inherently independent of everything else. I think of qualia as self-evident. I don’t think my experience of green can be entirely separated from the physical process of perceiving light of a certain wavelength, but I do think it’s fair to say that I’m conscious of the green color of the “Help” link below this text box.
Even if I did think qualia were divisible from the physical processes involved in perception (which I think would force dualism), I wouldn’t be able to conclude that I’m the only one who is conscious. I would have to conclude that as far as I currently know, I have no way of knowing who else is or isn’t conscious. So solipsism would then be a possibility, but not a logical necessity.
I’m not arguing that p-zombies can exist. I seriously doubt they can. If this is a point you’ve been trying to argue me into agreeing with, please note that we started out agreeing in the first place!
Er… Except that we’re not conscious of it! I’d say that’s pretty special—as long as we agree that “special” means “different” rather than “mysterious”.
Sorry, I meant “odd” in the artistically understated sense. We agree on this.
So here, I think, is a source of our miscommunication. I also reject qualia as being independent.
I think part of the problem we’re running into here is that by naming qualia as nouns and talking about whether it’s possible to add or remove them, we’ve inadvertently employed our parietal cortices to make sense of conscious experience. It’s like how people talk about “government” as though it’s a person when, really, they’re just reifying complex social behavior (and as a result often hiding a lot of complexity from themselves).
“Quale” is a name that has been, sadly, agreed upon to capture the experience of blueness, or the sense of a melody, or what-have-you. We needed some kind of word to distinguish these components of conscious experience from the physical mechanisms of perception because there is a difference, just like there’s a difference between a software program and the physical processes that result in the program running. Yes, as far as the universe is concerned, it’s just quarks quarking about. But just like it’s helpful to talk about chairs and doors, it’s helpful to talk about qualia in order to understand what our experience consists of.
I suspect in the future we’ll be able to agree that “qualia” was actually a really bad term to use, with the benefit of hindsight. I suspect consciousness will turn out to be a reification, and thus talking about its components as though they’re things just throws us off the track and creates confusion in the guise of a mystery. But even if we dump the term “qualia”, we’re still stuck with the fact that we experience, and there’s a qualitative sense in which experience doesn’t seem like it’s even in-principle describable in terms of firing neurons. If you told me that it was discovered that there’s actually a region of the brain that’s responsible for adding qualia to vision (pardoning the horrid implicit metaphor), I would feel like hardly anything had been explained. So you found circuitry that, when monkeyed with, makes all yellow vanish from my conscious awareness. But how did yellow appear in the first place, as opposed to being just neuronal signals bouncing around? Pointing to a region of the brain and saying “That does it” still leaves me baffled as to how. I don’t see how explaining the circuitry of that brain region in perfect synapse-level detail could answer that question.
However, I could totally see consciousness turning out to have this “hard problem” because it’s like trying to describe where Mario is in terms of the transistors in a game console.
On this point, I think we might just be frozen in disagreement. You seem to be taking as practically axiomatic that there’s nothing significantly different about consciousness as compared to anything else, like gravity. To me, that view of consciousness is internally incoherent. You can make sense of gravity as an outside observer, but you can’t make sense of your own consciousness as an outside observer. That’s hugely relevant for any attempt to approach consciousness with the same empirical eye as used on gravity, or magnetism, or any other physical phenomenon. We can look at those phenomena from a position that largely doesn’t interact with them in a relevant way, but I cannot fathom a comparable place to stand in order to be conscious of consciousness while not interacting with it.
This is not to say that consciousness is intrinsically more mysterious than gravity. I’m just utterly dumbfounded that you can think that your ability to be aware of anything is somehow no more interesting than any other random phenomenon in the universe.
I don’t think so either.
...
We seem to keep doing this. I agree, because that’s part of the point I was making.
Removing consciousness is exactly the process that would turn a person into a p-zombie, yes? So what I’ve suggested as a general direction to consider for how consciousness appears passes the sanity test of not allowing p-zombies.
As for the narrator… Well, you know how there’s a kind of running commentary going on in your mind? It’s possible to stop that narration, and if you do so it changes the quality of consciousness by quite a lot.
Meditation, alcohol, and quite a number of other things can all monkey with the way parts of the mind coordinate and also get the narrator to stop narrating (or at least not become an implicit center of attention anymore). And I’m not claiming that doing these things removes consciousness. Quite the opposite, I’m pointing out that drunk and meditating people have a different kind of conscious experience.
True, but you can carry the reasoning one step further. The claim “other people are conscious” is a positive claim. As such, it requires positive evidence (unless it’s logically necessary, which in this case it’s not). If your concept of qualia/consciousness precludes the possibility of evidence, you’d be justified in rejecting the claim.
Fair enough.
Well, it depends on what you mean by “perception”. If you mean, for example, “light hitting my retina and producing a signal in my optic nerve”, then yes, experience is different—because the aforementioned process is a component of it. The overall process of experience involves your visual cortex, and ultimately your entire brain, and there’s a lot more stuff that goes on in there.
Hmm, I don’t know, is there such a difference? As far as I understand, when Firefox is running, we can (plus or minus some engineering constraints) reduce its functionality down to the individual electrons inside the integrated circuits of my computer (plus or minus some quantum physics constraints). Where does the difference come in?
I lack this sense, apparently :-(
As it happens, there’s a real neurological phenomenon called “blindsight” which is similar to what you’re describing. It’s relatively well understood (AFAIK), and, in this specific case, we can indeed point to a specific region of the brain that causes it. So, at least in case of vision, we can actually map the presence or absence of conscious visual experience to a specific area of the brain. I suspect that there are scientists who are even now busily pursuing further explanations.
The word “axiomatic” is perhaps too strong of a word. I just don’t think that it’s possible to treat consciousness as being categorically different from other phenomena, such as gravity, while still maintaining a logically and epistemically (if that’s a word) consistent, non-solipsistic worldview.
Ok, let me temporarily grant you this premise. What about the consciousness of other people? Can I make sense of those consciousnesses as an outside observer? If the answer is “no”, then consciousness becomes totally mysterious, because I can only observe other people’s consciousness from the outside. If the answer is “yes”, then you end up saying, “my own consciousness is categorically different from anyone else’s”, which seems unlikely to be true, since you’re just a regular human like the rest of us.
I agree, but I don’t think this means that you can’t “make sense” of your consciousness regardless. In a way, this entire site is a toolkit for making sense of your own consciousness—specifically, its biases—and for using this understanding to alter it.
Ah, ok, I get it, and I agree, but I’m still not sure how this relates to the point you’re making. If anything, it offers tangential evidence against it—because the existence of a relatively simple physical mechanism (such as alcohol) that can alter your consciousness points the way to reducing your own consciousness down to a collection of strictly physical interactions.
You know, I think we’re getting lost in the little details here, and we keep communicating past one another.
First, let me emphasize that I do think we’ll eventually be able to explain consciousness in a reductionist way. I’ve tried to make that clear, but some of your arguments make me wonder if I’ve failed to convey that.
Second, remember that this whole discussion arose because you questioned the value of trying to answer the hard problem of consciousness. I now suspect what you originally meant was that you don’t think there is a hard problem, so there wasn’t anything to answer. And in an ultimate sense, I think you’re right: I think people like Thomas Nagel are trying to argue that we need a complete paradigm shift in order to explain how qualia exist, and I think they’re wrong. Eventually it almost certainly comes down to brain behavior. Even if it’s not clear what that pathway could be, that’s a description of human creativity and not of the intrinsic mysteriousness of the phenomenon.
But what you said was this:
This, to me, really sounds like you’re saying we can’t detect qualia, so we might as well assume there are no qualia, so we shouldn’t worry about how qualia arise. Maybe that wasn’t your point. But if it was, I stand in firm disagreement because I think that qualia are the only things we can care about!
For some reason I can’t seem to convey why I think that. I feel rather like I’m pointing at the sun and saying “Look! Light!” and you’re responding with “We don’t have a way of detecting the light, so we might as well assume it isn’t there.” (Please excuse the flaw in the analogy in that we can detect light. Pretend for the moment that we can’t.) All I can do is blink stupidly and point again at the sun. If I can’t get you to acknowledge that you, too, can see, then no amount of argumentation is going to get the point across.
So all I’m left with is an insistence that if my understanding of the universe is completely off and it turns out to be possible to remove conscious experience from people, I most certainly would not want that done to me—not that I could care afterwards, but I absolutely would care beforehand! So to me, the presence or absence of qualia matters a lot.
But if you cannot relate to that at all, I don’t think I’ll ever be able to convey why I feel that way. I’m completely at a loss as to how this could possibly be a topic of disagreement.
Sorry, you’re right, I tend to do that a lot :-(
That’s correct, I think; though obviously I’m all for acquiring a better understanding of consciousness.
I think it’s not entirely clear what that pathway is, but there are some very good clues regarding what that pathway could be, since certain aspects of consciousness (such as vision, f.ex.) are reasonably well understood.
Pretty much, but I think we should make a distinction between a person’s own qualia, as experienced by the person, and the qualia of other people, from the point of view of that same person. Let’s call the person’s own qualia “P” and everyone else’s qualia (from the point of view of the person) “Q”.
Obviously, each person individually can detect P. Until some sort of telepathy gets developed (assuming that such a thing is possible in principle), no person can detect Q (at least, not directly).
You seem to be saying—and I could be wrong about this, so I apologize in advance if that’s the case—that, in order to build a general theory of consciousness, we need to figure out a way to study P in an objective way. This is hard (I would say, impossible), since P is by its nature subjective, and thus inaccessible to anyone other than yourself.
I, on the other hand, am arguing that a general theory of consciousness can be built based solely on the same kind of evidence that compels us to believe that other people experience things—i.e., that Q exists and is reducible to brain states. Let’s say that we built some sort of a statistical model of consciousness. We can estimate (with a reasonably high degree of certainty) what any given person will experience in any situation, by using this model and plugging in a whole bunch of parameters (representing the person and the situation). I think you would agree that such a model can, in principle, exist (though please correct me if I’m wrong). Then, would you agree that this model can also predict what you, yourself, will experience in a given situation? If not, then why not? If yes, then how is P any different from Q?
I agree, but I believe that removing a person’s consciousness will necessarily alter his behavior; in most cases, this alteration would be quite drastic. Thus, I definitely wouldn’t want this done to me, or to anyone else, for that matter.
However, I think you are contemplating a situation where we remove a person’s consciousness, and yet his behavior (which includes talking about his consciousness) remains exactly the same. I argue that, if such a thing is possible, then consciousness is a null concept, since it has literally no effect on anything we could ever detect. As far as I understand, you agree with me with respect to Q, but disagree with respect to P. But then, you must necessarily believe that P is categorically different from Q, somehow… mustn’t you?
If you do believe this, then you must also believe that any model of consciousness that we could possibly build will work correctly for everyone other than yourself. This seems highly unlikely to me, however—what makes you such an outlier? You are a human like the rest of us, after all. And if you are not an outlier, and yet you believe that the model won’t function for you, then you must believe that such a model cannot be built in principle (i.e., it won’t function for anyone else, either), and yet I think you would deny this. As I see it, the only way to reconcile these contradictions is to reject the idea that P is categorically different from Q, and thus there’s nothing special about your own qualia, and thus the problem of consciousness isn’t any harder than the problem of, say, unifying gravity with the other fundamental forces (which is pretty hard, admittedly).
Apparently my reply is “too long”, so I’ll reply in two parts.
PART 1:
Hey, apparently I do too!
Excellent.
Um… Sure, let’s go with that. There’s a nuance here that’s disregarding the hard problem, but I don’t think we’ll get much mileage repeating the same kind of detail-focusing we’ve been doing. :-P
Sure, agreed.
I should warn you, though, that I’m not sure that this distinction is coherent. There’s some reason to suspect that our perception of others as conscious is part of how we construct our sense of self. So, it might not make sense to talk about “my” conscious experience as distinct from “your” conscious experience as though we start with a self and then grant it consciousness. It might be the other way around.
I emphasize this because explaining Q without ever touching P might not tell us much about P. If we start with conscious experience and then define the line between “my” experience and “others’” experience by the distinction between P and Q, all we do by detailing Q is explain our impression that others are conscious. We might think we’re addressing others’ P, but we never actually address our P (which, it seems, is the only P we can ever have access to—which might be because we define “me” in part by “that which has access to P” and “not me” by “that which doesn’t have access to P”).
So with that warning, I’ll just run with the intuitive distinction between P and Q that I believe you’re suggesting.
I agree, and I would go just a little bit farther: I would argue that it’s not possible even in principle to detect Q as a kind of P. If I experience another person’s experience from a first-person perspective, it’s not their experience anymore. It’s mine. Sure, we might share it, like two people watching the same movie. But the P I have access to is still my own, and the Q that I’m supposedly accessing as a kind of P is still removed: I still have to assume that the person sitting next to me is also experiencing the movie.
Yeah, I think that’s a reasonably fair summary. :-)
I agree with you on this. I just think it’s important to recognize that what we will have explained is our impression that others are conscious. That might give us insight into P, and it seems implausible that it wouldn’t, but it also doesn’t seem clear what kind of mechanism it could possibly reveal for P. At least to me!
Yes, I agree.
I’m going to go with “maybe”, which I think requires me to answer both the “yes” and “no” branches. :-P
I think it’s certainly plausible that this model of Q could predict the behavior of P. But it needn’t do so. Why not? Because P and Q are different for precisely the reason that we gave them different names. I’m under the impression that my wife is conscious as a sort of immediate perception; surely I deduce it somehow, probably by my perception of her as a social entity with whom I could in principle interact, but that isn’t how it seems to me. I just see her as conscious. So when we explore my perception of her as conscious and we develop a thorough model of her consciousness as perceived by me (and others), what that model does is predict how our perception of her conscious experience changes.
But it requires an extra step to say that if I were her, I would be experiencing those changes as P.
Now, I suspect that this model would work out just fine. I suspect that when we determine that we’ve modeled Q, that the model of Q will predict my P. (I see this in the Enneagram all the time, in fact: it describes others’ experiences, and when I spell out their experiences they often give an “I’ve been caught!” kind of reaction. When someone does the same to me, I sure feel caught!) After all, part of the impression I get of Q comes from the fact that I know that I would react the way the other is reacting if I were to experience X, which draws me to think that they’re experiencing X. So for it to fail to model P, it seems likely that I’d have to react in a way that I would not recognize from the outside (assuming experiencing my own P as Q can be turned into a coherent idea). That seems like it’d be pretty weird.
But we’re still left with the fact that the application of the theory to Q feels tremendously different than its application to P. The fact that the model is attempting to explain in part why P and Q are different in the first place makes it difficult for me to see how an explanation of Q alone is going to do it. It feels as though its ability to capture P would be almost coincidental.
(continued...)
PART 2:
Yep. I believe that’s Eliezer’s argument (the “anti-zombie principle” I think it was called), and I agree. That’s why I prefaced it with saying that my understanding of the universe would have to be pretty far off in order for my self-zombification to even be possible. So, given the highly improbable event that p-zombies are possible, I sure wouldn’t want to become one! Ergo, my own qualia matter a great deal to me regardless of anyone else’s ability to detect them.
...
I’m not sure what it would mean for me to agree in terms of Q but not P. I’m not quite sure what you’re suggesting I’m saying. So maybe you’re right, but I honestly don’t know!
Mmm… I’m not saying that I, personally, am special. I’m saying that an experiencing subject is special from the point of view of the experiencing subject, precisely because P is not the same as Q. It so happens that I’m an experiencing subject, so from my point of view my perspective is extremely special.
Remember that science doesn’t discover anything at all. Scientists do. Scientists explore natural phenomena and run experiments and experience the results and come to conclusions. So it’s not that exploring Q would just happen and then a model emerges from the mist. Instead, people explore Q and people develop a model that people can see predicts their impressions of Q. That’s what empiricism means!
I emphasize this because every description is always from some point of view. For most phenomena, we’ve found a way to take a point of view that doesn’t make the difference between P and Q all that relevant. A passive-voice description of gravity seems to hold from both P and Q, for instance. But when we’re trying to explore what makes P and Q different, we can’t start by modulating their difference. We have to decide what the point of view we’re taking is, and since part of what we’re studying is the phenomenon of there being points of view in the first place, that decision is going to matter a lot.
I think that if a model of Q fails to inform us about P, then it will fail for P regardless of whose perspective we take.
However, I suspect that a good model of Q will tell us pretty much everything about P. I just can’t fathom at this point how it might do so.
Well, part of the problem is that we know P is categorically different than Q. Or rather, I know my P is categorically different than Q, and if Q is going to have any fidelity, everyone else will be under the same impression from their own points of view.
I can guarantee that any model that claims I don’t have conscious experience is flat-out wrong. This is perhaps the only thing I’d be willing to say has a probability of 1 of being true. I might discover that I’m not experiencing what I thought I was, but the fact that I’m under the impression of seeing these words, for instance, is something for which I believe it is not possible even in principle to provide me evidence against. (Yes, I know how strong a claim that is. I suppose that since I’m open to having this perspective challenged, I should still assign a probability of less than 1 to it. But if anything deserves a probability of 1 of being true, I’d say the fact that there is P-type experience is it!)
However, I can’t make a claim like that about Q. I’m certainly under the impression that my wife is conscious, but maybe she’s not. Maybe she doesn’t have P-type experience. I don’t know how I could discover that, but if it were possible to discover it and it turned out that she were not conscious, I wouldn’t view that as a contradiction in terms. It would just accent the difference between P-type experience and my impression of Q-type experience. Getting evidence for my wife not being conscious doesn’t seem to violate what it means for something to be evidence the way “evidence” against my own consciousness would be.
I’m oversimplifying somewhat since consciousness almost certainly isn’t a “yes” or “no” thing. Buddhists often claim that P-type consciousness can be made “more conscious” through mindfulness, and that once you’ve developed somewhat in that direction you’ll be able to look back and consider your past self to not have been “truly” conscious. However, the point I’m trying to make here is that we actually start with the immediate fact that P is different than Q, and it’s upon this foundation that empiricism is built. We can’t then turn around and deny the difference from an empirical point of view!
However, in spirit I think I agree with you. I think we’ll end up understanding P through Q. I don’t see how since I don’t see how to connect the two empirically even in principle. But science has surprised philosophers for three hundred years, so why stop now? :-D
Bah! Curse you, machine overlords! shakes fist
I did not mean to imply that. In fact, I agree with you in principle when you say,
Sure, it might be, or something else might be the case; my P and Q categories were meant to be purely descriptive, not explanatory. Your conscious experience, of whose existence you are certain, and which you are experiencing at this very minute, is P. Other people’s conscious experience, whose existence you can never personally experience, but can only infer based on available evidence, intuition, or whatever, is Q. That’s all I meant. Thus, when you say, “...we might think we’re addressing others’ P, but we never actually address our P”, you are confusing the terminology; there’s no such thing as “other people’s P”, there’s only P and Q. You may suspect that other people have conscious experiences, but the best you can do is lump them into Q.
You move on to say several things which, I believe, reinforce my argument (my apologies if I seem to be quote-mining you out of context; please let me know if I’ve done so by accident):
You appear to be very committed to the idea that your own experience is categorically different from anyone else’s, and that a general model of consciousness—assuming it was even possible to create such a thing—may not tell you anything about your own experience. The problem with this statement, though, is that there exists one, and only one, “experiencing subject” in this Universe: yourself. As I said above, you suspect that other people (such as your wife, for example) are experiencing things, but you aren’t sure of it; and you don’t know if they experience things the same way that you do, or whether it even makes sense to ask that latter question. There are two possible corollaries to this fact (well, there are two that I can think of):
1). Other people in this world are categorically similar to yourself, and thus a general model of consciousness can never be developed, in principle, because such a model will fail to predict P, as seen from the point of view of every person individually. Thus, consciousness is completely mysterious and inexplicable.
2). You are special. A general model of consciousness can be developed, but it will work for everyone other than yourself, specifically.
Option #2 is solipsism. Option #1 may seem attractive on the surface, but it contradicts the fact that we do have models of consciousness which work quite well—they are employed by psychologists, advertisers, political speech writers, and even computer scientists, f.ex. when they build things like HDR photo rendering or addictive Facebook games. One way to dodge this contradiction would be to say,
3). The models of consciousness that we currently possess do not actually model consciousness; they just model behavior. Consciousness is not correlated with behavior in any significant way.
Option #3, however, puts you on the road to discarding consciousness altogether as a null concept.
I can’t think of any way to resolve these contradictions, other than to posit that there’s nothing special about your own consciousness. Sure, it feels special in a truly visceral way, but there are lots of things we feel that aren’t actually true: the Earth is not flat, the stars are really huge and really hot, but very far away; choosing a different door in the Monty Hall scenario is the correct choice, etc. Thus, I disagree with you when you say,
Empiricism is based on the foundation of avoiding cognitive biases, and I am inclined to treat the (admittedly, very strong) intuition that I am very special as just another kind of a cognitive bias. And while it is true that “...people explore Q and people develop a model that people can see predicts their impressions of Q...”, I don’t see why this is important. Why does it matter who (or what) came up with the model ? Doesn’t the predictive power of the model (or lack thereof) matter much, much more ?
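(Incidentally, the Monty Hall claim above is exactly the sort of thing one can check empirically rather than trusting intuition. Here is a minimal simulation sketch in Python; the function name and trial count are arbitrary choices of mine, not anything from our discussion:)

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game and return the contestant's win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial pick
        # Host opens some door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # roughly 2/3
print(monty_hall(switch=False))  # roughly 1/3
```

Switching wins about two times in three; staying wins about one time in three, despite the visceral feeling that it shouldn’t matter.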
It’s nice to see this discussion converging! I was afraid we’d get mired in confusing language forever and have to give up at some point. :-(
:-D
Ah, okay. I thought you meant, “Given a subject, that subject’s experience is P, and others’ is Q.” The above distinction seems more coherent.
Let’s do away with possessive pronouns when referring to P and Q, then. We’ll say P is phenomenal experience (what I’m tempted to call “my experience” but am explicitly avoiding assigning to a particular subject since my sense of myself as a subject might well arise from the existence of P), and Q is the part of P that gives the impression that we describe as “Others seem to be conscious.” I think we can agree that those two phenomena are different, even if Q seems to be a part of P. (I have a hard time conceiving of a kind of experience that’s not part of P, for that matter!)
Sound good?
Sorry about that. I see what you mean.
It doesn’t look that way to me at first blush. Thanks for the consideration, though. :-)
I think here is where the use of possessive pronouns betrays us. What I’m very committed to is that P is more than Q, so a priori knowing everything about Q doesn’t necessarily tell us anything about why P arises in the first place. The only reason we seem to think this is likely, as far as I know, is that Q is specifically the impression that P-like phenomena exist “in others.” (I honestly can’t think of a way to describe the relationship between P and Q without talking about Q in terms of others. I think that might be intrinsic to the definition of Q.)
What we will have explained with a full and robust theory of Q is why the impression of “others who have P-type experience” arises. (Again, I don’t know how else to phrase that.) That wouldn’t tell us why red appears as red, although it would tell us why others who are conscious (if any) would be under the impression that we experience red as red.
Or said a little differently, it seems perfectly plausible to me that my impression that others are conscious might have nothing to do with why I’m conscious. It might be based solely on the fact that I’m conscious.
Now, if it turns out that those two really don’t have anything to do with one another, that would be surprising to me because of the nature of Q: my impression is that others are conscious for the same reason I am. But my evidence for others’ consciousness is of a completely different nature than that of my own. So, if they really don’t have anything to do with one another, then solipsism seems much more likely.
But even in that solipsistic case, I wouldn’t say that there’s something special about me. I’d say there’s something special about P in that it’s the only perspective possible. It just so happens that from the only possible perspective, there is this impression of a particular identity, which is under the delusion that there are other, comparable identities “out there”. In this situation, there’s no other perspective one can don in order to say that there’s nothing special about me as compared to any other random human. I’m special because I’m the one whose identity is wrapped up in P, and in a solipsistic universe there’s no one else like that. As far as I know, that’s what solipsism means.
(Of course, because of Q, I would predict that you would make the same argument about yourself. But I know better! :-P )
I’ll say once more that I suspect that a full theory of Q would, indeed, go a long way to explaining P. But I’m also aware that I’m under this impression because of Q. This makes it extremely difficult to fathom what the connection between a Q-explanation and a P-explanation could possibly look like. After all, if such a connection did not exist, I would still have a strong suspicion that a Q-explanation would yield a P-explanation.
I don’t think this comparison works because of a recursive element that’s in consciousness. With those other phenomena, we can look at an aspect of P, apply a mental model, and predict what the next experience in P will be. But what is to be explained is the arising of P in the first place. It’s hard to make sense of what making predictions in that context would even mean, in part because we can’t experience P from outside of P. We can’t look at P as a whole the way we can look at our visual impression that the Earth is flat.
I’m inclined to agree. The problem is that there isn’t a perspective to take from where you can say that your ability to take perspectives is an external phenomenon. It becomes very convoluted to even try to say what it would mean for the fact that your subjective experience is special to you to be a bias. Biased for whom, in what way? What’s the objective truth that this “bias” is causing you to mentally deviate from? It’s not an impression that my consciousness is different to me than my impression of others’ consciousness is; rather, it’s a fact, as objective as any fact could possibly be. I could totally believe that there’s some kind of weird cognitive illusion trick being played on me such that I’m not actually writing these words, but there is no possible way for my impression that I’m writing these words to not actually exist. What would that even mean? I’m more certain that I’m under the impression that I’m experiencing what I’m experiencing than of anything else, bar none. And I can be so certain of this because the only way to offer me evidence to the contrary is through experience.
I think part of the tangle here is this implicit idea that there’s what Nagel calls a “view from nowhere” that we can always take to describe phenomena. It is, for instance, the idea that 2+2=4 is an objective fact that is part of the universe itself rather than just a facet of how we happen to experience it. It’s true no matter who is talking, and disagreement with that fact is a form of being objectively wrong. But that model—the idea that there’s this objective truth out there independent of subjective experience—is not something we can ever even in principle get evidence for. It’s much like how you can never know for sure that you’re not dreaming: any test you can perform is a test you can dream. There’s no way out even in principle. This doesn’t mean you are dreaming, but it does mean that you can’t use the supposed fact that you’re not dreaming as part of your evidence that you’re not dreaming. In the same sort of way, there’s no way to get evidence that there’s an objective world outside of your experience. That doesn’t mean it isn’t there, but it does mean that there’s no way to get evidence for that world’s existence. Any such evidence is evidence you’d have to experience.
(Let me emphasize here that I’m not arguing against reductionistic materialism. I’m just pointing out a tangle in attempts to use reductionistic materialism to explain the ability to experience. I’m sure we can come up with a model of consciousness that works within reductionistic materialism, but it’s not at all clear how that model could possibly be true to the a priori fact that the model itself arises out of our experiences.)
So how do you get outside of experience in order to demonstrate how experience itself arises?
The question doesn’t seem even in principle answerable.
That’s why it’s called the hard problem of consciousness!
Oops, actually, the latter definition is closer to what I had in mind. It seems like we need three letters:
P: Your own subjective personal experience.
Q: The personal subjective experience that you suspect other people are having, which may be similar to yours in some way; or, as you put it, the impression that “others have P-type experience”. You have no way of accessing this experience directly, and no way of experiencing it yourself.
Pq: “The part of P that gives the impression that we describe as ‘Others seem to be conscious.’” Pq is all the evidence you have for Q’s existence.
Since Pq is a part of P, as you said, I don’t want to focus too much on it. I also want to emphasize that P is your own personal experience, not any abstract “subject’s”. It’s the one that you can access directly.
Moving on, you say:
1). I would agree with your statement if you removed the word “completely”. Obviously, you know you are conscious, and you can experience P directly. However, you can also collect the same kind of data on yourself (or have someone, or some thing, do it for you) as you would on other people. For example, you could get your brain scanned, record your own voice and then play it back, install a sensor on your fridge that records your feeding habits, etc.; these are all real pieces of evidence that people are routinely collecting for practical purposes.
2). If you think that the above paragraph is true, then it would follow that you (probably) can collect some data on your own Q, as it would be experienced by someone else who is conscious (assuming, again, that you are not the only conscious being in the Universe, and that your own consciousness is not privileged in any cosmic way).
3). If you agree with that as well, then, assuming that we ever develop a good enough model of Q which would allow you to predict any person’s behavior with some useful degree of certainty, such a model would then be able to predict your own behavior with some useful degree of certainty. You could, for example, cover yourself with cameras and other sensors like a Christmas tree, start the model running on your home computer, then leave for work. And when you came back, you could verify that the model predicted your behavior that day more or less correctly (and if you doubt your powers of recall, which you should, then you could always play back the video).
4). If you agree that the above is possible, then we can go one step further. A good model of Q would not only predict what a person would do, but also what he would think; in fact, this model would probably have to do that anyway—since a person’s thoughts are the hidden states that influence his behavior, which the model is trying to predict in the first place. Thus, the model will be able to predict your own thoughts, as well as your actions. I think this addresses your point regarding “the arising of P in the first place”, above.
5). At this point, we have a model that can explain both your thoughts and your actions, and it does so solely based on external evidence. It seems like there’s nothing left for P to explain, since Q explains everything. Thus, P is a null concept; this is the “objective truth that this “bias” is causing you to mentally deviate from”, which you asked about in your comment. That is, the “objective truth” is that P can be fully explained solely in terms of Q, even though it doesn’t feel like it could be.
I am pretty sure you disagree with the conclusion (5). Do you disagree with (1) through (4) as well, or do you disagree that (5) follows from (4)—or both ?
Eeergh, that’s a whole other topic for a whole other thread...
Why not just use Occam’s Razor ?
I guess so!
Er… By “your”, do you mean to refer to me, personally? I’ll assume that’s what you meant unless you specify otherwise. Henceforth I am the subject! :-D
But that’s the crux! I know I’m conscious in a way that is so devastatingly self-evident that “evidence” to the contrary would render itself meaningless. But if some theory for P were developed that demonstrated that Q doesn’t exist, I wouldn’t view that theory as nonsensical. It’d be surprising, but not blatantly self-contradictory like a theory that says P doesn’t exist. I believe in Q for highly fallible reasons, but I believe in P for completely different reasons that don’t seem to be at all fallible to me. I deduce Q but I don’t deduce P.
(Although I wonder if we’re just spinning our wheels in the muck produced from a fuzzy word. If we both agree that P is self-evident while Q is deduced from Pq, perhaps there’s no disagreement...?)
Agreed. Notice, though, that the only way I’m able to correlate this Q-like data with P is because I can see the results of, say, the brain scan and recognize that it pairs with a particular part of P. For instance, I can tell that a certain brain scan corresponds with when I’m mentally rehearsing a Mozart piece because I experienced the rehearsal when the brain scanning occurred. So P is still implicit in the data-collection and -interpretation process.
Mostly agreed. If others experience, then others experience. :-)
The main point at which I disagree is that P is privileged. There’s no such thing as a P-less perspective. But if we’re granting that others are actually conscious (i.e., that Q exists) and that we can switch subjects with a sort of P-transformation (i.e., we can grant that you have P and that within your P my consciousness is part of Q), then I think that might not be terribly important to your point. We can mimic strong objectivity by looking at those truths that remain invariant under such transformations.
Hmm… “behavior” is being used in two different ways here. When we use our “theory of Q” to make predictions, what we’re doing is assuming that Q exists and is indicated by Pq, and then we make predictions about what happens to Pq under certain circumstances. On the other hand, when we look at my “behavior”, what we’re considering is my P in a wider scope going beyond just Pq. For instance, others claim that they see blue when we shine light of a wavelength of 450 nm into their functional eyes. When we shine such light into my eyes, I see blue. Those are two very different kinds of “behavior” from my perspective!
But presumably under the P-transformation mentioned earlier, other subjects actually do experience blue, too. So we’ll just go with this. :-)
I agree with what you elaborate upon after this. Since the “behavior” here is a kind of experience, I would include the experience of thinking in that. So yes, already granted.
I wonder if you arranged your sentence a little bit backwards...? I think you meant to say, “It seems like there’s nothing left of P to explain, since our theory of Q explains everything.” Is that what you meant?
If so, then sure. There’s a detail here I’m uneasy about, but I think it’s minor enough to ignore (rather than write three more paragraphs on!).
Hmm. You seem to be saying two different things here as though they’re the same thing. One I strongly disagree with, and the other I half-agree with.
The one I half-agree with is that based on the trajectory you describe, it seems we can describe P with the same brush we use to explain Q. The half I hesitate about is this claim that we can just equate P and Q. That’s the part that is to be explained! But perhaps something would arise in the process of elaborating on a theory of Q.
The part I totally disagree with is the claim that “P is a null concept”. Any theory that disregards P as a hallucination, or irrelevant, or a bias of any sort, is incoherent. I’ll grant that the impression that P is special could turn out to be a bias, but not P itself. And we can’t disregard the relevance of P. How would we ever gain evidence that P can be disregarded? Doesn’t that evidence have to come through P?
But I do agree:
We should be able to predict Pq with evidence that remains fixed under a P-transformation.
It seems easier and more consistent to assume that Pq points to an extant Q.
If Q exists, then under a P-transformation my experience (previously P) is part of Q.
Therefore, a full model of Pq should offer a kind of explanation of P.
But I still don’t see how this model actually connects P and Q. It just assumes that Q exists and that it’s a kind of P (i.e., that P-transformations make sense and are possible).
Fair enough!
Because if you were dreaming, your idea of Occam’s Razor would be contained within the dream.
I’m reminded of some brilliant times I’ve tried to become lucid in my dreams. I look at an elephant standing in my living room and think, “Why is there an elephant in my living room? That’s awfully odd. Could I be dreaming? Well, if I were, this would be really strange without much of an explanation. But the elephant is here because I went to China and drank tea with a spoon. That makes sense, so clearly I’m not dreaming.”
So when you go through an analysis of whether the assumption that you’re awake yields shorter code in its description than the assumption that you’re dreaming does, how sure can you really be that you have any evidence at all that you’re not dreaming? Sure, you can resort to Bayesian analysis—but how do you know you didn’t just concoct that in your dream tonight and that it’s actually gibberish?
I think in the end it’s just not very pragmatically useful to suppose I’m dreaming, so I don’t worry too much about this most of the time (which might be part of why I’m not lucid in more of my dreams!). But if you really want to tackle the issue, you’re going to run into some pretty basic epistemic obstacles. How do you come to any conclusions at all when anything you think you know could have been completely fabricated in the last three seconds?
Yep, that’s right. I’m just electrons in a circuit as far as you’re concerned ! :-)
Sure, that makes sense, but I’m not trying to abolish P altogether. All I’m trying to do is establish that P and Q are the same thing (most likely), and thus the “Hard Problem of Consciousness” is a non-issue. Thus, I can agree with the last sentence in the quote above, but that probably isn’t worth much as far as our discussion is concerned.
I’m not sure how these two sentences are connected. Obviously, a perfect brain scan shouldn’t indicate that you’re mentally rehearsing Mozart when you are not, in fact, mentally rehearsing Mozart. But such a brain scan will work on anyone, not just you, so I’m not sure what you’re driving at.
When I used the word “behavior”, I actually had a much narrower definition in mind—i.e., “something that we and our instruments can observe”. So, brain scans would fit into this category, but also things like, “the subject answers ‘blue’ when we ask him what color this 450nm light is”. I deliberately split up “what the test subject would say” from “what he will actually think and experience”. But it seems like you agree with both points, maybe:
Pretty much. What I meant was that, since our theory of Q explains everything, we gain nothing (intellectually speaking) by postulating that P and Q are different. Doing so would be similar to saying, “sure, the theory of gravity fully explains why the Earth doesn’t fall into the Sun, but there must also be invisible gnomes constantly pushing the Earth away to prevent that from happening”. Sure, the gnomes could exist, but there are lots of things that could exist...
If you agree with the first part, what are your reasons for disagreeing with the second ? To me, this sounds like saying, “sure, we can explain electricity with the same theory we use to explain magnetism, but that doesn’t mean that we can just equate electricity and magnetism”.
Maybe we disagree because of this:
Well, yeah, Occam’s Razor isn’t an oracle… It seems to me like we might have a fundamental disagreement about epistemology. You say “I think in the end it’s just not very pragmatically useful to suppose I’m dreaming, so I don’t worry too much about this most of the time”; I’m in total agreement there. But then, you say,
I personally don’t see any issues to tackle. Sure, I could be dreaming. I could also be insane, or a simulation, or a brain in a jar, or an infinite number of other things. But why should I care about these possibilities—not just “most of the time”, but at all ? If there’s no way, by definition, for me to tell whether I’m really, truly awake, and if I appear to be awake, then I’m going to go ahead and assume I’m awake after all. Otherwise, I might have to consider all of the alternatives simultaneously, and since there’s an infinite number of them, it would take a while.
It looks like you firmly disagree with the paragraph above, but I still can’t see why. But that does explain (if somewhat tangentially) why you believe that the “Hard Problem of Consciousness” is a legitimate problem, and why I do not.
You know, something clicked last night as I was falling asleep, and I realized why you’re right and where my confusion has been. But thanks for giving me something specific to work from! :-D
I think my argument can be summarized like so:
All data comes through P.
Therefore, all data about P comes through P.
All theories about P must be verified through data about P.
This means P is required to explain P.
Therefore, it doesn’t seem like there can be an explanation of P.
That last step is nuts. Here’s an analogy:
All (visual) data is seen.
Therefore, all (visual) data about how we see is seen.
All theories of vision must be verified through data about vision. (Let’s say we count only visual data. So we can use charts, but not the way an optic nerve feels to the touch.)
This means vision is required to explain vision.
Therefore, it doesn’t seem like there can be an explanation of vision.
The glaring problem is that explaining vision doesn’t render it retroactively useless for data-collection.
Thanks for giving me time to wrestle with this dumbth. Wrongness acknowledged. :-)
What I was driving at is that there’s no evidence that it corresponds to mentally rehearsing Mozart for anyone until I look at my own brain scan. All we can correlate the brain scans with is people’s reports of what they were doing. For instance, if my brain scan said I was rehearsing Mozart but I wasn’t, and yet I was inclined to report that I was, that would give me reason for concern.
The confusion here comes down to a point that I still think is true, but only because I think it’s tautological: From my point of view, my point of view is special. But I’m not sure what it would mean for this to be false, so I’m not sure there’s any additional information in this point—aside from maybe an emotional one (e.g., there’s a kind of emotional shift that occurs when I make the empathic shift and realize what something feels like from another person’s perspective rather than just my own).
Well, I do know that P exists, and I know that from my point of view P is extremely special. That’s not invisible gnomes; it’s just true. But saying “from my point of view P is extremely special” is tautological since P is my perspective. When something is a tautology, there’s nothing to explain. That’s why it’s hard to come up with an explanation for it. :-P
I agree with you now.
Oh, no no no! I didn’t mean to make a particularly big deal out of the possibility that we’re dreaming. I was trying to point out an analogous situation. There’s no plausible way to gather data in favor of the hypothesis that we’re not dreaming because the epistemology itself is entirely contained within the dream. I figured that might be easier to see than the point I was trying to make, which was the bit of balderdash that there’s no way to gather evidence in favor of P arising from something else because that evidence has to come through P. The arguments are somewhat analogous, only the one for dreaming works and the one for P doesn’t.
Two and a half points:
Again, this was meant to be an analogy. I wasn’t trying to argue that we can’t trust our data-collection process because we could be dreaming. I meant to offer a situation about dreaming that seemed analogous to the situation with consciousness. I was hoping to illustrate where the “hard” part of the hard problem of consciousness is by pointing out where the “hard” part in what I suppose we could call the “hard problem of dreaming” is.
This issue actually does become extremely pragmatic as soon as you start trying to practice lucid dreaming. The mind seems to default to assuming that whatever is being experienced is being experienced in a wakeful state, at least for most people. You have to challenge that to get to lucid dreaming. There have been many times where I’ve been totally sure I’m awake after asking myself if I’m dreaming, and have even done dream-tests like trying to read text and trying to fly, only to discover that all my testing and certainty was ultimately irrelevant because once I wake up, I can know with absurdly high probability that I was in fact dreaming.
Closely related to that second point is the fact that you know you dream regularly. In fact, there’s quite a bit of evidence to suggest that pretty much everyone dreams several times every night. Most people don’t go crazy, or discover that they’re brains in a jar, or whatever, every day. So if there’s a way that everything you know could be completely wrong, the possibility that you’re dreaming is much, much higher on the list of hypotheses than that, say, you have amnesia and are on the Star Trek holodeck. So picking out dreaming as a particular issue to be concerned about over the other possibilities isn’t really committing the fallacy of privileging the hypothesis. If we’re going to go with “You’re hallucinating everything you know,” the “You’re dreaming” hypothesis is a pretty darn good one to start with!
Again, though, I’m not trying to argue that we could be dreaming and therefore we can’t trust what we know. I was trying to point out an analogy which, upon reflection, doesn’t actually work.
All right, so it seems like we mostly agree now—cool !
Ok, I get it now, but I would still argue that we should assume we’re awake, until we have some evidence to the contrary; thus, the “hard problem of dreaming” is a non-issue. It looks like you might agree with me, somewhat:
In this situation, we assume that we’re awake a priori, and we are then deliberately trying to induce dreaming (which should be lucid, as well). So, we need a test that tells us whether we’ve succeeded or not. Thus, we need to develop some evidence-collecting techniques that work even when we’re asleep. This seems perfectly reasonable to me, but the setup is not analogous to your previous one—since we start out with the a priori assumption that we’re currently in the awake state; that we could transition to the dream state when we choose; and that there exists some evidence that will tell us which state we’re in. By contrast, the “hard problem of dreaming” scenario assumes that we don’t know which state we’re in, and that there’s no way to collect any relevant evidence at all.
Yep!
Rationality training: helping minds change since 2002. :-D
You’re coming at it from a philosophical angle, I think. I’m coming at it from a purely pragmatic one. Let’s say you’re dreaming right now. If you start with the assumption that you’re awake and then look for evidence to the contrary, typically the dream will accommodate your assumption and let you conclude you’re really awake. Even if your empirical tests conclusively show that you’re dreaming, dreams have a way of screwing with your reasoning process so that early assumptions don’t update on evidence.
For instance, a typical dream test is jumping up in the air and trying to stay there a bit longer than physics would allow. The goal, usually, is flight. I commonly find that if I jump into the air and then hang there for just a little itty bitty bit longer than physics would allow, I think something like, “Oh, that was barely longer than possible. So I must not be quite dreaming.” That makes absolutely no sense at all, but it’s worth bearing in mind that you typically don’t have your whole mind available to you when you’re trying to become lucid. (You might once you are lucid, but that’s not terribly useful, is it?)
In this case, you have to be really, insanely careful not to jump to the conclusion that you’re awake. If you think you’re awake, you have to pause and ask yourself, “Well, is there any way I could be mistaken?” Otherwise your stupid dreaming self will just go along with the plot and ignore the floating pink elephants passing through your living room walls. This means that when you’re working on lucid dreaming, it usually pays to recognize that you could be dreaming and can never actually prove conclusively that you’re awake.
But I agree with you in all cases where lucid dreaming isn’t of interest. :-)
That’s funny, I was about to say the same thing, only about yourself instead of me. But I think I see where you’re coming from:
So, your primary goal (in this specific case) is not to gain any new insights about epistemology or consciousness or whatever, but to develop a useful skill: lucid dreaming. In this case, yes, your assumptions make perfect sense, since you must correct for an incredibly strong built-in bias that only surfaces while you’re dreaming. That makes sense.
As I discussed here—see also this comment for clarification—we should in theory be able to discover if other beings have qualia if we were to learn about their brains in such microscopic detail that we are performing approximately the same computations in our brains that their brains are running; we then “get their qualia” first-hand.
As for arguing about qualia verbally, I hold qualia to be both entirely indefinable (implying that the concept is irreducible, if it exists) and something that the vast majority of humans apprehend directly and believe very strongly to exist. There is little to be gained by arguing about whether qualia exist, because of this problem—the best that can be achieved through argument is that both of you accept the consensus regarding the existence of this indefinable thing that nonetheless needs to be given a name.
Ok, I read your article as well as your comment, and found them very confusing. More on this in a minute.
How is that different from saying, “I found qualia to be a meaningless concept” ? I may as well say, “I think that human consciousness can best be explained by asdfgh, where asdfgh is an undefinable concept”. That’s not much of an explanation. In addition, this makes it impossible to discuss qualia at all (with anyone other than yourself, that is), which once again hints at a kind of solipsism.
This is weak evidence at best. The vast majority of humans apprehend all kinds of stuff directly (or so they believe), including gods, demons, honest politicians, etc. At least some of these things have a very low probability of existing, so how are qualia any different ? In addition, regardless of what the vast majority of people believe, I personally disagree with this “consensus regarding the existence of this indefinable thing”, so you’ll need to convince me some other way other than stating the consensus.
Note that I agree with the statement, “humans appear to act as though they believe that they experience things, just as I do”—a statement which we may reduce to something like, “humans experience things” (with the usual understanding that there’s some non-zero probability of this being false). I just don’t see why we need a special name for these experiences, and why we have to treat them any differently from anything else that humans do (or that rocks do, for that matter).
Which brings me back to your article (and comment). In it, you describe qualia as being indefinable. You then proceed to discuss them at great length, which means that you must have some sort of a definition in mind, or else your article would be meaningless (or perhaps it would be meaningless to everyone other than yourself, which isn’t much better). Your central argument appears to rest on the assumption that qualia are irreducible, but I still don’t understand why you’d assume that in the first place.
In short, qualia appear to be a “mysterious answer to a mysterious question”: they are impossible to define, irreducible, and totally inexplicable—and thus impossible to study or even discuss. They are a kind of élan vital, and therefore not terribly useful as a concept.