1′. There is a world, which includes subjective experiences, and (presumably) things which are not subjective experiences.
Let’s talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only “presumable”? Do you have in mind an experiment to falsify it?
2′. All information I have about the world, including the subjective experiences of other people, comes through my subjective experiences.
And these subjective experiences are all caused by, and contain the same information as, objective events in the MP world. Therefore all information you have about the MP world is also contained in the MP world. Do you agree?
3′. I possess mathematical/physical theories which appear adequate to describe much of the posited world to varying degrees, but which do not refer to subjective experiences.
Do you agree with my expectation that even with future refinements of these theories, the MP world’s theories will remain “closed on MP-ness” and are not likely to lead to descriptions of subjective experiences?
4′. Subjective experiences are causally consequential; they are affected by sensation and they affect behavior, among other things.
Sensation and behaviour are MP, not subjective.
Each subjective experience has an objective, MP counterpart which ultimately contains the same information (expanding on my point (2)). They have the same correlations with other events, and the same causative and explanatory power, as the subjective experiences they cause (or are identical to). Therefore, in a causal theory, it is possible to assign causative power only to MP phenomena without loss of explanatory power. Such a theory is better, because it’s simpler and also because we have theories of physics to account for causation, but we cannot account for subjective phenomena causing MP events.
Do you agree with the above?
I can put this another way, as per my item (5): to say that sensation affects (or causes) subjective experience is to imply the logical possibility of a counterfactual world where sensation affects experience differently or not at all. However, if we define sensation as the total of all relevant MP events—the entire state of your brain when sensing something—then I claim that sensation cannot, logically, lead to any subjective experience different from the one it does lead to. IOW, sensation does not cause experience, it is identical with experience.
This theory appears consistent with all we know to date. Do you expect it to be falsified in the future?
5′. The way the world actually is and the way the world actually works are a little more complicated than any theory I currently possess.
This doesn’t seem related to my own item (5), so please respond to that as well—do you agree with it?
As for your response, I agree that our MP theories are incomplete. Do you think that more complete theories would not, or could not, remain restricted to the MP world? (item 3)
I think I must try one more largely indirect response and see if that leaves anything unanswered.
Reality consists, at least in part, of entities in causal interaction. There will be some comprehensive and correct description of this. Then, there will be descriptions which leave something out. For example, descriptions which say nothing about the states of the basic entities beyond assigning each state a label, and which then describe those causal interactions in terms of state labels. The fundamental theories we have are largely of this second type. The controversial aspects of consciousness are precisely those aspects which are lost in passing to a description of the second type. These aspects of consciousness are not causally inert, or else conscious beings wouldn’t be able to notice them and remark upon them; but again, all the interesting details of how this works are lost in the passage to a description of the second type, which by its very nature can only describe causality in terms of arbitrary laws acting on entities whose natures and differences have been reduced to a matter of labels.
What you call “MP theories” only employ these inherently incomplete descriptions. However, these theories are causally closed. So, even though we can see that they are ontologically incomplete, people are tempted to think that there is no need to expand the ontology; we just need to find a way to talk about life and everything we want to explain in terms of the incomplete ontology.
Since ontological understandings can develop incrementally, in practice such a program might develop towards ontologically complete theories anyway, as people felt the need to expand what they mean by their concepts. But that’s an optimistic interpretation, and clearly a see-no-evil approach also has the potential to delay progress.
Let’s talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only “presumable”? Do you have in mind an experiment to falsify it?
I have trouble calling this a “world”. The actual world contains consciousness. We can talk about the parts of the actual world that don’t include consciousness. We can talk about the actual world, described in some abstract way which just doesn’t mention consciousness. We can talk about a possible world that doesn’t contain consciousness.
But the way you set things up, it’s as if you’re inviting me to talk about the actual world, using a theoretical framework which doesn’t mention consciousness, and in a way which supposes that consciousness also plays no causal role. It just seems the maximally unproductive way to proceed. Imagine if we tried to talk about gravity in this way: we assume models which don’t contain gravity, and we try to talk about phenomena as if there was no such thing as gravity. That’s not a recipe for understanding gravity, it’s a recipe for entirely dispensing with the concept of gravity. Yet it doesn’t seem you want to do without the concept of consciousness. Instead you want to assume a framework in which consciousness does not appear and plays no role, and then deduce consequences. Given that starting point, it’s hardly surprising that you then reach conclusions like “it is impossible to say anything about conscious mind states”. And yet we all do every day, so something is wrong with your assumptions.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory—“each subjective experience has an objective, MP counterpart”, “sensation … is identical with experience”. So I’m a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don’t understand why it feels like anything?
Originally you said:
All your questions come down to: why does our existence feel like something?
but that’s not quite right. The feel of existence is not an ineffable thing about which nothing more can be said, except that it’s “something”. To repeat my current list of problem items, experience includes colors, meanings, time, and a sort of unity. Each one poses a concrete problem. And for each one, we do have some sort of phenomenological access to the thing itself, which permits us to judge whether a given ontological account answers the problem or not. I’m not saying such judgments are infallible or even agreed upon, just that we do possess the resources to bring subjective ontology and physical ontology into contact, for comparison and contrast.
Does this make things any clearer? You are creating problems (impossibility of knowledge of consciousness) and limitations (future theories won’t contain descriptions of subjective experience) by deciding in advance to consider only theories with the same general ontology we have now. Meanwhile, on the side you make a little progress by deciding to think about consciousness as a causal element after all, but then you handicap this progress by insisting on switching back to the no-consciousness ontology as soon as possible.
As a footnote, I would dispute that sensation and behavior, as concepts, contain no reference to subjectivity. A sensation was originally something which occurred in consciousness. A behavior was an act of an organism, partly issuing from its mental state. They originally suppose the ontology of folk psychology. It is possible to describe a behavior without reference to mental states, and it is possible to define sensation or behavior analogously, but to judge whether the entities picked out by such definitions really deserve those names, you have to go back to the mentalistic context in which the words originated and see if you are indeed talking about the same thing.
What you call “MP theories” only employ these inherently incomplete descriptions. However, these theories are causally closed.
If they are causally closed, then our conscious experience cannot influence our behaviour. Then our discussion about consciousness is logically and causally unconnected to the fact of our consciousness (the zombie objection). This contradicts what you said earlier, that
These aspects of consciousness are not causally inert
So which is correct?
Also, I don’t understand your distinction between the two types of theories or of phenomena. Leaving causality aside, what do you mean by:
descriptions which say nothing about the states of the basic entities beyond assigning each state a label
If those entities are basic, then they’re like electrons—they can’t be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
Given that starting point, it’s hardly surprising that you then reach conclusions like “it is impossible to say anything about conscious mind states”. And yet we all do every day, so something is wrong with your assumptions.
About-ness is tricky.
If consciousness is acausal and not logically necessary, then zombies would talk about it too, so the fact that we talk and anything we say about it proves nothing.
If consciousness is acausal but logically necessary, then the things we actually say about it may not be true, due to acausality, and it’s not clear how we can check if they’re true or not (I see no reason to believe in free will of any kind).
Finally, if consciousness is causal, then we should be able to have causally-complete physical theories that include it. But you agree that the “MP theories” that don’t include consciousness are causally closed.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory—“each subjective experience has an objective, MP counterpart”, “sensation … is identical with experience”. So I’m a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don’t understand why it feels like anything?
Here’s what I meant. If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything), then every physical event can be identified with the experience it causes.
I emphatically do not know what consciousness is ontologically. I think understanding this question (which may or may not be legitimate) is half the problem.
All I know is that I feel things, experience things, and I have no idea how to treat this ontologically. Part of the reason is a clash of levels: the old argument that all my knowledge of physical laws and ontology etc. is a part of my experience, so I should treat experience as primary.
The feel of existence is not an ineffable thing about which nothing more can be said, except that it’s “something”.
I said that “all your questions come down to, why does our existence feel like something? and why does it feel the way it does?”
You focus on the second question—you consider different (counterfactual) possible experiences. When you ask why we experience colors, you’re implicitly adding “why colors rather than something else?” But to me that kind of question seems meaningless, because we can’t ask the more fundamental question of “why do we experience anything at all?”
The core problem is that we can’t imagine or describe lack-of-experience. This is just another way of saying we can’t describe what experience is except by appealing to shared experience.
If we encountered aliens (or some members of LW) and they simply had no idea what we were talking about when we discussed conscious experience—there’s nothing we could say to explain to them what it is, much less why it’s a Hard Problem.
If they [MP theories] are causally closed, then our conscious experience cannot influence our behaviour.
There are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell’s equations in a vacuum are causally closed, but that theory doesn’t even describe atoms, let alone consciousness.
The other possibility (much more relevant) is that you have an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
If those entities are basic, then they’re like electrons—they can’t be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
Let’s take a specific aspect of conscious experience—color vision. For the sake of argument (since the reality is much more complicated than this), let’s suppose that the totality of conscious visual sensation at any time consists of a filled disk, at every point in which there is a particular shade of color. If an individual shade of color is completely specified by hue, saturation, and intensity, then you could formally represent the state of visual sensory consciousness by a 3-vector-valued function defined on the unit disk in the complex plane.
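For concreteness, the formal representation just described can be sketched in code. This is an illustrative toy only: the grid discretization and the particular example state are my own assumptions, not part of the argument.

```python
import numpy as np

# Toy state space for the visual-field example: the "visual field" is the
# unit disk in the complex plane, and each point carries a 3-vector
# (hue, saturation, intensity). A full state of visual sensory consciousness
# is then a 3-vector-valued function on the disk, discretized here on a grid.

def make_visual_state(n=64):
    """Return (mask, state): which grid points lie in the unit disk,
    and an (hue, saturation, intensity) 3-vector at each grid point."""
    xs = np.linspace(-1, 1, n)
    x, y = np.meshgrid(xs, xs)
    z = x + 1j * y                       # points of the complex plane
    mask = np.abs(z) <= 1.0              # the unit disk
    state = np.zeros((n, n, 3))          # (hue, saturation, intensity)
    # One example state: hue varies with the argument of z, saturation
    # with |z|, intensity is uniform. Any function disk -> R^3 is a state.
    state[..., 0] = (np.angle(z) / (2 * np.pi)) % 1.0
    state[..., 1] = np.clip(np.abs(z), 0.0, 1.0)
    state[..., 2] = 0.5
    state[~mask] = 0.0                   # undefined outside the disk
    return mask, state

mask, state = make_visual_state()
```

The point of the formalism is only that such a state description is mathematically unremarkable: nothing in the array itself says that it is a field of color sensations.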
Now suppose you had a physical theory in which an entity with such a state description is part of cause and effect within the brain. It would be possible to study that theory, and understand it, without knowing that the entity in question is the set of all current color sensations. Alternatively, the theory could be framed that way—as being about color, etc—from the beginning.
What’s the difference between the two theories, or two formulations of the theory? Much and maybe all of it would come back to an understanding of what the terms of the theory refer to. We do have a big phenomenological vocabulary, whose meanings are ultimately grounded in personal experience, and it seems that to fully understand a hypothetical MP theory containing consciousness, you have to link the theoretical terms with your private phenomenological vocabulary, experience, and understanding. Otherwise, there will only be an incomplete, more objectified understanding of what the theory is about, grounded only in abstraction and in the world-of-external-things-in-space part of experience.
Of course, you could arrive at a theory which you initially understood only in the objectified way, but then you managed to make the correct identifications with subjective experience. That’s what proposals for neural correlates of consciousness (e.g. Drescher’s gensyms) are trying to do. When I criticize these proposals, it’s not because I object in principle to proceeding that way, but because of the details—I don’t believe that the specific candidates being offered have the right properties for them to be identical with the elements of consciousness in question.
If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.
I don’t think we need to further consider epiphenomenalism, unless you have a special reason to do so. Common sense tells us that experiences are both causes and effects, and that a psychophysical identity theory is the sort of theory of consciousness we should be seeking. I just think that the thing on the physical end of the identity relationship is not at all what people expect.
You close out with the question of “why do we experience anything at all?” That is going to be a hard problem, but maybe we need to be able to say what we feel and what feeling is before we can say why feeling is.
Consider the similar question, “why is there something rather than nothing?” I think Heidegger, for one, was very interested in this question. But he ended up spending his life trying to make progress on what existence is, rather than why existence is.
I like to think that “reverse monism” is a small step in the right direction, even regarding the question “why is there experience”, because it undoes one mode of puzzlement: the property-dualistic one which focuses on the objectified understanding of the MP theory, and then says “why does the existence of those objects feel like something”. If you see the relevant part of the theory as simply being about those feelings to begin with, then the question should collapse to “why do such things exist” rather than “why do those existing things feel like something”. Though that is such a subtle difference, that maybe it’s no difference at all. Mostly, I’m focused on the concrete question of “what would physics have to be like for a psychophysical identity theory to be possible?”
Apologies for the late and brief reply. My web presence has been and will continue to be very sporadic for another two weeks.
there are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell’s equations in a vacuum are causally closed, but that theory doesn’t even describe atoms, let alone consciousness.
If it were wrong, how could it be causally closed? No subset of our physical theories (such as Maxwell’s equations) is causally disconnected from the rest of them. They all describe common interacting entities.
The other possibility [....] an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
Our MP theory has a short closed list of fundamental entities and forces which are allowed to be causative. Consciousness definitely isn’t one of these.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to “consciousness”. And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is “consciousness”?
You close out with the question of “why do we experience anything at all?” That is going to be a hard problem
It needn’t be. Here’s a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
Relevant facts: we believe in and report conscious experience even though we can’t define in words what it is or what its absence would be like. (Sounds like a mental glitch to me.) This self-reporting also falls apart when you look at the brain closely: experiences, actions, etc. are not only spatially but temporally distributed (as they must be), yet people discussing consciousness try to explain our innate feelings rather than build a theory on those facts. IOW, without the innate feeling we wouldn’t even be talking about this. Different people vary in their level of support for this idea, and rational argument (as in this discussion) is weak at changing it. And we know our cognitive architecture reliably gives rise to some ideas and behaviors which are common to practically every culture: e.g. belief in spirits, gods, or an afterlife.
Here’s a candidate mechanism, too: our cognitive architecture makes us regularly think “I am conscious!” Repeated thoughts, with nothing opposing them (at younger ages at least), become belief (compare: people brought up to believe anything not frowned upon by society tend to keep believing it).
Causal closure in a theory is a structural property of the theory, independent of whether the theory is correct. We are probably not living in a Game-of-Life cellular automaton, but you can still say that the Game of Life is causally closed.
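To illustrate (this sketch is mine, not part of the discussion): the Game of Life’s update rule mentions nothing but cell states and the rule itself. That self-sufficiency is the sense in which it is causally closed as a theory, even though its states are bare 0/1 labels with no intrinsic nature.

```python
from collections import Counter

def life_step(grid):
    """One update of Conway's Game of Life, where grid is a set of live
    (row, col) cells. The next state depends only on the current cell
    states and the rule -- nothing outside the model is ever consulted."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((r + dr, c + dc)
                     for (r, c) in grid
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is live next step if it has 3 live neighbours, or 2 and is
    # already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in grid)}

# A "blinker" oscillates with period 2, determined entirely by the rule.
blinker = {(0, -1), (0, 0), (0, 1)}
assert life_step(life_step(blinker)) == blinker
```

Whether or not our world is a cellular automaton, the closure here is a structural property of the model: the rule never needs to ask what a “live” cell is, beyond its label.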
Consider the Standard Model of particle physics. It’s an inventory of fundamental particles and forces and how they interact. As a model it’s causally closed in the sense of being self-sufficient. But if we discover a new particle (e.g. a supersymmetric partner), it will have been incomplete and thus “wrong”.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to “consciousness”. And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is “consciousness”?
I totally agree that good definitions are important, and would be essential in justifying the identification of a theoretical C-term or property with consciousness. For example, one ambiguity I see coming up repeatedly in discussions of consciousness is whether only “self-awareness” is meant, or all forms of “awareness”. It takes time and care to develop a shared language and understanding here.
However, there are two paths to a definition of consciousness. One proceeds through your examination of your own experience. So I might say: “You know how sometimes you’re asleep and sometimes you’re awake, and how the two states are really different? That difference is what I mean by consciousness!” And then we might get onto dreams, and how dreams are a form of consciousness experienced during sleep, and so the starting point needs to be refined. But we’d be on our way down one path.
The other path is the traditional scientific one, and focuses on other people, and on treating them as objects and as phenomena to be explained. If we talk about sleep and wakefulness here, we mean states exhibited by other people, in which certain traits are observed to co-occur: for example, lying motionless on a bed, breathing slowly and regularly, and being unresponsive to mild stimuli, versus moving around, making loud structured noises, and responding in complex ways to stimuli. Science explains all of that in terms of physiological and cognitive changes.
So this is all about the relationship between the first and second paths of inquiry. If on the second path we find nothing called consciousness, that presents one sort of problem. If we do find, on the second path, something we wish to call consciousness, that presents a different problem and a lesser problem, namely, what is its relationship to consciousness as investigated in the first way? Do the two accounts of consciousness match up? If they don’t, how is that to be resolved?
These days, I think most people on the second path do believe in something called consciousness, which has a causal and explanatory role, but they may disagree with some or much of what people on the first path say about it. In that situation, you only face the lesser problem: you agree that consciousness exists, but you have some dispute about its nature. (Of course, the followers of the two paths have their internal disagreements, with their peers, as well. We are not talking about two internally homogeneous factions of opinion.)
Here’s a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
If you want to deny that there actually is any such thing as consciousness (saying that there is only a belief in it), you’ll need to define your terms too. It may be that you are not denying consciousness as such, just some particular concept of it. Let’s start with the difference between sleep and wakefulness. Do you agree that there is a subjective difference there?
Let’s talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only “presumable”? Do you have in mind an experiment to falsify it?
And these subjective experiences are all caused by, and contain the same information as, objective events in the MP world. Therefore all information you have about the MP world is also contained in the MP world. Do you agree?
Do you agree with my expectation that even with future refinements of these theories, the MP world’s theories will remain “closed on MP-ness” and are not likely to lead to descriptions of subjective experiences?
Sensation and behaviour are MP, not subjective.
Each subjective experience has an objective, MP counterpart which ultimately contains the same information (expanding on my point (2)). They have the same correlations with other events, and the same causative and explanatory power, as the subjective experiences they cause (or are identical to). Therefore, in a causal theory, it is possible to assign causative power only to MP phenomena without loss of explanatory power. Such a theory is better, because it’s simpler and also because we have theories of physics to account for causation, but we cannot account for subjective phenomena causing MP events.
Do you agree with the above?
I can put this another way, as per my item (5): to say that sensation affects (or causes) subjective experience is to imply the logical possibility of a counterfactual world where sensation affects experience differently or not at all. However, if we define sensation as the total of all relevant MP events—the entire state of your brain when sensing something—then I claim that sensation cannot, logically, lead to any subjective experience different from the one it does lead to. IOW, sensation does not cause experience, it is identical with experience.
This theory appears consistent with all we know to date. Do you expect it to be falsified in the future?
This doesn’t seem related to my own item (5), so please respond to that as well—do you agree with it?
As for your response, I agree that our MP theories are incomplete. Do you think that more complete theories would not, or could not, remain restricted to the MP world? (item 3)
I think I must try one more largely indirect response and see if that leaves anything unanswered.
Reality consists, at least in part, of entities in causal interaction. There will be some comprehensive and correct description of this. Then, there will be descriptions which leave something out. For example, descriptions which say nothing about the states of the basic entities beyond assigning each state a label, and which then describe those causal interactions in terms of state labels. The fundamental theories we have are largely of this second type. The controversial aspects of consciousness are precisely those aspects which are lost in passing to a description of the second type. These aspects of consciousness are not causally inert, or else conscious beings wouldn’t be able to notice them and remark upon them; but again, all the interesting details of how this works are lost in the passage to a description of the second type, which by its very nature can only describe causality in terms of arbitrary laws acting on entities whose natures and differences have been reduced to a matter of labels.
What you call “MP theories” only employ these inherently incomplete descriptions. However, these theories are causally closed. So, even though we can see that they are ontologically incomplete, people are tempted to think that there is no need to expand the ontology; we just need to find a way to talk about life and everything we want to explain in terms of the incomplete ontology.
Since ontological understandings can develop incrementally, in practice such a program might develop towards ontologically complete theories anyway, as people felt the need to expand what they mean by their concepts. But that’s an optimistic interpretation, and clearly a see-no-evil approach also has the potential to delay progress.
I have trouble calling this a “world”. The actual world contains consciousness. We can talk about the parts of the actual world that don’t include consciousness. We can talk about the actual world, described in some abstract way which just doesn’t mention consciousness. We can talk about a possible world that doesn’t contain consciousness.
But the way you set things up, it’s as if you’re inviting me to talk about the actual world, using a theoretical framework which doesn’t mention consciousness, and in a way which supposes that consciousness also plays no causal role. It just seems the maximally unproductive way to proceed. Imagine if we tried to talk about gravity in this way: we assume models which don’t contain gravity, and we try to talk about phenomena as if there was no such thing as gravity. That’s not a recipe for understanding gravity, it’s a recipe for entirely dispensing with the concept of gravity. Yet it doesn’t seem you want to do without the concept of consciousness. Instead you want to assume a framework in which consciousness does not appear and plays no role, and then deduce consequences. Given that starting point, it’s hardly surprising that you then reach conclusions like “it is impossible to say anything about conscious mind states”. And yet we all do every day, so something is wrong with your assumptions.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory—“each subjective experience has an objective, MP counterpart”, “sensation … is identical with experience”. So I’m a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don’t understand why it feels like anything?
Originally you said:
but that’s not quite right. The feel of existence is not an ineffable thing about which nothing more can be said, except that it’s “something”. To repeat my current list of problem items, experience includes colors, meanings, time, and a sort of unity. Each one poses a concrete problem. And for each one, we do have some sort of phenomenological access to the thing itself, which permits us to judge whether a given ontological account answers the problem or not. I’m not saying such judgments are infallible or even agreed upon, just that we do possess the resources to bring subjective ontology and physical ontology into contact, for comparison and contrast.
Does this make things any clearer? You are creating problems (impossibility of knowledge of consciousness) and limitations (future theories won’t contain descriptions of subjective experience) by deciding in advance to consider only theories with the same general ontology we have now. Meanwhile, on the side you make a little progress by deciding to think about consciousness as a causal element after all, but then you handicap this progress by insisting on switching back to the no-consciousness ontology as soon as possible.
As a footnote, I would dispute that sensation and behavior, as concepts, contain no reference to subjectivity. A sensation was originally something which occurred in consciousness. A behavior was an act of an organism, partly issuing from its mental state. They originally suppose the ontology of folk psychology. It is possible to describe a behavior without reference to mental states, and it is possible to define sensation or behavior analogously, but to judge whether the entities picked out by such definitions really deserve those names, you have to go back to the mentalistic context in which the words originated and see if you are indeed talking about the same thing.
If they are causally closed, then our conscious experience cannot influence our behaviour. Then our discussion about consciousness is logically and causally unconnected to the fact of our consciousness (the zombie objection). This contradicts what you said earlier, that
So which is correct?
Also, I don’t understand your distinction between the two types of theories or of phenomena. Leaving causality aside, what do you mean by:
If those entities are basic, then they’re like electrons—they can’t be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
About-ness is tricky.
If consciousness is acausal and not logically necessary, then zombies would talk about it too, so the fact that we talk and anything we say about it proves nothing.
If consciousness is acausal but logically necessary, then the things we actually say about it may not be true, due to acausality, and it’s not clear how we can check if they’re true or not (I see no reason to believe in free will of any kind).
Finally, if consciousness is causal, then we should be able to have causally-complete physical theories that include it. But you agree that the “MP theories” that don’t include consciousness are causally closed.
Here’s what I meant. If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything), then every physical event can be identified with the experience it causes.
I emphatically do not know what consciousness is ontologically. I think understanding this question (which may or may not be legitimate) is half the problem.
All I know is that I feel things, experience things, and I have no idea how to treat this ontologically. Part of the reason is a clash of levels: the old argument that all my knowledge of physical laws and ontology etc. is a part of my experience, so I should treat experience as primary.
I said that “all your questions come down to, why does our existence feel like something? and why does it feel the way it does?”
You focus on the second question—you consider different (counterfactual) possible experiences. When you ask why we experience colors, you’re implicitly adding “why colors rather than something else?” But to me that kind of question seems meaningless because we can’t ask the more fundamental question of “why do we experience anything at all?”
The core problem is that we can’t imagine or describe lack-of-experience. This is just another way of saying we can’t describe what experience is except by appealing to shared experience.
If we encountered aliens (or some members of LW) and they simply had no idea what we were talking about when we discussed conscious experience—there’s nothing we could say to explain to them what it is, much less why it’s a Hard Problem.
There are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell’s equations in a vacuum are causally closed, but that theory doesn’t even describe atoms, let alone consciousness.
The other possibility (much more relevant) is that you have an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
Let’s take a specific aspect of conscious experience—color vision. For the sake of argument (since the reality is much more complicated than this), let’s suppose that the totality of conscious visual sensation at any time consists of a filled disk, at every point in which there is a particular shade of color. If an individual shade of color is completely specified by hue, saturation, and intensity, then you could formally represent the state of visual sensory consciousness by a 3-vector-valued function defined on the unit disk in the complex plane.
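To make that formal representation concrete, here is a minimal sketch of the proposed state space. Everything here is illustrative: the function names, the toy mapping from disk position to color, and the uniform sampling are my own assumptions, not part of any actual theory of vision.

```python
import numpy as np

def sample_disk(n, seed=0):
    """Sample n points from the unit disk in the complex plane.

    Uses sqrt(r) so that points are uniform in area, not radius.
    """
    rng = np.random.default_rng(seed)
    r = np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    return r * np.exp(1j * theta)

def visual_state(points):
    """Toy 'state of color consciousness': map each disk point to a
    (hue, saturation, intensity) 3-vector.

    The mapping here is arbitrary (hue follows the angle, saturation
    the radius, intensity is constant); the point is only the shape of
    the state space: unit disk -> 3-vector.
    """
    hue = (np.angle(points) / (2 * np.pi)) % 1.0
    sat = np.abs(points)
    val = np.full_like(sat, 1.0)
    return np.stack([hue, sat, val], axis=-1)

pts = sample_disk(1000)
state = visual_state(pts)
assert state.shape == (1000, 3)          # one 3-vector per disk point
assert np.all((state >= 0) & (state <= 1))
```

The interesting question in the text is not this representation itself, but whether a theory posed over such a state space could be understood without ever knowing that the 3-vectors are color sensations.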
Now suppose you had a physical theory in which an entity with such a state description is part of cause and effect within the brain. It would be possible to study that theory, and understand it, without knowing that the entity in question is the set of all current color sensations. Alternatively, the theory could be framed that way—as being about color, etc—from the beginning.
What’s the difference between the two theories, or two formulations of the theory? Much and maybe all of it would come back to an understanding of what the terms of the theory refer to. We do have a big phenomenological vocabulary, whose meanings are ultimately grounded in personal experience, and it seems that to fully understand a hypothetical MP theory containing consciousness, you have to link the theoretical terms with your private phenomenological vocabulary, experience, and understanding. Otherwise, there will only be an incomplete, more objectified understanding of what the theory is about, grounded only in abstraction and in the world-of-external-things-in-space part of experience.
Of course, you could arrive at a theory which you initially understood only in the objectified way, but then you managed to make the correct identifications with subjective experience. That’s what proposals for neural correlates of consciousness (e.g. Drescher’s gensyms) are trying to do. When I criticize these proposals, it’s not because I object in principle to proceeding that way, but because of the details—I don’t believe that the specific candidates being offered have the right properties for them to be identical with the elements of consciousness in question.
I don’t think we need to further consider epiphenomenalism, unless you have a special reason to do so. Common sense tells us that experiences are both causes and effects, and that a psychophysical identity theory is the sort of theory of consciousness we should be seeking. I just think that the thing on the physical end of the identity relationship is not at all what people expect.
You close out with the question of “why do we experience anything at all?” That is going to be a hard problem, but maybe we need to be able to say what we feel and what feeling is before we can say why feeling is.
Consider the similar question, “why is there something rather than nothing?” I think Heidegger, for one, was very interested in this question. But he ended up spending his life trying to make progress on what existence is, rather than why existence is.
I like to think that “reverse monism” is a small step in the right direction, even regarding the question “why is there experience”, because it undoes one mode of puzzlement: the property-dualistic one which focuses on the objectified understanding of the MP theory, and then says “why does the existence of those objects feel like something”. If you see the relevant part of the theory as simply being about those feelings to begin with, then the question should collapse to “why do such things exist” rather than “why do those existing things feel like something”. Though that is such a subtle difference, that maybe it’s no difference at all. Mostly, I’m focused on the concrete question of “what would physics have to be like for a psychophysical identity theory to be possible?”
Apologies for the late and brief reply. My web presence has been and will continue to be very sporadic for another two weeks.
If it was wrong, how could it be causally closed? No subset of our physical theories (such as Maxwell’s equations) is causally disconnected from the rest of them. They all describe common interacting entities.
Our MP theory has a short closed list of fundamental entities and forces which are allowed to be causative. Consciousness definitely isn’t one of these.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to “consciousness”. And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is “consciousness”?
It needn’t be. Here’s a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
Relevant facts: we believe in and report conscious experience even though we can’t define in words what it is or what its absence would be like (which sounds like a mental glitch to me). This self-reporting falls apart when you look at the brain closely: experiences, actions, etc. are not only spatially but also temporally distributed (as they must be). Yet people discussing consciousness try to explain our innate feelings rather than build a theory on those facts; IOW, without the innate feeling we wouldn’t even be talking about this. Different people vary in their level of support for this idea, and rational argument (as in this discussion) is weak at changing it. And we know our cognitive architecture reliably gives rise to certain ideas and behaviors common to practically every culture: e.g. belief in spirits, gods, or an afterlife.
Here’s a possible mechanism, too: our cognitive architecture makes us regularly think “I am conscious!” Repeated thoughts, with nothing opposing them (at younger ages at least), become beliefs (cf. how people brought to believe anything not frowned upon by society tend to keep believing it).
Causal closure in a theory is a structural property of the theory, independent of whether the theory is correct. We are probably not living in a Game-of-Life cellular automaton, but you can still say that the Game of Life is causally closed.
Consider the Standard Model of particle physics. It’s an inventory of fundamental particles and forces and how they interact. As a model it’s causally closed in the sense of being self-sufficient. But if we discover a new particle (e.g. a supersymmetric partner), the model will have been incomplete and thus “wrong”.
I totally agree that good definitions are important, and would be essential in justifying the identification of a theoretical C-term or property with consciousness. For example, one ambiguity I see coming up repeatedly in discussions of consciousness is whether only “self-awareness” is meant, or all forms of “awareness”. It takes time and care to develop a shared language and understanding here.
However, there are two paths to a definition of consciousness. One proceeds through your examination of your own experience. So I might say: “You know how sometimes you’re asleep and sometimes you’re awake, and how the two states are really different? That difference is what I mean by consciousness!” And then we might get onto dreams, and how dreams are a form of consciousness experienced during sleep, and so the starting point needs to be refined. But we’d be on our way down one path.
The other path is the traditional scientific one, and focuses on other people, and on treating them as objects and as phenomena to be explained. If we talk about sleep and wakefulness here, we mean states exhibited by other people, in which certain traits are observed to co-occur: for example, lying motionless on a bed, breathing slowly and regularly, and being unresponsive to mild stimuli, versus moving around, making loud structured noises, and responding in complex ways to stimuli. Science explains all of that in terms of physiological and cognitive changes.
So this is all about the relationship between the first and second paths of inquiry. If on the second path we find nothing called consciousness, that presents one sort of problem. If we do find, on the second path, something we wish to call consciousness, that presents a different problem and a lesser problem, namely, what is its relationship to consciousness as investigated in the first way? Do the two accounts of consciousness match up? If they don’t, how is that to be resolved?
These days, I think most people on the second path do believe in something called consciousness, which has a causal and explanatory role, but they may disagree with some or much of what people on the first path say about it. In that situation, you only face the lesser problem: you agree that consciousness exists, but you have some dispute about its nature. (Of course, the followers of the two paths have their internal disagreements, with their peers, as well. We are not talking about two internally homogeneous factions of opinion.)
If you want to deny that there actually is any such thing as consciousness (saying that there is only a belief in it), you’ll need to define your terms too. It may be that you are not denying consciousness as such, just some particular concept of it. Let’s start with the difference between sleep and wakefulness. Do you agree that there is a subjective difference there?