What you call “MP theories” only employ these inherently incomplete descriptions. However, these theories are causally closed.
If they are causally closed, then our conscious experience cannot influence our behaviour. In that case, our discussion about consciousness is logically and causally unconnected to the fact of our consciousness (the zombie objection). This contradicts what you said earlier, that
These aspects of consciousness are not causally inert
So which is correct?
Also, I don’t understand your distinction between the two types of theories or of phenomena. Leaving causality aside, what do you mean by:
descriptions which say nothing about the states of the basic entities beyond assigning each state a label
If those entities are basic, then they’re like electrons—they can’t be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
Given that starting point, it’s hardly surprising that you then reach conclusions like “it is impossible to say anything about conscious mind states”. And yet we all do every day, so something is wrong with your assumptions.
About-ness is tricky.
If consciousness is acausal and not logically necessary, then zombies would talk about it too, so the fact that we talk about it, and anything we say about it, proves nothing.
If consciousness is acausal but logically necessary, then the things we actually say about it may not be true, due to acausality, and it’s not clear how we can check if they’re true or not (I see no reason to believe in free will of any kind).
Finally, if consciousness is causal, then we should be able to have causally complete physical theories that include it. But you agree that the “MP theories” that don’t include consciousness are causally closed.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory—“each subjective experience has an objective, MP counterpart”, “sensation … is identical with experience”. So I’m a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don’t understand why it feels like anything?
Here’s what I meant. If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.
I emphatically do not know what consciousness is ontologically. I think understanding this question (which may or may not be legitimate) is half the problem.
All I know is that I feel things, experience things, and I have no idea how to treat this ontologically. Part of the reason is a clash of levels: the old argument that all my knowledge of physical laws and ontology etc. is a part of my experience, so I should treat experience as primary.
The feel of existence is not an ineffable thing about which nothing more can be said, except that it’s “something”.
I said that “all your questions come down to, why does our existence feel like something? and why does it feel the way it does?”
You focus on the second question—you consider different (counterfactual) possible experiences. When you ask why we experience colors, you’re implicitly adding “why colors rather than something else?” But to me that kind of question seems meaningless because we can’t ask the more fundamental question of “why do we experience anything at all?”
The core problem is that we can’t imagine or describe lack-of-experience. This is just another way of saying we can’t describe what experience is except by appealing to shared experience.
If we encountered aliens (or some members of LW) and they simply had no idea what we were talking about when we discussed conscious experience—there’s nothing we could say to explain to them what it is, much less why it’s a Hard Problem.
If they [MP theories] are causally closed, then our conscious experience cannot influence our behaviour.
There are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell’s equations in a vacuum are causally closed, but that theory doesn’t even describe atoms, let alone consciousness.
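For concreteness, the theory I have in mind there is just the source-free (vacuum) Maxwell equations:

$$
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
$$

Given the fields everywhere at one instant, these equations determine them at every later instant, so the theory is causally self-contained; yet it has no terms for charges, atoms, or brains.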
The other possibility (much more relevant) is that you have an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
If those entities are basic, then they’re like electrons—they can’t be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
Let’s take a specific aspect of conscious experience—color vision. For the sake of argument (since the reality is much more complicated than this), let’s suppose that the totality of conscious visual sensation at any time consists of a filled disk, at every point of which there is a particular shade of color. If an individual shade of color is completely specified by hue, saturation, and intensity, then you could formally represent the state of visual sensory consciousness by a 3-vector-valued function defined on the unit disk in the complex plane.
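Just to pin the toy example down (the symbols here are only illustrative labels of mine), the state at a given moment would be a map

$$
S : D \to [0,1]^3, \qquad D = \{\, z \in \mathbb{C} : |z| \le 1 \,\}, \qquad S(z) = \bigl(h(z),\, s(z),\, v(z)\bigr),
$$

where $h(z)$, $s(z)$, $v(z)$ are the hue, saturation, and intensity of the color sensed at the point $z$ of the disk, each normalized to the unit interval.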
Now suppose you had a physical theory in which an entity with such a state description is part of cause and effect within the brain. It would be possible to study that theory, and understand it, without knowing that the entity in question is the set of all current color sensations. Alternatively, the theory could be framed that way, as being about color and so on, from the beginning.
What’s the difference between the two theories, or two formulations of the theory? Much and maybe all of it would come back to an understanding of what the terms of the theory refer to. We do have a big phenomenological vocabulary, whose meanings are ultimately grounded in personal experience, and it seems that to fully understand a hypothetical MP theory containing consciousness, you have to link the theoretical terms with your private phenomenological vocabulary, experience, and understanding. Otherwise, there will only be an incomplete, more objectified understanding of what the theory is about, grounded only in abstraction and in the world-of-external-things-in-space part of experience.
Of course, you could arrive at a theory which you initially understood only in the objectified way, but then you managed to make the correct identifications with subjective experience. That’s what proposals for neural correlates of consciousness (e.g. Drescher’s gensyms) are trying to do. When I criticize these proposals, it’s not because I object in principle to proceeding that way, but because of the details—I don’t believe that the specific candidates being offered have the right properties for them to be identical with the elements of consciousness in question.
If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.
I don’t think we need to further consider epiphenomenalism, unless you have a special reason to do so. Common sense tells us that experiences are both causes and effects, and that a psychophysical identity theory is the sort of theory of consciousness we should be seeking. I just think that the thing on the physical end of the identity relationship is not at all what people expect.
You close out with the question of “why do we experience anything at all?” That is going to be a hard problem, but maybe we need to be able to say what we feel and what feeling is before we can say why feeling is.
Consider the similar question, “why is there something rather than nothing?” I think Heidegger, for one, was very interested in this question. But he ended up spending his life trying to make progress on what existence is, rather than why existence is.
I like to think that “reverse monism” is a small step in the right direction, even regarding the question “why is there experience”, because it undoes one mode of puzzlement: the property-dualistic one which focuses on the objectified understanding of the MP theory, and then asks “why does the existence of those objects feel like something”. If you see the relevant part of the theory as simply being about those feelings to begin with, then the question should collapse to “why do such things exist” rather than “why do those existing things feel like something”. Though that is such a subtle difference that maybe it’s no difference at all. Mostly, I’m focused on the concrete question of “what would physics have to be like for a psychophysical identity theory to be possible?”
Apologies for the late and brief reply. My web presence has been and will continue to be very sporadic for another two weeks.
there are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell’s equations in a vacuum are causally closed, but that theory doesn’t even describe atoms, let alone consciousness.
If it was wrong, how could it be causally closed? No subset of our physical theories (such as Maxwell’s equations) is causally disconnected from the rest of them. They all describe common interacting entities.
The other possibility [....] an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
Our MP theory has a short closed list of fundamental entities and forces which are allowed to be causative. Consciousness definitely isn’t one of these.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to “consciousness”. And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is “consciousness”?
You close out with the question of “why do we experience anything at all?” That is going to be a hard problem
It needn’t be. Here’s a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
Relevant facts:

- We believe in and report conscious experience even though we can’t define in words what it is or what its absence would be like. (Sounds like a mental glitch to me.)
- This self-reporting falls apart when you look at the brain closely: experiences, actions, etc. are not only spatially but also temporally distributed (as they must be). Yet people discussing consciousness try to explain our innate feelings rather than build a theory on those facts. In other words, without the innate feeling we wouldn’t even be talking about this.
- Different people vary in how strongly they hold this idea, and rational argument (as in this discussion) is weak at changing it.
- We know our cognitive architecture reliably gives rise to certain ideas and behaviors that are common to practically every culture: e.g. belief in spirits, gods, or an afterlife.
Here’s one possible mechanism, too: our cognitive architecture makes us regularly think “I am conscious!”. Repeated thoughts, with nothing opposing them (at younger ages at least), become beliefs (compare: people brought up to believe anything not frowned upon by society tend to keep believing it).
Causal closure in a theory is a structural property of the theory, independent of whether the theory is correct. We are probably not living in a Game-of-Life cellular automaton, but you can still say that the Game of Life is causally closed.
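To make “structural property” concrete, here is a minimal sketch (my own illustration in Python; the function name and the blinker example are mine, not anything from our discussion). The next generation is computed from the current one and nothing else, so within the model there is simply no place for an outside cause to enter, whether or not the model describes our world:

```python
from collections import Counter

def life_step(live_cells):
    """Return the next generation of a set of live (x, y) cells.

    The next state depends only on the current state -- that is the
    sense in which the Game of Life, as a theory, is causally closed.
    """
    # Count live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates between two configurations, driven solely by
# the rule above -- no external cause is needed or even expressible.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # vertical phase: {(1, 0), (1, 1), (1, 2)}
```

Nothing in that rule mentions blinkers or gliders by name, yet they are there in the dynamics; that is the sense in which a closed theory can contain things it never labels.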
Consider the Standard Model of particle physics. It’s an inventory of fundamental particles and forces and how they interact. As a model it’s causally closed in the sense of being self-sufficient. But if we discover a new particle (e.g. a supersymmetric partner), it will turn out to have been incomplete and thus “wrong”.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to “consciousness”. And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is “consciousness”?
I totally agree that good definitions are important, and would be essential in justifying the identification of a theoretical C-term or property with consciousness. For example, one ambiguity I see coming up repeatedly in discussions of consciousness is whether only “self-awareness” is meant, or all forms of “awareness”. It takes time and care to develop a shared language and understanding here.
However, there are two paths to a definition of consciousness. One proceeds through your examination of your own experience. So I might say: “You know how sometimes you’re asleep and sometimes you’re awake, and how the two states are really different? That difference is what I mean by consciousness!” And then we might get onto dreams, and how dreams are a form of consciousness experienced during sleep, and so the starting point needs to be refined. But we’d be on our way down one path.
The other path is the traditional scientific one, and focuses on other people, and on treating them as objects and as phenomena to be explained. If we talk about sleep and wakefulness here, we mean states exhibited by other people, in which certain traits are observed to co-occur: for example, lying motionless on a bed, breathing slowly and regularly, and being unresponsive to mild stimuli, versus moving around, making loud structured noises, and responding in complex ways to stimuli. Science explains all of that in terms of physiological and cognitive changes.
So this is all about the relationship between the first and second paths of inquiry. If on the second path we find nothing called consciousness, that presents one sort of problem. If we do find, on the second path, something we wish to call consciousness, that presents a different problem and a lesser problem, namely, what is its relationship to consciousness as investigated in the first way? Do the two accounts of consciousness match up? If they don’t, how is that to be resolved?
These days, I think most people on the second path do believe in something called consciousness, which has a causal and explanatory role, but they may disagree with some or much of what people on the first path say about it. In that situation, you only face the lesser problem: you agree that consciousness exists, but you have some dispute about its nature. (Of course, the followers of the two paths have their internal disagreements, with their peers, as well. We are not talking about two internally homogeneous factions of opinion.)
Here’s a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
If you want to deny that there actually is any such thing as consciousness (saying that there is only a belief in it), you’ll need to define your terms too. It may be that you are not denying consciousness as such, just some particular concept of it. Let’s start with the difference between sleep and wakefulness. Do you agree that there is a subjective difference there?