All your questions come down to: why does our existence feel like something? Why is there subjective, personal, conscious experience? And why does it feel the way it does and not some other way?
In the following, I assume that your position about qualia deserving an explanation is correct. I don’t have a fully formed opinion yet myself—I’ll defer giving an explanation of my own—but here’s what I came up with by assuming your position.
First, I propose that we both accept the Materialistic Hypothesis as regards minds. In the following text I will use the abbreviation MP for the materialistic, physical world. My formulation of the hypothesis states:
There is an MP world which is objective, common to everyone, and exists independently of our conscious, subjective experiences.
All information I have about the experiences (e.g. of color) of others is part of the MP world. I can receive such information only through MP means, via the self-reporting of others. I cannot in any way experience or inspect their experiences directly, in the way I have my own experience, or in some third non-materialistic way. Symmetrically, I can only provide information to anyone else about my own experiences via the MP world. (I am not special.)
If we ignore subjective/conscious experience, our physical theories are a complete description of the MP world. They may be modified, extended or refined in the future, but it is reasonable to assume (until shown otherwise) that they will remain theories of the MP world only. IOW, the MP world is “closed on itself”: MP theories do not naturally say anything about the existence or properties of conscious experience such as that of color.
MP models, and the sum of all information that can be gotten from the MP world, provide a complete description of the behavior of brains and other embodiments of “minds”. IOW, Descartes’ dualism is false: there is no extra-physical “soul” agent violating the MP world’s internal causality.
MP models of brains have a one-to-one correspondence to all the conscious states, feelings and experiences of the minds in those brains. Your experience of green can be identified with some brain-state, and occurs whenever that brain state arises, and only then. (By item 2, this is unfalsifiable.)
Do you disagree with any of this?
If you accept this hypothesis, then it follows that it is impossible to say anything about conscious mind states. There are no experiments, observations, or tools that could tell us anything about them, even in principle. You can build a model of your own consciousness if you like, but it will be based entirely on introspection, and we will be able to achieve similar results by building the mirror model in MP terms of your brain-states.
Now, it’s possible that future discoveries will refute part of this hypothesis—by leading to such complex or weird MP theories that it would be easier to postulate Cartesian dualism, for instance. But until that occurs, our subjective experiences cannot be grounds for declaring MP theories incomplete. They are apparently complete as regards the MP world.
When you say that the color green “exists”, or that your experience of green “exists”, this is misleading. It is not the same sense of “exists” as in “this apple exists”. I’m not denying the “existence” of your, er, qualia, but we should not use the same word or infer the same qualities that MP existing objects have.
I agree with your interpretation of our current physical and experiential evidence. I believe the perceived dualistic problem arises from imperfections in our current modeling of brain states and from how little control we have over our own. We cannot easily simulate experiential brain states, reconfigure our own brains to match, and try them out ourselves. We cannot make adjustments of these states on a continuum that would allow us to say physical state A corresponds exactly to experience B and here’s the math. We cannot create experience on a machine and have it tell us that it is experiencing. Without internal access to our source-code, our experiences come into our consciousness fully formed and appear magical.
That being said, the blunt tools we do have—descriptions of others’ experiences, drugs, brain stimulation, fMRI, and psychophysics—do seem to indicate that experience follows directly from physical states of the brain without the need for a dualist explanation. Perhaps the problem will dissolve itself once uploading is possible and individual experiences are more tradeable and malleable.
I certainly think about things differently:
1′. There is a world, which includes subjective experiences, and (presumably) things which are not subjective experiences.
2′. All information I have about the world, including the subjective experiences of other people, comes through my subjective experiences.
3′. I possess mathematical/physical theories which appear adequate to describe much of the posited world to varying degrees, but which do not refer to subjective experiences.
4′. Subjective experiences are causally consequential; they are affected by sensation and they affect behavior, among other things.
5′. The way the world actually is and the way the world actually works is a little more complicated than any theory I currently possess.
This is really frustrating. When you ask questions of us who disagree with you, we tend to say “I don’t think the question is well posed”. But when we ask questions of you, you won’t say yes, or no, or explicitly reject the question—you just return to your own questions. If you don’t think the questions you’re being asked are well-posed enough to answer, could you say more about why? Otherwise we’re not engaging, we’re just talking past each other.
It can take a long time to say what the problem is. I just spent several hours trying to do this in Dan’s case, and I’m not sure I succeeded. The questions aren’t ill-posed, but the whole starting point was problematic. In effect I wanted to demonstrate the possibility of an alternative starting point. Dan managed to respond, and now I have responded to that, and even this comment of yours contributed, but it took a lot of time and consideration of context even to produce an imperfect further reply. It’s a tradeoff between responding adequately and responding promptly. There’s been an improvement in communication since last time, but it can still get better.
I respect that.
Clarify 5′, please: do you intend to say that the base rules of the world are more complicated than current physics—e.g. the way a creature on a Conway’s Game of Life board might say, “I know that any live cell with two or three live neighbours lives on to the next generation, but I’m missing how cells become live”?
The basic ingredients and their modes of combination (not interaction, but things like part-whole relations) need to be different. See “descriptions of the second type” and “I want to be a monist”.
What are “part-whole relations”? That doesn’t sound like a natural category in physics.
In physics, if A is part of B, it means A is a spatial part of B. I think the “parts” of a conscious experience are parts of it in some other way. I say this very metaphorically, and only metaphorically, but it’s more like the way that polyhedra have faces. The components of a conscious experience, I would think, don’t even occur independently of conscious experiences.
There’s a whole sub-branch of ontology concerning part-whole relations, called mereology. It potentially encompasses not only spatial parts, but also subsets, “logical parts”, “metaphysical parts” (e.g. the property is part of the thing with the property), the “organic wholes” of various holisms, and so on. Of course, this is philosophy, so you have people with sparse ontologies, who think most of this is not really real, and then you have the people who are realists about various abstract or exotic relations.
I think I’ve invented a name for my own ontological position, by the way—reverse monism. I’ll have to explain what that means somewhere…
Before I respond to this: how much physics have you studied? Just high school, or the standard three semesters of college work? How well did you do in those classes? Have you read any popular-science discussions of physics, etc. outside of the classes you took? Have you studied any particular field of physics-related problems (e.g. materials science/engineering)?
I’m asking this because your discussion of part-whole relations doesn’t sound like something a scientist would invoke. If you are an expert, I’ll back off, but I have to wonder if you’ve ever used Newton’s Laws on a deeper level than cannonballs fired off cliffs.
I come from theoretical physics. I’ve trashed my career several times over, but I’ve always remained engaged with the culture. However, I’ve also studied philosophy, and that’s where all this talk of ontology comes from.
Fair enough—I will read through the thread and make a new response.
Can you explain that in terms of physics? As I understand it, ‘part-whole relations’ are never explicitly described in the models; they are only implicit in the solutions to common special cases. For example, quantum mechanics includes no description of temperature; we derive temperature from quantum mechanics through statistical mechanics, without ever invoking additional laws.
Certainly there’s no fundamental physical law which talks about part-whole relations. “Spatial part” is a higher-order concept. But it’s still an utterly basic one. If I say “the proton is part of that nucleus”, that’s a physically meaningful statement.
We might have avoided this digression if, instead of part-whole relations, I’d mentioned something like “spatial and temporal adjacency” as an example of the “modes of combination” of fundamental entities which exist in physical ontology. If you take the basic physical reality to be “something at a point in space-time” (where something might be a particle or a bit of field), and then say, how do I conceptually build up bigger, more complicated things? - you do that by putting other somethings at the space-time points “next door”—locations adjacent in space, or upstream/downstream in time.
There are other perspectives on how to make complexity out of simplicity in physics. A more physical perspective would look at interaction, and separate objects becoming dynamically bound in some way. This is the basis of Mario Bunge’s philosophy of systems (Bunge was a physicist before he became a philosopher); it’s causal interaction which binds subsystems into systems.
So, trying to sum up, we can say that the modes of combination of basic entities in physics have a non-causal aspect—connectedness, being next to each other, in space and time—and a causal aspect—interaction, the state of one affecting the state of another. And these aspects are even related, in that spatiotemporal proximity is required for causal interaction to occur.
Finally, returning to your question—how do I expect physical ontology to change—part of the answer is that I expect the elementary non-causal bindings between things to include options besides spatial adjacency. Spatial proximity builds up spatial geometry and spatially extended objects. I think there will be ontological complexes where the relational glue is something other than space, that conscious states are an instance of this, and that such complexes show up in our present physics in the form of entanglement. Going back to the language of monads—spatial relations are inter-monadic, but intra-monadic relations will be something else.
1′. There is a world, which includes subjective experiences, and (presumably) things which are not subjective experiences.
Let’s talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only “presumable”? Do you have in mind an experiment to falsify it?
2′. All information I have about the world, including the subjective experiences of other people, comes through my subjective experiences.
And these subjective experiences are all caused by, and contain the same information as, objective events in the MP world. Therefore all information you have about the MP world is also contained in the MP world. Do you agree?
3′. I possess mathematical/physical theories which appear adequate to describe much of the posited world to varying degrees, but which do not refer to subjective experiences.
Do you agree with my expectation that even with future refinements of these theories, the MP world’s theories will remain “closed on MP-ness” and are not likely to lead to descriptions of subjective experiences?
4′. Subjective experiences are causally consequential; they are affected by sensation and they affect behavior, among other things.
Sensation and behaviour are MP, not subjective.
Each subjective experience has an objective, MP counterpart which ultimately contains the same information (expanding on my point (2)). They have the same correlations with other events, and the same causative and explanatory power, as the subjective experiences they cause (or are identical to). Therefore, in a causal theory, it is possible to assign causative power only to MP phenomena without loss of explanatory power. Such a theory is better, because it’s simpler and also because we have theories of physics to account for causation, but we cannot account for subjective phenomena causing MP events.
Do you agree with the above?
I can put this another way, as per my item (5): to say that sensation affects (or causes) subjective experience is to imply the logical possibility of a counterfactual world where sensation affects experience differently or not at all. However, if we define sensation as the total of all relevant MP events—the entire state of your brain when sensing something—then I claim that sensation cannot, logically, lead to any subjective experience different from the one it does lead to. IOW, sensation does not cause experience, it is identical with experience.
This theory appears consistent with all we know to date. Do you expect it to be falsified in the future?
5′. The way the world actually is and the way the world actually works is a little more complicated than any theory I currently possess.
This doesn’t seem related to my own item (5), so please respond to that as well—do you agree with it?
As for your response, I agree that our MP theories are incomplete. Do you think that more complete theories would not, or could not, remain restricted to the MP world? (item 3)
I think I must try one more largely indirect response and see if that leaves anything unanswered.
Reality consists, at least in part, of entities in causal interaction. There will be some comprehensive and correct description of this. Then, there will be descriptions which leave something out. For example, descriptions which say nothing about the states of the basic entities beyond assigning each state a label, and which then describe those causal interactions in terms of state labels. The fundamental theories we have are largely of this second type. The controversial aspects of consciousness are precisely those aspects which are lost in passing to a description of the second type. These aspects of consciousness are not causally inert, or else conscious beings wouldn’t be able to notice them and remark upon them; but again, all the interesting details of how this works are lost in the passage to a description of the second type, which by its very nature can only describe causality in terms of arbitrary laws acting on entities whose natures and differences have been reduced to a matter of labels.
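To make “descriptions of the second type” concrete, here is a minimal toy sketch (my own illustration, with made-up state names, not anything proposed in this discussion): the dynamics is completely specified as a rule over bare labels, while nothing in the description says what being in any of those states is like, or what the states intrinsically are.

```python
# A toy "description of the second type": states are opaque labels,
# and the theory is nothing more than an arbitrary law mapping labels to labels.
transition = {"A": "B", "B": "C", "C": "A"}

def evolve(state: str, steps: int) -> str:
    """Apply the labelled dynamics; the natures of A, B, C never enter."""
    for _ in range(steps):
        state = transition[state]
    return state

print(evolve("A", 5))  # -> "C": the causal pattern is fully captured, intrinsic character absent
```

Whatever A, B and C actually are (field configurations, brain states, color sensations) is invisible to a description of this kind; only the pattern of succession survives.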
What you call “MP theories” only employ these inherently incomplete descriptions. However, these theories are causally closed. So, even though we can see that they are ontologically incomplete, people are tempted to think that there is no need to expand the ontology; we just need to find a way to talk about life and everything we want to explain in terms of the incomplete ontology.
Since ontological understandings can develop incrementally, in practice such a program might develop towards ontologically complete theories anyway, as people felt the need to expand what they mean by their concepts. But that’s an optimistic interpretation, and clearly a see-no-evil approach also has the potential to delay progress.
Let’s talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only “presumable”? Do you have in mind an experiment to falsify it?
I have trouble calling this a “world”. The actual world contains consciousness. We can talk about the parts of the actual world that don’t include consciousness. We can talk about the actual world, described in some abstract way which just doesn’t mention consciousness. We can talk about a possible world that doesn’t contain consciousness.
But the way you set things up, it’s as if you’re inviting me to talk about the actual world, using a theoretical framework which doesn’t mention consciousness, and in a way which supposes that consciousness also plays no causal role. It just seems the maximally unproductive way to proceed. Imagine if we tried to talk about gravity in this way: we assume models which don’t contain gravity, and we try to talk about phenomena as if there was no such thing as gravity. That’s not a recipe for understanding gravity, it’s a recipe for entirely dispensing with the concept of gravity. Yet it doesn’t seem you want to do without the concept of consciousness. Instead you want to assume a framework in which consciousness does not appear and plays no role, and then deduce consequences. Given that starting point, it’s hardly surprising that you then reach conclusions like “it is impossible to say anything about conscious mind states”. And yet we all do every day, so something is wrong with your assumptions.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory—“each subjective experience has an objective, MP counterpart”, “sensation … is identical with experience”. So I’m a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don’t understand why it feels like anything?
Originally you said:
All your questions come down to: why does our existence feel like something?
but that’s not quite right. The feel of existence is not an ineffable thing about which nothing more can be said, except that it’s “something”. To repeat my current list of problem items, experience includes colors, meanings, time, and a sort of unity. Each one poses a concrete problem. And for each one, we do have some sort of phenomenological access to the thing itself, which permits us to judge whether a given ontological account answers the problem or not. I’m not saying such judgments are infallible or even agreed upon, just that we do possess the resources to bring subjective ontology and physical ontology into contact, for comparison and contrast.
Does this make things any clearer? You are creating problems (impossibility of knowledge of consciousness) and limitations (future theories won’t contain descriptions of subjective experience) by deciding in advance to consider only theories with the same general ontology we have now. Meanwhile, on the side you make a little progress by deciding to think about consciousness as a causal element after all, but then you handicap this progress by insisting on switching back to the no-consciousness ontology as soon as possible.
As a footnote, I would dispute that sensation and behavior, as concepts, contain no reference to subjectivity. A sensation was originally something which occurred in consciousness. A behavior was an act of an organism, partly issuing from its mental state. They originally suppose the ontology of folk psychology. It is possible to describe a behavior without reference to mental states, and it is possible to define sensation or behavior analogously, but to judge whether the entities picked out by such definitions really deserve those names, you have to go back to the mentalistic context in which the words originated and see if you are indeed talking about the same thing.
What you call “MP theories” only employ these inherently incomplete descriptions. However, these theories are causally closed.
If they are causally closed, then our conscious experience cannot influence our behaviour. Then our discussion about consciousness is logically and causally unconnected to the fact of our consciousness (the zombie objection). This contradicts what you said earlier, that
These aspects of consciousness are not causally inert
So which is correct?
Also, I don’t understand your distinction between the two types of theories or of phenomena. Leaving causality aside, what do you mean by:
descriptions which say nothing about the states of the basic entities beyond assigning each state a label
If those entities are basic, then they’re like electrons—they can’t be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
Given that starting point, it’s hardly surprising that you then reach conclusions like “it is impossible to say anything about conscious mind states”. And yet we all do every day, so something is wrong with your assumptions.
About-ness is tricky.
If consciousness is acausal and not logically necessary, then zombies would talk about it too, so the fact that we talk about it, and anything we say about it, proves nothing.
If consciousness is acausal but logically necessary, then the things we actually say about it may not be true, due to acausality, and it’s not clear how we can check if they’re true or not (I see no reason to believe in free will of any kind).
Finally, if consciousness is causal, then we should be able to have causally-complete physical theories that include it. But you agree that the “MP theories” that don’t include consciousness are causally closed.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory—“each subjective experience has an objective, MP counterpart”, “sensation … is identical with experience”. So I’m a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don’t understand why it feels like anything?
Here’s what I meant. If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.
I emphatically do not know what consciousness is ontologically. I think understanding this question (which may or may not be legitimate) is half the problem.
All I know is that I feel things, experience things, and I have no idea how to treat this ontologically. Part of the reason is a clash of levels: the old argument that all my knowledge of physical laws and ontology etc. is a part of my experience, so I should treat experience as primary.
The feel of existence is not an ineffable thing about which nothing more can be said, except that it’s “something”.
I said that “all your questions come down to, why does our existence feel like something? and why does it feel the way it does?”
You focus on the second question—you consider different (counterfactual) possible experiences. When you ask why we experience colors, you’re implicitly adding “why colors rather than something else?” But to me that kind of question seems meaningless because we can’t even answer the more fundamental question of “why do we experience anything at all?”
The core problem is that we can’t imagine or describe lack-of-experience. This is just another way of saying we can’t describe what experience is except by appealing to shared experience.
If we encountered aliens (or some members of LW) and they simply had no idea what we were talking about when we discussed conscious experience—there’s nothing we could say to explain to them what it is, much less why it’s a Hard Problem.
If they [MP theories] are causally closed, then our conscious experience cannot influence our behaviour.
There are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell’s equations in a vacuum are causally closed, but that theory doesn’t even describe atoms, let alone consciousness.
The other possibility (much more relevant) is that you have an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
If those entities are basic, then they’re like electrons—they can’t be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
Let’s take a specific aspect of conscious experience—color vision. For the sake of argument (since the reality is much more complicated than this), let’s suppose that the totality of conscious visual sensation at any time consists of a filled disk, at every point in which there is a particular shade of color. If an individual shade of color is completely specified by hue, saturation, and intensity, then you could formally represent the state of visual sensory consciousness by a 3-vector-valued function defined on the unit disk in the complex plane.
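As a minimal sketch of that toy formalism (the particular function below is a made-up placeholder, not a claim about real color experience), the momentary state of visual consciousness could be coded as a map from points of the unit disk to (hue, saturation, intensity) triples:

```python
import numpy as np

def visual_state(z: complex) -> np.ndarray:
    """Toy stand-in for the 3-vector-valued function on the unit disk:
    hue varies with angle, saturation with radius, intensity is constant."""
    if abs(z) > 1:
        raise ValueError("z must lie in the unit disk")
    hue = (np.angle(z) % (2 * np.pi)) / (2 * np.pi)   # in [0, 1)
    saturation = abs(z)                                # in [0, 1]
    intensity = 0.5
    return np.array([hue, saturation, intensity])

print(visual_state(0.3 + 0.4j))  # the "shade of color" at one point of the disk
```

The point of the sketch is only that such a state description is formally unremarkable; nothing in it announces that it is about color sensations.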
Now suppose you had a physical theory in which an entity with such a state description is part of cause and effect within the brain. It would be possible to study that theory, and understand it, without knowing that the entity in question is the set of all current color sensations. Alternatively, the theory could be framed that way—as being about color, etc—from the beginning.
What’s the difference between the two theories, or two formulations of the theory? Much and maybe all of it would come back to an understanding of what the terms of the theory refer to. We do have a big phenomenological vocabulary, whose meanings are ultimately grounded in personal experience, and it seems that to fully understand a hypothetical MP theory containing consciousness, you have to link the theoretical terms with your private phenomenological vocabulary, experience, and understanding. Otherwise, there will only be an incomplete, more objectified understanding of what the theory is about, grounded only in abstraction and in the world-of-external-things-in-space part of experience.
Of course, you could arrive at a theory which you initially understood only in the objectified way, but then you managed to make the correct identifications with subjective experience. That’s what proposals for neural correlates of consciousness (e.g. Drescher’s gensyms) are trying to do. When I criticize these proposals, it’s not because I object in principle to proceeding that way, but because of the details—I don’t believe that the specific candidates being offered have the right properties for them to be identical with the elements of consciousness in question.
If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.
I don’t think we need to further consider epiphenomenalism, unless you have a special reason to do so. Common sense tells us that experiences are both causes and effects, and that a psychophysical identity theory is the sort of theory of consciousness we should be seeking. I just think that the thing on the physical end of the identity relationship is not at all what people expect.
You close out with the question of “why do we experience anything at all?” That is going to be a hard problem, but maybe we need to be able to say what we feel and what feeling is before we can say why feeling is.
Consider the similar question, “why is there something rather than nothing?” I think Heidegger, for one, was very interested in this question. But he ended up spending his life trying to make progress on what existence is, rather than why existence is.
I like to think that “reverse monism” is a small step in the right direction, even regarding the question “why is there experience”, because it undoes one mode of puzzlement: the property-dualistic one which focuses on the objectified understanding of the MP theory, and then says “why does the existence of those objects feel like something”. If you see the relevant part of the theory as simply being about those feelings to begin with, then the question should collapse to “why do such things exist” rather than “why do those existing things feel like something”. Though that is such a subtle difference, that maybe it’s no difference at all. Mostly, I’m focused on the concrete question of “what would physics have to be like for a psychophysical identity theory to be possible?”
Apologies for the late and brief reply. My web presence has been and will continue to be very sporadic for another two weeks.
there are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell’s equations in a vacuum are causally closed, but that theory doesn’t even describe atoms, let alone consciousness.
If it was wrong, how could it be causally closed? No subset of our physical theories (such as Maxwell’s equations) is causally disconnected from the rest of them. They all describe common interacting entities.
The other possibility [....] an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
Our MP theory has a short closed list of fundamental entities and forces which are allowed to be causative. Consciousness definitely isn’t one of these.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to “consciousness”. And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is “consciousness”?
You close out with the question of “why do we experience anything at all?” That is going to be a hard problem
It needn’t be. Here’s a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
Relevant facts: we believe in and report conscious experience even though we can’t define in words what it is or what its absence would be like. (Sounds like a mental glitch to me.) This self-reporting falls apart when you look at the brain closely: experiences, actions, etc. are not only spatially but also temporally distributed (as they must be), yet people discussing consciousness try to explain our innate feelings rather than build a theory on those facts—IOW, without the innate feeling we wouldn’t even be talking about this. Different people vary in their level of support for this idea, and rational argument (as in this discussion) is weak at changing it. And we know our cognitive architecture reliably gives rise to certain ideas and behaviors common to practically every culture: e.g. belief in spirits, gods, or an afterlife.
Here’s one possible mechanism, too: our cognitive architecture makes us regularly think “I am conscious!”. Repeated thoughts, with nothing opposing them (at younger ages at least), become belief (ref: people brought up to believe anything not frowned upon by society tend to keep believing it).
Causal closure in a theory is a structural property of the theory, independent of whether the theory is correct. We are probably not living in a Game-of-Life cellular automaton, but you can still say that the Game of Life is causally closed.
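A minimal sketch of that structural sense of closure, using the Game of Life rule itself: the next state is computed from the current state and the fixed rule alone, with no term referring to anything outside the model, regardless of whether our world actually works this way.

```python
from collections import Counter

def step(live: set) -> set:
    """One Game of Life update: depends only on the current set of live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker))        # the vertical phase of the oscillator
print(step(step(blinker)))  # back to the original three cells
```

The closure here is a property of the rule’s form, not of its truth about our world.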
Consider the Standard Model of particle physics. It’s an inventory of fundamental particles and forces and how they interact. As a model it’s causally closed in the sense of being self-sufficient. But if we discover a new particle (e.g. a supersymmetric partner), it will have been incomplete and thus “wrong”.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to “consciousness”. And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is “consciousness”?
I totally agree that good definitions are important, and would be essential in justifying the identification of a theoretical C-term or property with consciousness. For example, one ambiguity I see coming up repeatedly in discussions of consciousness is whether only “self-awareness” is meant, or all forms of “awareness”. It takes time and care to develop a shared language and understanding here.
However, there are two paths to a definition of consciousness. One proceeds through your examination of your own experience. So I might say: “You know how sometimes you’re asleep and sometimes you’re awake, and how the two states are really different? That difference is what I mean by consciousness!” And then we might get onto dreams, and how dreams are a form of consciousness experienced during sleep, and so the starting point needs to be refined. But we’d be on our way down one path.
The other path is the traditional scientific one, and focuses on other people, and on treating them as objects and as phenomena to be explained. If we talk about sleep and wakefulness here, we mean states exhibited by other people, in which certain traits are observed to co-occur: for example, lying motionless on a bed, breathing slowly and regularly, and being unresponsive to mild stimuli, versus moving around, making loud structured noises, and responding in complex ways to stimuli. Science explains all of that in terms of physiological and cognitive changes.
So this is all about the relationship between the first and second paths of inquiry. If on the second path we find nothing called consciousness, that presents one sort of problem. If we do find, on the second path, something we wish to call consciousness, that presents a different problem and a lesser problem, namely, what is its relationship to consciousness as investigated in the first way? Do the two accounts of consciousness match up? If they don’t, how is that to be resolved?
These days, I think most people on the second path do believe in something called consciousness, which has a causal and explanatory role, but they may disagree with some or much of what people on the first path say about it. In that situation, you only face the lesser problem: you agree that consciousness exists, but you have some dispute about its nature. (Of course, the followers of the two paths have their internal disagreements, with their peers, as well. We are not talking about two internally homogeneous factions of opinion.)
Here’s a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
If you want to deny that there actually is any such thing as consciousness (saying that there is only a belief in it), you’ll need to define your terms too. It may be that you are not denying consciousness as such, just some particular concept of it. Let’s start with the difference between sleep and wakefulness. Do you agree that there is a subjective difference there?
All your questions come down to: why does our existence feel like something? Why is there subjective, personal, conscious experience? And why does it feel the way it does and not some other way?
In the following, I assume that your position about qualia deserving an explanation is correct. I don’t have a fully formed opinion yet myself—I defer an explanation—but here’s what I came up with by assuming your position.
First, I propose that we both accept the Materialistic Hypothesis as regards minds. In the following text I will use the abbreviation MP for the materialistic, physical world. My formulation of the hypothesis states:
There is an MP world which is objective, common to everyone, and exists independently of our conscious, subjective experiences.
All information I have about the experiences (e.g. of color) of others is part of the MP world. I can receive such information only through MP means, via the self-reporting of others. I cannot in any way experience or inspect their experiences directly, in the way I have my own experience, or in some third non-materialistic way. Symmetrically, I can only provide information to anyone else about my own experiences via the MP world. (I am not special.)
If we ignore subjective/conscious experience, our physical theories are a complete description of the MP world. They may be modified, extended or refined in the future, but it is reasonable to assume (until shown otherwise) that they will remain theories of the MP world only. IOW, the MP world is “closed on itself”: MP theories do not naturally say anything about the existence or properties of conscious experience such as that of color.
MP models, and the sum of all information that can be gotten from the MP world, provides a complete description of the behavior of brains and other embodiments of “minds”. IOW, Descartes’ dualism is false: there is no extra-physical “soul” agent violating the MP world’s internal causality.
MP models of brains have a one-to-one correspondence to all the conscious states, feelings and experiences of the minds in those brains. Your experience of green can be identified with some brain-state, and occurs whenever that brain state arises, and only then. (By item 2, this is unfalsifiable.)
Do you disagree with any of this?
If you accept this hypothesis, then it follows that it is impossible to say anything about conscious mind states. There are no experiments, observations, or tools that could tell us anything about them, even in principle. You can build a model of your own consciousness if you like, but it will be based entirely on introspection, and we will be able to achieve similar results by building the mirror model in MP terms of your brain-states.
Now, it’s possible that future discoveries will refute part of this hypothesis—by leading to such complex or weird MP theories that it would be easier to postulate Descartian dualism, for instance. But until that occurs, our subjective experiences cannot be grounds for declaring MP theories incomplete. They are apparently complete as regards the MP world.
When you say that the color green “exists”, or that your experience of green “exists”, this is misleading. It is not the same sense of “exists” as in “this apple exists”. I’m not denying the “existence” of your, er, qualia, but we should not use the same word or infer the same qualities that MP existing objects have.
I agree with your interpretation of our current physical and experiential evidence. I believe the perceived dualistic problem arises from imperfections in our current modeling of brain states and control of our own. We cannot easily simulate experiential brain states, reconfigure our own brains to match, and try them out ourselves. We cannot make adjustments of these states on a continuum that would allow us to say physical state A corresponds exactly to experience B and here’s the math. We cannot create experience on a machine and have it tell us that it is experiencing. Without internal access to our source-code, our experiences come into our consciousness fully formed and appear magical.
That being said, the blunt tools we do have—descriptions of other’s experiences, drugs, brain stimulation, fMRI, and psychophysics—do seem to indicate that experience follows directly from physical states of the brain without the need for a dualist explanation. Perhaps the problem will dissolve itself once uploading is possible and individual experiences are more tradeable and malleable.
I certainly think about things differently:
1′. There is a world, which includes subjective experiences, and (presumably) things which are not subjective experiences.
2′. All information I have about the world, including the subjective experiences of other people, comes through my subjective experiences.
3′. I possess mathematical/physical theories which appear adequate to describe much of the posited world to varying degrees, but which do not refer to subjective experiences.
4′. Subjective experiences are causally consequential; they are affected by sensation and they affect behavior, among other things.
5′. The way the world actually is and the way the world actually works is a little more complicated than any theory I currently possess.
This is really frustrating. When you ask questions of us who disagree with you, we tend to say “I don’t think the question is well posed”. But when we ask questions of you, you won’t say yes, or no, or explicitly reject the question—you just return to your own questions. If you don’t think the questions you’re being asked are well-posed enough to answer, could you say more about why? Otherwise we’re not engaging, we’re just talking past each other.
It can take a long time to say what the problem is. I just spent several hours trying to do this in Dan’s case, and I’m not sure I succeeded. The questions aren’t ill-posed, but the whole starting point was problematic. In effect I wanted to demonstrate the possibility of an alternative starting point. Dan managed to respond, and now I to that, and even this comment of yours contributed, but it took a lot of time and consideration of context even to produce an imperfect further reply. It’s a tradeoff between responding adequately and responding promptly. There’s been an improvement in communication since last time, but it can still get better.
I respect that.
Clarify 5′, please: do you intend to say that the base rules of the world are more complicated than the current physics—e.g. how a creature in a Conway’s Game of Life board might say, “I know that any live cell with two or three live neighbours lives on to the next generation, but I’m missing how cells become live”?
The basic ingredients and their modes of combination (not interaction, but things like part-whole relations) need to be different. See descriptions of the second type and I want to be a monist.
What are “part-whole relations”? That doesn’t sound like a natural category in physics.
In physics, if A is part of B, it means it’s a spatial part. I think the “parts” of a conscious experience are part of it in some other way. I say this very metaphorically, and only metaphorically, but it’s more like the way that polyhedra have faces. The components of a conscious experience, I would think, don’t even occur independently of conscious experiences.
There’s a whole sub-branch of ontology concerning part-whole relations, called mereology. It potentially encompasses not only spatial parts, but also subsets, “logical parts”, “metaphysical parts” (e.g. the property is part of the thing with the property), the “organic wholes” of various holisms, and so on. Of course, this is philosophy, so you have people with sparse ontologies, who think most of this is not really real, and then you have the people who are realists about various abstract or exotic relations.
I think I’ve invented a name for my own ontological position, by the way—reverse monism. I’ll have to explain what that means somewhere…
Before I respond to this: how much physics have you studied? Just high school, or the standard three semesters of college work? How well did you do in those classes? Have you read any popular-science discussions of physics, etc. outside of the classes you took? Have you studied any particular field of physics-related problems (e.g. materials science/engineering)?
I’m asking this because your discussion of part-whole relations doesn’t sound like something a scientist would invoke. If you are an expert, I’ll back off, but I have to wonder if you’ve ever used Newton’s Laws on a deeper level than cannonballs fired off cliffs.
I come from theoretical physics. I’ve trashed my career several times over, but I’ve always remained engaged with the culture. However, I’ve also studied philosophy, and that’s where all this talk of ontology comes from.
Fair enough—I will read through the thread and make a new response.
Can you explain that in terms of physics? According to my understanding, ‘part-whole relations’ are never explicitly described in the models; only implicit in the solution to the common special cases. For example, quantum mechanics includes no description of temperature; we prove temperature in quantum mechanics through statistical mechanics, without ever invoking additional laws.
Certainly there’s no fundamental physical law which talks about part-whole relations. “Spatial part” is a higher-order concept. But it’s still an utterly basic one. If I say “the proton is part of that nucleus”, that’s a physically meaningful statement.
We might have avoided this digression if, instead of part-whole relations, I’d mentioned something like “spatial and temporal adjacency” as an example of the “modes of combination” of fundamental entities which exist in physical ontology. If you take the basic physical reality to be “something at a point in space-time” (where something might be a particle or a bit of field), and then say, how do I conceptually build up bigger, more complicated things? - you do that by putting other somethings at the space-time points “next door”—locations adjacent in space, or upstream/downstream in time.
There are other perspectives on how to make complexity out of simplicity in physics. A more physical perspective would look at interaction, and separate objects becoming dynamically bound in some way. This is the basis of Mario Bunge’s philosophy of systems (Bunge was a physicist before he became a philosopher); it’s causal interaction which binds subsystems into systems.
So, trying to sum up, we can say that the modes of combination of basic entities in physics have a non-causal aspect—connectedness, being next to each other, in space and time—and a causal aspect—interaction, the state of one affecting the state of another. And these aspects are even related, in that spatiotemporal proximity is required for causal interaction to occur.
Finally, returning to your question—how do I expect physical ontology to change—part of the answer is that I expect the elementary non-causal bindings between things to include options besides spatial adjacency. Spatial proximity builds up spatial geometry and spatially extended objects. I think there will be ontological complexes where the relational glue is something other than space, that conscious states are an instance of this, and that such complexes show up in our present physics in the form of entanglement. Going back to the language of monads—spatial relations are inter-monadic, but intra-monadic relations will be something else.
Let’s talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only “presumable”? Do you have in mind an experiment to falsify it?
And these subjective experiences are all caused by, and contain the same information as, objective events in the MP world. Therefore all information you have about the MP world is also contained in the MP world. Do you agree?
Do you agree with my expectation that even with future refinements of these theories, the MP world’s theories will remain “closed on MP-ness” and are not likely to lead to descriptions of subjective experiences?
Sensation and behaviour are MP, not subjective.
Each subjective experience has an objective, MP counterpart which ultimately contains the same information (expanding on my point (2)). They have the same correlations with other events, and the same causative and explanatory power, as the subjective experiences they cause (or are identical to). Therefore, in a causal theory, it is possible to assign causative power only to MP phenomena without loss of explanatory power. Such a theory is better, because it’s simpler and also because we have theories of physics to account for causation, but we cannot account for subjective phenomena causing MP events.
Do you agree with the above?
I can put this another way, as per my item (5): to say that sensation affects (or causes) subjective experience is to imply the logical possibility of a counterfactual world where sensation affects experience differently or not at all. However, if we define sensation as the total of all relevant MP events—the entire state of your brain when sensing something—then I claim that sensation cannot, logically, lead to any subjective experience different from the one it does lead to. IOW, sensation does not cause experience, it is identical with experience.
This theory appears consistent with all we know to date. Do you expect it to be falsified in the future?
This doesn’t seem related to my own item (5), so please respond to that as well—do you agree with it?
As for your response, I agree that our MP theories are incomplete. Do you think that more complete theories would not, or could not, remain restricted to the MP world? (item 3)
I think I must try one more largely indirect response and see if that leaves anything unanswered.
Reality consists, at least in part, of entities in causal interaction. There will be some comprehensive and correct description of this. Then, there will be descriptions which leave something out. For example, descriptions which say nothing about the states of the basic entities beyond assigning each state a label, and which then describe those causal interactions in terms of state labels. The fundamental theories we have are largely of this second type. The controversial aspects of consciousness are precisely those aspects which are lost in passing to a description of the second type. These aspects of consciousness are not causally inert, or else conscious beings wouldn’t be able to notice them and remark upon them; but again, all the interesting details of how this works are lost in the passage to a description of the second type, which by its very nature can only describe causality in terms of arbitrary laws acting on entities whose natures and differences have been reduced to a matter of labels.
What you call “MP theories” only employ these inherently incomplete descriptions. However, these theories are causally closed. So, even though we can see that they are ontologically incomplete, people are tempted to think that there is no need to expand the ontology; we just need to find a way to talk about life and everything we want to explain in terms of the incomplete ontology.
Since ontological understandings can develop incrementally, in practice such a program might develop towards ontologically complete theories anyway, as people felt the need to expand what they mean by their concepts. But that’s an optimistic interpretation, and clearly a see-no-evil approach also has the potential to delay progress.
I have trouble calling this a “world”. The actual world contains consciousness. We can talk about the parts of the actual world that don’t include consciousness. We can talk about the actual world, described in some abstract way which just doesn’t mention consciousness. We can talk about a possible world that doesn’t contain consciousness.
But the way you set things up, it’s as if you’re inviting me to talk about the actual world, using a theoretical framework which doesn’t mention consciousness, and in a way which supposes that consciousness also plays no causal role. It just seems the maximally unproductive way to proceed. Imagine if we tried to talk about gravity in this way: we assume models which don’t contain gravity, and we try to talk about phenomena as if there was no such thing as gravity. That’s not a recipe for understanding gravity, it’s a recipe for entirely dispensing with the concept of gravity. Yet it doesn’t seem you want to do without the concept of consciousness. Instead you want to assume a framework in which consciousness does not appear and plays no role, and then deduce consequences. Given that starting point, it’s hardly surprising that you then reach conclusions like “it is impossible to say anything about conscious mind states”. And yet we all do every day, so something is wrong with your assumptions.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory—“each subjective experience has an objective, MP counterpart”, “sensation … is identical with experience”. So I’m a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don’t understand why it feels like anything?
Originally you said:
but that’s not quite right. The feel of existence is not an ineffable thing about which nothing more can be said, except that it’s “something”. To repeat my current list of problem items, experience includes colors, meanings, time, and a sort of unity. Each one poses a concrete problem. And for each one, we do have some sort of phenomenological access to the thing itself, which permits us to judge whether a given ontological account answers the problem or not. I’m not saying such judgments are infallible or even agreed upon, just that we do possess the resources to bring subjective ontology and physical ontology into contact, for comparison and contrast.
Does this make things any clearer? You are creating problems (impossibility of knowledge of consciousness) and limitations (future theories won’t contain descriptions of subjective experience) by deciding in advance to consider only theories with the same general ontology we have now. Meanwhile, on the side you make a little progress by deciding to think about consciousness as a causal element after all, but then you handicap this progress by insisting on switching back to the no-consciousness ontology as soon as possible.
As a footnote, I would dispute that sensation and behavior, as concepts, contain no reference to subjectivity. A sensation was originally something which occurred in consciousness. A behavior was an act of an organism, partly issuing from its mental state. They originally suppose the ontology of folk psychology. It is possible to describe a behavior without reference to mental states, and it is possible to define sensation or behavior analogously, but to judge whether the entities picked out by such definitions really deserve those names, you have to go back to the mentalistic context in which the words originated and see if you are indeed talking about the same thing.
If they are causally closed, then our conscious experience cannot influence our behaviour. Then our discussion about consciousness is logically and causally unconnected to the fact of our consciousness (the zombie objection). This contradicts what you said earlier, that
So which is correct?
Also, I don’t understand your distinction between the two types of theories or of phenomena. Leaving casuality aside, what do you mean by:
If those entities are basic, then they’re like electrons—they can’t be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
About-ness is tricky.
If consciousness is acausal and not logically necessary, then zombies would talk about it too, so the fact that we talk and anything we say about it proves nothing.
If consciousness is acausal but logically necessary, then the things we actually say about it may not be true, due to acausality, and it’s not clear how we can check if they’re true or not (I see no reason to believe in free will of any kind).
Finally, if consciousness is causal, then we should be able to have causally-complete physical theories that include it. But you agree that the “MP theories” that don’t inculde concsiousness are causally closed.
Here’s what I meant. If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.
I emphatically do not know what consciousness is ontologically. I think understanding this question (which may or may not be legitimate) is half the problem.
All I know is that I feel things, experience things, and I have no idea how to treat this ontologically. Part of the reason is a clash of levels: the old argument that all my knowledge of physical laws and ontology etc. is a part of my experience, so I should treat experience as primary.
I said that “all your questions come down to, why does our existence feel like something? and why does it feel the way it does?”
You focus on the second question—you consider different (counterfactual) possible experiences. When you ask why we experience colors, you’re implicitly adding “why colors rather than something else?” But to me that kind of question seems meaningless because we can’t ask the more fundamental question of “why do we experience anything at all?”
The core problem is that we can’t imagine or describe lack-of-experience. This is just another way of saying we can’t describe what experience is except by appealing to shared experience.
If we encountered aliens (or some members of LW) and they simply had no idea what we were talking about when we discussed conscious experience—there’s nothing we could say to explain to them what it is, much less why it’s a Hard Problem.
There are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell’s equations in a vacuum are causally closed, but that theory doesn’t even describe atoms, let alone consciousness.
The other possibility (much more relevant) is that you have an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
Let’s take a specific aspect of conscious experience—color vision. For the sake of argument (since the reality is much more complicated than this), let’s suppose that the totality of conscious visual sensation at any time consists of a filled disk, with a particular shade of color at every point. If an individual shade of color is completely specified by hue, saturation, and intensity, then you could formally represent the state of visual sensory consciousness by a 3-vector-valued function defined on the unit disk in the complex plane.
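To make that representation concrete, here is a minimal sketch in Python, purely for illustration; the function name `visual_field_state`, the grid discretization, and the particular hue/saturation assignment are my own hypothetical choices, not part of any actual theory:

```python
import numpy as np

def visual_field_state(n=64):
    """Toy 'state of visual sensory consciousness' as described above:
    a 3-vector (hue, saturation, intensity) assigned to every sampled
    point of the unit disk in the complex plane."""
    # Sample an n x n grid over the square [-1, 1] x [-1, 1].
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y                      # points of the complex plane
    inside = np.abs(Z) <= 1.0           # keep only the unit disk

    # Hypothetical assignment: hue varies with angle, saturation with
    # radius, intensity held constant. Any HSV-valued function would do.
    hue = (np.angle(Z) / (2 * np.pi)) % 1.0
    sat = np.abs(Z)
    val = np.full_like(sat, 0.8)

    state = np.stack([hue, sat, val], axis=-1)   # shape (n, n, 3)
    return state, inside

state, mask = visual_field_state()
print(state.shape, int(mask.sum()), "points inside the disk")
```

The point of the sketch is only that such a state has a perfectly ordinary mathematical description; nothing in the array itself tells you that it is supposed to be the set of all current color sensations.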
Now suppose you had a physical theory in which an entity with such a state description is part of cause and effect within the brain. It would be possible to study that theory, and understand it, without knowing that the entity in question is the set of all current color sensations. Alternatively, the theory could be framed that way—as being about color, etc—from the beginning.
What’s the difference between the two theories, or two formulations of the theory? Much and maybe all of it would come back to an understanding of what the terms of the theory refer to. We do have a big phenomenological vocabulary, whose meanings are ultimately grounded in personal experience, and it seems that to fully understand a hypothetical MP theory containing consciousness, you have to link the theoretical terms with your private phenomenological vocabulary, experience, and understanding. Otherwise, there will only be an incomplete, more objectified understanding of what the theory is about, grounded only in abstraction and in the world-of-external-things-in-space part of experience.
Of course, you could arrive at a theory which you initially understood only in the objectified way, but then you managed to make the correct identifications with subjective experience. That’s what proposals for neural correlates of consciousness (e.g. Drescher’s gensyms) are trying to do. When I criticize these proposals, it’s not because I object in principle to proceeding that way, but because of the details—I don’t believe that the specific candidates being offered have the right properties for them to be identical with the elements of consciousness in question.
I don’t think we need to further consider epiphenomenalism, unless you have a special reason to do so. Common sense tells us that experiences are both causes and effects, and that a psychophysical identity theory is the sort of theory of consciousness we should be seeking. I just think that the thing on the physical end of the identity relationship is not at all what people expect.
You close out with the question of “why do we experience anything at all?” That is going to be a hard problem, but maybe we need to be able to say what we feel and what feeling is before we can say why feeling is.
Consider the similar question, “why is there something rather than nothing?” I think Heidegger, for one, was very interested in this question. But he ended up spending his life trying to make progress on what existence is, rather than why existence is.
I like to think that “reverse monism” is a small step in the right direction, even regarding the question “why is there experience”, because it undoes one mode of puzzlement: the property-dualistic one which focuses on the objectified understanding of the MP theory, and then says “why does the existence of those objects feel like something”. If you see the relevant part of the theory as simply being about those feelings to begin with, then the question should collapse to “why do such things exist” rather than “why do those existing things feel like something”. Though that is such a subtle difference that maybe it’s no difference at all. Mostly, I’m focused on the concrete question of “what would physics have to be like for a psychophysical identity theory to be possible?”
Apologies for the late and brief reply. My web presence has been and will continue to be very sporadic for another two weeks.
If it was wrong, how could it be causally closed? No subset of our physical theories (such as Maxwell’s equations) is causally disconnected from the rest of them. They all describe common interacting entities.
Our MP theory has a short closed list of fundamental entities and forces which are allowed to be causative. Consciousness definitely isn’t one of these.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to “consciousness”. And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is “consciousness”?
It needn’t be. Here’s a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
Relevant facts:
- We believe in and report conscious experience even though we can’t define in words what it is or what its absence would be like. (Sounds like a mental glitch to me.)
- This self-reporting falls apart when you look at the brain closely: experiences, actions, etc. turn out to be not only spatially but also temporally distributed (as they must be). Yet people discussing consciousness try to explain our innate feelings rather than build a theory on those facts. IOW, without the innate feeling we wouldn’t even be talking about this.
- Different people vary in their level of support for this idea, and rational argument (as in this discussion) is weak at changing it.
- We know our cognitive architecture reliably gives rise to some ideas and behaviors common to practically every culture: e.g. belief in spirits, gods, or an afterlife.
Here’s one possible mechanism, too: our cognitive architecture makes us regularly think “I am conscious!”. Repeated thoughts, with nothing opposing them (at younger ages at least), become beliefs (ref: people brought to believe anything not frowned upon by society tend to keep believing it).
Causal closure in a theory is a structural property of the theory, independent of whether the theory is correct. We are probably not living in a Game-of-Life cellular automaton, but you can still say that the Game of Life is causally closed.
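As a concrete illustration of that structural sense of closure, here is a toy Game of Life step in Python (a sketch of my own, not anything from the discussion above): the update rule mentions nothing outside the grid itself, so the model is self-sufficient regardless of whether our world resembles it.

```python
import numpy as np

def life_step(grid):
    """One Game of Life update. The next state is a function of the
    current grid alone -- no term in the rule refers to anything
    outside the model, which is the structural sense of 'causally closed'."""
    # Count live neighbours of every cell (toroidal wrap-around).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider evolving under the closed rule.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(int(grid.sum()))   # still 5 live cells: the glider has simply moved
```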
Consider the Standard Model of particle physics. It’s an inventory of fundamental particles and forces and how they interact. As a model it’s causally closed in the sense of being self-sufficient. But if we discover a new particle (say, a supersymmetric partner), the model will have been incomplete and thus “wrong”.
I totally agree that good definitions are important, and would be essential in justifying the identification of a theoretical C-term or property with consciousness. For example, one ambiguity I see coming up repeatedly in discussions of consciousness is whether only “self-awareness” is meant, or all forms of “awareness”. It takes time and care to develop a shared language and understanding here.
However, there are two paths to a definition of consciousness. One proceeds through your examination of your own experience. So I might say: “You know how sometimes you’re asleep and sometimes you’re awake, and how the two states are really different? That difference is what I mean by consciousness!” And then we might get onto dreams, and how dreams are a form of consciousness experienced during sleep, and so the starting point needs to be refined. But we’d be on our way down one path.
The other path is the traditional scientific one, and focuses on other people, and on treating them as objects and as phenomena to be explained. If we talk about sleep and wakefulness here, we mean states exhibited by other people, in which certain traits are observed to co-occur: for example, lying motionless on a bed, breathing slowly and regularly, and being unresponsive to mild stimuli, versus moving around, making loud structured noises, and responding in complex ways to stimuli. Science explains all of that in terms of physiological and cognitive changes.
So this is all about the relationship between the first and second paths of inquiry. If on the second path we find nothing called consciousness, that presents one sort of problem. If we do find, on the second path, something we wish to call consciousness, that presents a different and lesser problem, namely: what is its relationship to consciousness as investigated in the first way? Do the two accounts of consciousness match up? If they don’t, how is that to be resolved?
These days, I think most people on the second path do believe in something called consciousness, which has a causal and explanatory role, but they may disagree with some or much of what people on the first path say about it. In that situation, you only face the lesser problem: you agree that consciousness exists, but you have some dispute about its nature. (Of course, the followers of the two paths have their internal disagreements, with their peers, as well. We are not talking about two internally homogeneous factions of opinion.)
If you want to deny that there actually is any such thing as consciousness (saying that there is only a belief in it), you’ll need to define your terms too. It may be that you are not denying consciousness as such, just some particular concept of it. Let’s start with the difference between sleep and wakefulness. Do you agree that there is a subjective difference there?