The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it’s very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn’t ontologically fundamental, you aren’t doing so on the basis of evidence.
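To make that concrete, here is a minimal sketch, assuming a toy one-dimensional gas (every name and number below is illustrative): the same microstate can be stored as masses and velocities or as masses and momenta, two representations related by an invertible change of variables, and every prediction comes out the same, so no observation tells you which set of variables is "the real one".

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy microstate: N particles in one dimension.
N = 5
masses = rng.uniform(1.0, 2.0, N)
velocities = rng.normal(0.0, 1.0, N)

# Representation A stores (mass, velocity); representation B stores (mass, momentum).
# The map (m, v) -> (m, p = m * v) is invertible, so neither representation
# contains more or less information than the other.
momenta = masses * velocities

def kinetic_energy_from_velocities(m, v):
    return 0.5 * np.sum(m * v**2)

def kinetic_energy_from_momenta(m, p):
    return 0.5 * np.sum(p**2 / m)

# Any observable computed in either representation agrees exactly,
# so the choice between them is empirically idle.
e_A = kinetic_energy_from_velocities(masses, velocities)
e_B = kinetic_energy_from_momenta(masses, momenta)
assert np.isclose(e_A, e_B)
print(e_A, e_B)
```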
But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of “everything else constant” wrt mental states, we’re done. We certainly can construct one wrt temperature (linearly scale the velocities.)
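A minimal sketch of that counterfactual, assuming the textbook kinetic-theory picture of a monatomic ideal gas with k_B set to 1 (the numbers and names are illustrative): "set the temperature, holding everything else constant" becomes "rescale every velocity, leaving positions, masses, and directions of motion untouched".

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy monatomic gas in three dimensions.
N, mass = 1000, 1.0
velocities = rng.normal(0.0, 1.0, (N, 3))

def temperature(v, m=mass):
    # Kinetic temperature with k_B = 1: T = (2/3) * mean kinetic energy per particle.
    return (2.0 / 3.0) * np.mean(0.5 * m * np.sum(v**2, axis=1))

T_actual = temperature(velocities)

# Counterfactual: "what if the temperature had been doubled, everything else held
# constant?"  Scale every velocity by sqrt(2); positions, masses, and the
# directions of motion are untouched.
velocities_cf = velocities * np.sqrt(2.0)
T_cf = temperature(velocities_cf)

assert np.isclose(T_cf, 2.0 * T_actual)
print(T_actual, T_cf)
```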
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible
What are the other conditions?
Appearances do exist even when what they indicate does not exist.
Red is a fact about complex arrangements of quarks.
Those are facts about my ability to communicate my phenomenology.
Your ability to communicate your phenomenology traces backwards through a clear causal path, a series of facts each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology is, you claim, a fact about what is ontologically fundamental, it would follow that your phenomenology and your ability to communicate your phenomenology are causally unrelated, which stretches my sense of plausibility.
What’s more interesting to think about is the nature of reflective self-awareness. If I’m able to say that I’m seeing red, it’s only because, a few steps back, I’m able to “see” that I’m seeing red; there’s reflective awareness within consciousness of consciousness. There’s a causal structure there, but there’s also a non-causal ontological structure, some form of intentionality. It’s this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
Non-causal ontological structure is suspicious.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology; it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
but it’s not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
Your ability to communicate your phenomenology traces backwards through a clear causal path, a series of facts each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology is, you claim, a fact about what is ontologically fundamental, it would follow that your phenomenology and your ability to communicate your phenomenology are causally unrelated, which stretches my sense of plausibility.
I’ll quote myself: “The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology; it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.”
Earlier in this comment, I gave a very vague sketch of a quantum Cartesian theater which interacts with neighboring quantum systems in the brain, at the apex of the causal chains making up the sensorimotor pathways. The fact that we can talk about all this can be explained in that way.
The root of this disagreement is your statement that “Facts about your phenomenology are facts about your programming”. Perhaps you’re used to identifying phenomenology with talk about appearances, but it refers originally to the appearances themselves. My phenomenology is what I experience, not just what I say about it. It’s not even just what I think about it; it’s clear that the thought “I am seeing red” arises in response to a red that exists before and apart from the thought.
Non-causal ontological structure is suspicious.
This doesn’t mean ontological structure that has no causal relations; it means ontological structure that isn’t made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it’s going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It’s a spatial structure, not a causal structure.
it’s not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
Could you revisit this point in the light of what I’ve now said? What sort of disconnection are you talking about?
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
Let’s revisit what this branch of the conversation was about.
I was arguing that it’s possible to make judgements about the truth of a proposed ontology, just on the basis of a description. I had in mind the judgement that there’s no red in a world of colorless particles in space; reaching that conclusion should not be a problem. But, since you were insisting that “people can’t tell the difference between ontologies”, I tried to pull out a truly absurd example (though one that occasionally gets lip service from mystically minded people) - that only love exists. I would have thought that a moment’s inspection of the world, or of one’s memories of the world, would show that there are things other than love in existence, even if you adopt total Cartesian skepticism about anything beyond immediate experience.
Your riposte was to imagine an advocate of the all-is-love theory who, when asked to provide the details, says “quantum mechanics”. I said it’s rather hard to interpret QM that way, and you pointed out that I’m trying to get experience from QM. That’s clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience. My actual thesis is that conscious experience is the state of some particular type of quantum system, so the emotions do have to be in the theory somewhere. But I don’t think you can even reduce the other emotions to the emotion of love, let alone the non-emotional aspects of the mind, so the whole thing is just silly.
Then you had your advocate go on to speak in favor of the all-is-balloons theory, again with QM providing the details. I think you radically overestimate the freedom one has to interpret a mathematical formalism and still remain plausible or even coherent.
What we say using natural language is not just an irrelevant, interchangeable accessory to what we say using equations. Concepts can still have a meaning even if it’s only expressed informally, and one of the underappreciated errors of 20th-century thought is the belief that formalism validates everything: that you can say anything about a topic and it’s valid to do so, if you’re saying it with a formalism. A very minor example is the idea of a “noncommutative probability”. In quantum theory, we have complex numbers, called probability amplitudes, which appear as an intermediate stage prior to the calculation of numbers that are probabilities in the legitimate sense—lying between 0 and 1, expressing relative frequency of an outcome. There is a formalism of this classical notion of probability, due to Kolmogorov. You can generalize that formalism, so that it is about probability amplitudes, and some people call that a theory of “noncommutative probability”. But it’s not actually a theory of probability any more. A “noncommutative probability” is not a probability; that’s why probability amplitudes are so vexatious to interpret. The designation, “noncommutative probability”, sweeps the problem under the carpet. It tells us that these mysterious non-probabilities are not mysterious; they are probabilities—just … different. There can be a fine line between “thinking like reality” and fooling yourself into thinking that you understand.
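A toy two-slit calculation makes the contrast concrete (the setup and numbers are purely illustrative): the amplitudes are complex, they can cancel, and only their squared magnitudes behave like Kolmogorov probabilities.

```python
import numpy as np

# Two normalized amplitudes for reaching the same point on the screen,
# one via each slit, with opposite phases.
a1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)
a2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi)

# Amplitudes are complex numbers; they can interfere destructively and are not
# confined to [0, 1], so they do not obey the additivity of Kolmogorov probabilities.
a_total = a1 + a2   # essentially zero: complete destructive interference

# Only squared magnitudes are probabilities in the legitimate sense.
p1 = abs(a1) ** 2            # 0.5: the probability with only slit 1 open
p2 = abs(a2) ** 2            # 0.5: the probability with only slit 2 open
p_both = abs(a_total) ** 2   # ~0, not p1 + p2 = 1.0

print(p1, p2, p_both)
```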
All that’s a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
Temperature is an average. All individual information about the particles is lost, so you can’t invert the mapping from exact microphysical state to thermodynamic state.
So divide the particle velocities by temperature or whatever.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
How do you tell what’s redundant complexity and what’s ontologically fundamental? The position or momentum representation of quantum mechanics, for instance?
Now I’d add that the derived nature of macroscopic “causes” is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes.
What bothers me about your viewpoint is that you are solving the problem that, in your view, some things are epiphenomenal by making an epiphenomenal declaration—the statement that they are not epiphenomenal, but rather, fundamental.
So I posit the existence of what Dennett calls a “Cartesian theater”, a place where the seeing actually happens and where consciousness is located; it’s the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a “quantum system”, not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Is there anything about your or anyone else’s actions that provides evidence for this hypothesis?
“Genuine” causal relations are a much weaker notion than “ontologically fundamental” relations.
Do only pure qualia really exist? Do beliefs, desires, etc. also exist?
That’s way too hard, so I’ll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn’t let you deduce that a dog is a donkey.
You can map a set of three quantum states onto a set of {red, green, blue}.
This doesn’t mean ontological structure that has no causal relations; it means ontological structure that isn’t made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it’s going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It’s a spatial structure, not a causal structure.
No, it means ontological structure—not structures of things, but the structure of things’ ontology—that doesn’t say anything about the things themselves, just about their ontology.
Could you revisit this point in the light of what I’ve now said? What sort of disconnection are you talking about?
A logical/probabilistic one. There is no evidence for a correlation between the statements “These beings have large-scale quantum entanglement” and “These beings think and talk about consciousness”
That’s clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience
You would have to be saying that to be exactly the same as your character. You’re contrasting two views here. One thinks the world is made up of nothing but STUFF, which follows the laws of quantum mechanics. The other thinks the world is made up of nothing but STUFF and EXPERIENCES. If you show them a quantum state, and tell the first guy “the stuff is in this arrangement” and the second guy “the stuff is in this arrangement, and the experiences are in that arrangement”, they agree exactly on what happens, except that the second guy thinks that some of the things that happen are not stuff, but experiences.
That doesn’t seem at all suspicious to you?
All that’s a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
You are correct. “balloons” refers to balloons, not to quarks.
I guess what’s going on is that the guy is saying that’s what he believes balloons are.
But thinking about the meaning of words is clarifying.
It seems like the question is almost—“Is ‘experience’ a word like phlogiston or a word like elephant?”
More or less, whatever has been causing us to see all those elephants gets to be called an elephant. Elephants are reductionism-compatible. There are some extreme circumstances—the images of elephants I have seen are fabrications, the people who claim to have seen elephants are lying to me—that break this rule. Phlogiston, on the other hand, is a word we give up on much more readily. Heat is particles bouncing around, but the absence of oxygen is not phlogiston—it’s just the absence of oxygen.
You believe that “experience” is fundamentally incompatible with reduction. An experience, to exist at all, must be an ontologically fundamental experience. Thus saying “I see red” makes two claims—one, that the brain is in a certain class of its possible total configuration states, those in which the person is seeing red, and two, that the experience of seeing red is ontologically fundamental.
I see no way to ever get the physical event of people claiming that they experience color correlated with the ontological fundamentality of that color experience, in the way that we can investigate the phlogiston hypothesis and stop using it if and only if it turns out to be a bad model.
What is a claim when it’s not correlated with its subject? The whole point of the words within it has been irrevocably lost. It is pure speculation.
I really, really don’t think that, when I say I see red, I’m just speculating.
It’s almost a month since we started this discussion, and it’s a bit of a struggle to remember what’s important and what’s incidental. So first, a back-to-basics statement from me.
Colors do exist, appearances do exist; that’s nonnegotiable. That they do not exist in an ontology of “nothing but particles in space” is also, fundamentally, nonnegotiable. I will engage in debates as to whether this is so, but only because people are so amazingly reluctant to see it, and to see the implication that their favorite materialistic theories of mind actually involve property dualism, in which color (for example) is tied to a particular structure or behavior of particles in the brain, but can’t be identified with it.
We aren’t like the ancient atomists, who only had an informal concept of the world as atoms in a void; we have mathematical theories of physics. So a logical further question is whether these mathematical theories can be interpreted so that some of the entities they posit can be identified with color, with “experiences”, and so on.
Here I’d say there are two further important facts. First, an experience is a whole and has to be tackled as a whole. Patches of color are just a part of a multi-sensory whole, which in turn is just the sensory aspect of an experience which also has a conceptual element, temporal flow, a cognitive frame locating current events in a larger context, and so on. Any fundamental theory of reality which purports to include consciousness has to include this whole; it can’t just talk about atomized sensory qualia.
Second, any theory which says that the elementary degrees of freedom in a conscious state correspond to averaged collective physical degrees of freedom will have to involve property dualism. That’s because it’s a many-to-one mapping (from physical states to conscious states), and a many-to-one mapping can’t be an identity.
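The step from “many-to-one” to “not an identity” can be written out in one line; here f is the map from physical states p to conscious states c (the symbols are just a restatement of the sentence above):

```latex
% If f is many-to-one, two distinct physical states share one conscious state;
% that conscious state cannot be identical to both of them, since identity with
% both would force p_1 = p_2.
\[
  p_1 \neq p_2, \quad f(p_1) = f(p_2) = c
  \;\Longrightarrow\;
  \neg\bigl(c = p_1 \,\wedge\, c = p_2\bigr)
\]
```

So the conscious state is at best lawfully correlated with its physical realizers, which is the property-dualist picture rather than identity.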
All that is the starting point for my line of thought, which is an attempt to avoid property dualism. I want to have something in my mathematical theory of reality which simply is the bearer of conscious states, has the properties and structure of a conscious whole, and is appropriately located in the causal chain. Since the mathematics describing a configuration of particles in space seems very unpromising for such a reinterpretation, and since our physics is quantum mechanics anyway, whose formalism contains entangled wavefunctions that can’t be factorized into localized wavefunctions, it’s quite natural to look for these conscious wholes in some form of QM where entanglement is ontological. However, since consciousness is in the brain and causally relevant, this implies that there must be a functionally relevant brain subsystem that is in a quantum coherent state.
That is the argument which leads me from “consciousness is real” to “there’s large-scale quantum entanglement in the brain”. Given the physics we have, it’s the only way I see to avoid property dualism, and it’s still just a starting point, on every level: mathematically, ontologically, and of course neurobiologically. But that is the argument you should be scrutinizing. What’s at stake in some of our specific exchanges may be a little obscure, so I wanted to set down the main argument in one piece, in one place, so you could see what you’re dealing with.
I will lay down the main thing preventing me from being convinced that you’re correct.
Consider the three statements:
1. “There’s large-scale quantum entanglement in the brain.”
2. “Consciousness is real.”
3. “Mitchell Porter says that consciousness is real.”
Your inference requires that 1 and 2 are correlated. It is non-negotiable that 2 and 3 are correlated. There is no special connection between 1 and 3 that would make them uncorrelated.
However, 1 and 3 are both clearly-defined physical statements, and there is no physical mechanism for their correlation. We conclude that they are uncorrelated. We conclude that 1 and 2 are uncorrelated.
(part 1)

The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it’s very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn’t ontologically fundamental, you aren’t doing so on the basis of evidence.
Temperature is an average. All individual information about the particles is lost, so you can’t invert the mapping from exact microphysical state to thermodynamic state.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of “everything else constant” wrt mental states, we’re done. We certainly can construct one wrt temperature (linearly scale the velocities.)
Your model of physics has to have some microscopic or elementary non-counterfactual notion of causation for you to use it to calculate these complex macroscopic counterfactuals. Of course in the real world we have quantum mechanics, not the classical ideal gas we were discussing, and your notion of elementary causality in quantum mechanics will depend on your interpretation.
But I do insist there’s a difference between an elementary, fundamental, microscopic causal relation and a complicated, fuzzy, macroscopic one. A fundamental causal connection, like the dependence of the infinitesimal time evolution of one basic field on the states of other basic fields, is the real thing. As with “existence”, it can be hard to say what “causation” is. But whatever it is, and whether or not we can say something informative about its ontological character, if you’re using a physical ontology, such fundamental causal relations are the place in your ontology where causality enters the picture and where it is directly instantiated.
Then we have composite causalities—dependencies among macroscopic circumstances, which follow logically from the fundamental causal model, and whose physical realization consists of a long chain of elementary causal connections. Elementary and composite causality do have something in common: in both cases, an initial condition A leads to a final condition B. But there is a difference, and we need some way to talk about it—the difference between the elementary situation, where A leads directly to B, and the composite situation, where A “causes” B because A leads directly to A’ which leads directly to A″ … and eventually this chain terminates in B.
Also—and this is germane to the earlier discussion about fuzzy properties and macroscopic states—in composite causality, A and B may be highly approximate descriptions; classes of states rather than individual states. Here it’s even clearer that the relation between A and B is more a highly mediated logical implication than it is a matter of A causing B in the sense of “particle encounters force field causes change in particle’s motion”.
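A toy sketch of the distinction (the update rule and numbers are arbitrary placeholders, not a physical model): the elementary step is the only place where one state depends directly on another, and the composite “A causes B” just abbreviates a long chain of such steps.

```python
def elementary_step(state: int) -> int:
    # The only place where causation is directly instantiated in this toy world:
    # the next state depends immediately on the current one.
    return (3 * state + 1) % 101

def composite_cause(a: int, steps: int = 50) -> int:
    # Composite causality: the initial condition A "causes" the final condition B
    # only via the chain A -> A' -> A'' -> ... -> B of elementary steps.
    state = a
    for _ in range(steps):
        state = elementary_step(state)
    return state

print(composite_cause(7))  # B, reached from A = 7 through 50 elementary steps
```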
How does this pertain to consciousness? The standard neuro-materialist view of a mental state is that it’s an aggregate of computational states in neurons, these computational states being, from a physical perspective, less than a sketch of the physical reality. The microscopic detail doesn’t matter; all that matters is some gross property, like trans-membrane electrical potential, or something at an even higher level of physical organization.
I think I’ve argued two things so far. First, qualia and other features of consciousness aren’t there in the physical ontology, so that’s a problem. Second, a many-to-one mapping is not an identity relation, it’s more suited to property dualism, so that’s also a problem.
Now I’d add that the derived nature of macroscopic “causes” is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes. And as with the first two problems, this third problem can potentially be cured in a theory of mind where consciousness resides in a structure made of ontologically fundamental properties and relations, rather than fuzzy, derived, approximate ones. This is because it’s the fundamental properties which enter into the fundamental causal relations of a reductionist ontology.
In philosophy of mind, there’s a “homunculus fallacy”, where you explain (for example) the experience of seeing as due to a “homunculus” (“little human”) in your brain, which is watching the sensory input from your eyes. This is held to be a fallacy that explains nothing and risks infinite regress. But something like this must actually be true; seeing is definitely real, and what you see directly is in your skull, even if it does resemble the world outside. So I posit the existence of what Dennett calls a “Cartesian theater”, a place where the seeing actually happens and where consciousness is located; it’s the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a “quantum system”, not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible
What are the other conditions?
That’s way too hard, so I’ll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn’t let you deduce that a dog is a donkey.