You won’t find my answer interesting, but since you asked: I think experiences of color are among the states that particles in space can get into, just as the impulse to blink is a state particles in space can get into, just as a predisposition to generate meaningful English but not German sentences is a state that particles in space can get into, just as an appreciation for 17th-century Romanian literature is a state that particles in space can get into, just as a contagious head cold is a state that particles in space can get into. (Which is not to say that all of those are the same kinds of states.)
We can certainly populate our ontologies with additional entities related to those various things if we wish… color qualia and motor-impulse qualia and English qualia and German qualia and 17th-century Romanian literary qualia and contagious head cold qualia and so forth. I have no problem with that in and of itself, if positing these entities is useful for something.
But before I choose to do so, I want to understand what use those entities have to offer me. Populating my ontology with useless entities is silly.
I understand that this hesitation seems to you absurd, because you believe it ought to seem obvious to me that arrangements of matter simply aren’t the kind of thing that can be an experience of color, just like it should seem obvious that numbers aren’t the kind of thing that can be a rock, just as it seems obvious to Searle that formal rules aren’t the kind of thing that can be an understanding of Chinese, just as it seemed obvious to generations of thinkers that arrangements of matter aren’t the kind of thing that can be an infectious living cell.
These things aren’t, in fact, obvious to me. If you have reasons for believing any of them other than their obviousness, I might find those reasons compelling, but repeated assertions of their obviousness are not.
An arrangement of particles in space can embody a blink reflex with no problems, because blinking is motion, and so it just means they’re changing position in space.
Generating meaningful sentences—here we begin to run into problems, though not so severe as the problem with color. If the sentences are understood to be physical objects, such as sequences of sound waves or sequences of letter-shapes, then they can fit into physical ontology. We might even be able to specify a formal grammar of allowed sentences, and a combinatorial process which only produces physical sentences from that grammar. But meaning per se, like color, is not a physical property as ordinarily understood. (I know I’ll get into extra trouble here, because some people are with me on the color qualia being a problem, but believe that causal theories of reference can reduce meaning to a conjunction of known physical properties. However, so far as I can see, intrinsic meaning is a property only of certain constituents of mental states—the meaning of sentences and all other intersubjective signs is not intrinsic and derives from a shared interpretive code—and the correct ontology of meaning is going to be bound up with the correct ontology of consciousness in general.)
Anyway, you say it’s not obvious to you that “arrangements of matter simply aren’t the kind of thing that can be an experience of color”. Okay. Let’s suppose there is an arrangement of matter in space which is an experience of color. Maybe it’s a trillion particles in a certain arrangement executing a certain type of motion. Now, we can think about progressively simpler arrangements and motions of particles—subtracting one particle at a time from the scenario, if necessary… progressively simpler until we get all the way back to empty space. Somewhere in that conceptual progression, the experience of color ceased to be there. Can you give me the faintest, slightest hint of where the magic transition occurs—where we go from “arrangement of particles that’s an experience of color” to “arrangement of particles that’s not an experience of color”?
I could also simply ask you to indicate where, in the magic arrangement of particles, the color is. That is, assuming that you agree that one aspect of the existence of an experience of color is that something somewhere actually is that color. If it turns out that, according to you, brain state X is an experience of red only because the brain in question outputs the word “red” when queried, or only because a neural network somewhere is making the categorization “red”—then that is eliminativism. There’s no actual red, no actual color, just color words or color categories.
The reason it is obvious that there is no color inherently inhabiting an arrangement of particles in space is because it’s easy to see what the available ontological ingredients are, and it’s easy to see what you can and cannot make by combining them. If we include dynamics and a notion of causality, then the ingredients are position, time, and causal dependence. What can you construct from such ingredients? You can make complicated structures; you can make complicated motions; you can make complicated causal dependencies among structures and motions. As you can see, it’s no mystery that such an ontological scheme can encompass something like a blink reflex, which is a type of motion with a specified causal dependency.
With respect to the historical case of vitalism, it’s interesting that what the vitalists posited was a “vital force”. That’s not an objection to the logical possibility of reducing life, and especially replication, to matter in motion. They just didn’t believe that the known forces were capable of producing the right sort of motion, so they felt the need to postulate a new, complicated form of causal interaction, capable of producing the complexly orchestrated motion which must be occurring for living things to take shape. As it turned out, there was no need to postulate a special vital force to do that; the orchestration can be produced by the same forces which are at work in nonliving matter.
I’m emphasizing the way in which the case of vitalism differs from the case of qualia, because it is so often cited as a historical precedent. The vitalists—at least, the ones who talked about vital forces—were not saying that life is not material. They just postulated an extra force; in that respect, they were proposing only a conservative extension to the physical ontology of their time. But the observation that consciousness presents a basic ontological problem, in a universe consisting of nothing but matter in motion through space, has been around for a very long time. Democritus took note of this objection. I think Leibniz stated it in a recognizably modern form. It is an old insight, and it has not gone away just because the physical sciences have been so successful. Celia Green writes that this success actually sharpens the problem: the clearer our conception of material ontology and our causal account of the world becomes, the more obvious it becomes that this concept and this account do not contain the “secondary qualities” like your red.
Even at the dawn of modern physical science, in the time of Galileo, there was some discussion as to how these qualities were being put aside, in favor of an exclusive focus on space, time, motion, extension. It’s quite amazing that from humble beginnings like Kepler’s laws, we’ve come as far as quantum mechanics, string theory, molecular biology, all the time maintaining that exclusion. Some new ontological factors did enter the set of ingredients that physical ontology can draw upon, especially probability, but those elementary sensory qualities remain absent from the physical conception of reality. The 20th-century revolution in thought regarding information, communication, and computation goes just a little way towards bringing them back, but in the end it’s nowhere near enough, because when you ask, what are these information states really, you end up having to reduce them to statistical properties of particles in space, because that’s still all that the physical ontology gives you to work with.
I’m probably an idiot for responding at such length on this topic, because all my experience to date suggests that doing so changes nothing fundamentally. Some people get that there’s a problem, but don’t know how to solve it and can only hope that the future does so, or they embrace a fuzzy idea like emergence dualism or panpsychism out of intellectual desperation. Some people don’t get that there’s a problem—don’t perceive, for example, that “what it feels like to be a bat” is an extra new property on top of all the ordinary physical properties that make up a bat—and are happy with a philosophical formula like “thought is computation”.
I believe there is a problem to be solved, a severe problem, a problem of the first order, whose solution will require a change of perspective as big as the one which introduced us to the problem. Once, we had naive realism. The full set of objects and properties which experience reveals to us were considered equally real. They all played a part in the makeup of reality, to which the human mind had a partial but mysteriously direct access. Now, we have physics; ontological atomism, plus calculus. Amazingly, it predicts the behavior of matter with incredible precision, so it’s getting something right. But mind, and everything that is directly experienced, has vanished from the model of reality. It hasn’t vanished in reality; everything we know still comes to us through our minds, and through that same multi-sensory experience which was once naively identified with the world itself, and which we now call conscious experience. The closest approximation within the physical ontology to all of that is computation within the nervous system. But when you ask what neural computations are, physically, they once again reduce to matter in motion through space, and the same mismatch between the apparent character of experience, and the physical character of the brain, recurs. Since denying that experience does have this distinct character is false and therefore hopeless, the only way out must be to somehow reconceive physical ontology so that it contains, by construction, consciousness as it actually is, and so that it preserves the causal structural relations (between fundamental entities whose inner nature is opaque and therefore undetermined by the theory) responsible for the success of quantitative predictions.
I imagine my manifesto there is itself opaque, if you’re one of those people who don’t get the problem to begin with. Nonetheless, I believe that is the principle which has to be followed in order to solve the problem of consciousness. It’s still only the barest of beginnings, you still have to step into darkness and guess which way to turn, many times over, in order to get anywhere, and if my private ideas about how to proceed are right, then you have to take some really big leaps in the darkness. But that’s the kernel of my answer.
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
Let’s try to communicate through intuition pumps:
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses—they had to be, in addition, the colors of pixels.
Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap red and green in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn’t be able to tell the difference—your behavior would be the same either way.
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
No-one’s telling me that a heap of sand has an “inside”. It’s a fuzzy concept and the fuzziness doesn’t cause any problems because it’s just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren’t it, so in a physical ontology it has to correspond to a hard-edged concept.
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses—they had to be, in addition, the colors of pixels.
Consider Cyc. Isn’t one of the problems of Cyc that it can’t distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
As I’ve attempted to clarify in the new comment, my problem is not with subsuming consciousness into physics per se, it is specifically with subsuming consciousness into a particular physical ontology, because that ontology does not contain something as basic as perceived color, either fundamentally or combinatorially. To consider that judgement credible, you must believe that there is an epistemic faculty whereby you can tell that color is actually there. Which leads me to your next remark--
Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap red and green in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn’t be able to tell the difference—your behavior would be the same either way.
--and so obviously I’m going to object to the assumption that I’m not aware of my qualia. If you performed the swap as described, I wouldn’t know that it had occurred, but I’d still know that red and green are there and are real; and I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
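A minimal computational sketch of the swap scenario just discussed (my own toy construction, assuming a lookup-table agent; not Dennett’s or anyone’s actual model): if a relabeling of internal color states is applied consistently to current experience and to memory, the agent’s input-output behavior is unchanged, because behavior depends only on relations between labels.

```python
# Toy agent whose internal color labels can be permuted everywhere,
# in perception and in memory. Behavior depends only on the relation
# between labels, so the permutation leaves behavior invariant.

SWAP = {"red": "green", "green": "red"}

def perceive(stimulus, swapped):
    # Map a wavelength to an internal label, optionally permuted.
    label = {"650nm": "red", "540nm": "green"}[stimulus]
    return SWAP[label] if swapped else label

def report(current, remembered):
    # Output depends only on whether the two labels match.
    return "same as I remember" if current == remembered else "different"

for swapped in (False, True):
    remembered = perceive("650nm", swapped)  # memory of a past experience
    current = perceive("540nm", swapped)     # current experience
    print(swapped, report(current, remembered))
# Both runs print "different": the swap leaves no behavioral trace.
```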
Doesn’t that image look exactly like what neurons detecting edges between neurons detecting white and neurons detecting red should look like?
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
Doesn’t the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
No-one’s telling me that a heap of sand has an “inside”. It’s a fuzzy concept and the fuzziness doesn’t cause any problems because it’s just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren’t it, so in a physical ontology it has to correspond to a hard-edged concept.
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.
However, my new response to your argument is that, if you’re not denying current physics, but just ontologically reorganizing it, then you’re vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We’re all in the same boat.
Consider Cyc. Isn’t one of the problems of Cyc that it can’t distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
Do you think Cyc could not be programmed to treat itself differently from others without the use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?
Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
No you wouldn’t. People can’t tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can’t have relations of reduction to other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I’m typing in is based on regularities the size of a transistor. I wouldn’t expect to notice if my images were, really, fundamentally, completely different. I wouldn’t expect to notice if something physical happened—the number of ions cut by a factor of a million and given the opposite charge—but the functions from impulses to impulses computed by neurons were the same.
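A toy sketch of the substitution described above (hypothetical parameters of my own, not real neurophysiology): two “neurons” whose microphysics differ by a factor of a million still compute the same function from input impulses to an output impulse, so a computation defined at the impulse level cannot distinguish them.

```python
# Two "neurons" with very different microphysics compute the same
# impulse-to-impulse function: fire iff at least two inputs are active.

def make_neuron(ions_per_impulse):
    def fire(inputs):
        # The charge transferred scales with the microphysics...
        total_ions = sum(inputs) * ions_per_impulse
        # ...but so does the firing threshold, so the function from
        # impulses to impulses is invariant under the rescaling.
        return total_ions >= 2 * ions_per_impulse
    return fire

dense = make_neuron(ions_per_impulse=10**9)
sparse = make_neuron(ions_per_impulse=10**3)  # a million times fewer ions

for inputs in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    assert dense(inputs) == sparse(inputs)
print("identical impulse-to-impulse behavior")
```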
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
Uniform color and edgeness are as different as night and day.
No-one’s telling me that a heap of sand has an “inside”. It’s a fuzzy concept and the fuzziness doesn’t cause any problems because it’s just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren’t it, so in a physical ontology it has to correspond to a hard-edged concept.
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.
However, my new response to your argument is that, if you’re not denying current physics, but just ontologically reorganizing it, then you’re vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We’re all in the same boat.
This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping—many exact physical states correspond to the same conscious state—then that’s property dualism.
When you say, later on, that your consciousness “is a computation based mainly or entirely on regularities the size of a single neuron or bigger”, that implies dualism or eliminativism, depending on whether you accept that qualia exist. Believe what I quoted, and that qualia exist, and you’re a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn’t really exist, even as appearance), and you’re an eliminativist. This is because a many-to-one mapping isn’t an identity.
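The many-to-one point can be written out as a bare schema (a restatement of the paragraph above, nothing added):

```latex
% If distinct physical states map to one conscious state, the conscious
% state cannot be identical to both, since identity is one-to-one:
\[
  m : P \to C, \qquad p_1 \neq p_2, \qquad m(p_1) = m(p_2) = c .
\]
% If c = p_1 and c = p_2, then p_1 = p_2 by transitivity of identity,
% contradicting the assumption; so m cannot be an identity.
```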
“Degrees of existence”, by the way, only makes sense insofar as it really means “degrees of something else”. Existence, like truth, is absolute.
Consider Cyc. Isn’t one of the problems of Cyc that it can’t distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
Do you think Cyc could not be programmed to treat itself differently from others without the use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?
Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.
My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming. Because I prefer the monistic alternative to the dualistic one, and because the program Cyc is definitely “based on regularities the size of a transistor”, I would normally say that Cyc does not and cannot have thoughts, perceptions, beliefs, or other mental properties at all. All those things require consciousness, consciousness is only a property of a physical ontological unity, the computer running Cyc is a causal aggregate of many physical ontological unities, ergo it only has these mentalistic properties because of the imputations of its users, just as the words in a book only have their meanings by convention. When you introduced your original thought-experiment--
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses—they had to be, in addition, the colors of pixels.
--maybe I should have gone right away to the question of whether these “perceptions” are actually perceptions, or whether they are just informational states with certain causal roles, and how this differs from true perception. My answer, by the way, is that being an informational state with a causal role is necessary but not sufficient for something to be a perceptual state. I would add that it also has to be “made of qualia” or “be a state of a physical ontological unity”—both these being turns of phrase which are a little imprecise, but which hopefully foreshadow the actual truth. It comes down to what ought to be a tautology: to actually be a perception of red, there has to be some red actually there. If there isn’t, you just have a simulation.
Just for completeness, I’ll say again that I prefer the monistic alternative, but it does seem to imply that consciousness is to be identified with something fundamental, like a set of quantum numbers, rather than something mesoscopic and semiclassical, like a coarse-grained charge distribution. If that isn’t how it works, the fallback position is an informational property dualism, and what I just wrote would need to be modified accordingly.
Back to your questions about Cyc. Rather than say all that, I countered your original thought-experiment with an anecdote about Douglas Lenat’s Cyc program. The anecdote (as conveyed, for example, in Eliezer’s old essay “GISAI”) is that, according to Lenat, Cyc knows about Cyc, but it doesn’t know that it is Cyc. But then Lenat went and said to Wired that Cyc is self-aware. So I don’t know the finer details of his philosophical position.
What I was trying to demonstrate was the indeterminate nature of machine experience, machine assertions about ontology as based upon experience, and so on. Computation is about behavior and about processes which produce behavior. Consciousness is indeed a process which produces behavior, but that doesn’t define what it is. However, the typical discussion of the supposed thoughts, beliefs, and perceptions of an artificial intelligence breezes right past this point. Specific computational states in the program get dubbed “thoughts”, “desires” and so on, on the basis of a loose structural isomorphism to the real thing, and then the discussion about what the AI feels or wants (and so on) proceeds from there. The loose basis on which these terms are used can easily lead to disagreements—it may even have led Lenat to disagree with himself.
In the absence of a rigorous theory of consciousness it may be impossible to have such discussions without some loose speculation. But my point is that if you take the existence of consciousness seriously, it renders very problematic a lot of the identifications which get made casually. The fact that there is no red in physical ontology (or current physical ontology); the fact that from a fundamental perspective these are many-to-one mappings, and a many-to-one mapping can’t be an identity—these facts are simple but they have major implications for theorizing about consciousness.
So, finally answering your questions: 1. yes, it could be programmed to treat itself as something special, and 2. sense data would surely be processed differently, but there’s a difference between implicit and explicit categorizations (see remarks about ontology, below). But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness. And my argument is that the usual position—a casual version of identity theory—is not tenable. Either it’s dualism, or it’s a monism made possible by exotic neurophysics.
This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping—many exact physical states correspond to the same conscious state—then that’s property dualism.
Since there’s a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
[this point has low relevance]
Believe what I quoted, and that qualia exist, and you’re a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn’t really exist, even as appearance), and you’re an eliminativist.
It seems like we can cash out the statement “It appears to X that Y” as a fact about an agent X that builds models of the world which have the property Y. It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence of the existence of qualia.
“Degrees of existence”, by the way, only makes sense insofar as it really means “degrees of something else”. Existence, like truth, is absolute.
Degrees of existence come from what is almost certainly a harder philosophical problem about which I am very confused.
My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming.
Facts about your phenomenology are facts about your programming! If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain. There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I’ve made a judgement about an ontology both at a logical and an empirical level. That’s what I was talking about, when I said that if you swapped red and green, I couldn’t detect the swap, but I’d still know empirically that color is real, and I’d still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
A: “The universe is made out of nothing but love”
B: “What are the properties of ontologically fundamental love?”
A: “[The equations that define the standard model of quantum mechanics]”
B: “I have no evidence to falsify that theory.”
A: “Or balloons. It could be balloons.”
B: “What are the properties of ontologically fundamental balloons?”
A: “[the standard model of quantum theory expressed using different equations]”
B: “There is no evidence that can discriminate between those theories.”
… if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
I’m a reductive materialist for statements—I don’t see the problem with reading statements about consciousness as statements about quarks. Ontologically I suppose I’m an eliminative materialist.
Since there’s a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
The ontological status of temperature can be investigated by examining a simple ontology where it can be defined exactly, like an ideal gas in a box where the “atoms” interact only through perfectly elastic collisions. In such a situation, the momentum of an individual atom is an exact property with causal relevance. We can construct all sorts of exact composite properties by algebraically combining the momenta, e.g. “the square of the momentum of atom A minus the square root of the momentum of atom B”, which I’ll call property Z. But probably we don’t want to say that property Z exists, in the way that the momentum-property does. The facts about property Z are really just arithmetic facts, facts about the numbers which happen to be the momenta of atoms A and B, and the other numbers they give rise to when combined. Property Z isn’t playing a causal role in the physics, but the momentum property does.
Now, what about temperature? It has an exact definition: the average kinetic energy of an atom. But is it like “property” Z, or like the property of momentum? I think one has to say it’s like property Z—it is a quantitative construct without causal power. It is true that if we know the temperature, we can often make predictions about the gas. But this predictive power appears to arise from logical relations between constructed meta-properties, and not because “temperature” is a physical cause. It’s conceptually much closer than property Z to the level of real causes, but when you say that the temperature caused something, it’s ultimately always a shorthand for what really happened.
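The gas-in-a-box example can be made numerical. Here is a minimal sketch (my own, in one dimension with unit masses and physical constants dropped), showing that distinct microstates share a temperature while “property Z” tells them apart:

```python
# Temperature is an average over the microstate, so the mapping from
# microstates to temperatures is many-to-one. "Property Z" is just
# another algebraic combination of the same momenta.

def temperature(momenta):
    # Average kinetic energy per atom (unit masses, arbitrary units).
    return sum(p * p for p in momenta) / len(momenta)

def property_z(momenta):
    # "The square of the momentum of atom A minus the square root of
    # the momentum of atom B", as defined in the text.
    return momenta[0] ** 2 - momenta[1] ** 0.5

state_1 = [1.0, 2.0, 3.0]
state_2 = [3.0, 2.0, 1.0]  # a different microstate...

print(temperature(state_1), temperature(state_2))  # ...same temperature
print(property_z(state_1), property_z(state_2))    # different property Z
```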
When we apply all this to coarse-grained computational states, and their identification with mental states, I actually find myself making, not the argument that I intended (about many-to-one mappings), but another one, an argument against the validity of such an identification, even if it is conceived dualistically. It’s the familiar observation that the mental states become epiphenomenal and not actually causally responsible for anything. Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
So: if you were to insist that temperature is a fundamental physical cause and not just a shorthand for microphysical complexities, then you would not only be a dualist, you would be saying something in contradiction with the causal model of the world offered by physics. It would be a version of phlogiston theory.
As for the “one-to-one mapping between physical states of glasses of water and really long strings”—I assume those are symbol-strings, not super-strings? Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible. If you’re saying that a physical glass of water really is a string of symbols, you’d be bringing up a whole other class of ontological mistakes that we haven’t touched on so far, but which is increasingly endemic in computer-science metaphysics, namely the attempt to treat signs and symbols as ontologically fundamental.
It seems like we can cash out the statement “It appears to X that Y” as a fact about an agent X that builds models of the world which have the property Y.
I actually disagree with this, but thanks for highlighting the idea. The proposed reduction of “appearance” to “modeling” is one of the most common ways in which consciousness is reduced to computation. As a symptom of ontological error, it really deserves a diagnosis more precise than I can provide. But essentially, in such an interpretation, the ontological problem of appearance is just being ignored or thrown out, and all attention directed towards a functionally defined notion of representation; and then this throwing-out of the problem is passed off as an account of what appearance is.
Every appearance has an existence. It’s one of the intriguing pseudo-paradoxes of consciousness that you can see something which isn’t there. That ought to be a contradiction, but what it really means is that there is an appearance in your consciousness which does not correspond to something existing outside of your consciousness. Appearances do exist even when what they indicate does not exist. This is the proof (if such were needed) that appearances do exist. And there is no account of their existential character in a discourse which just talks about an agent’s modeling of the world.
It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence of the existence of qualia.
You are just sabotaging your own ability to think about consciousness, by inventing reasons to ignore appearances.
Facts about your phenomenology are facts about your programming!
No…
If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain.
Those are facts about my ability to communicate my phenomenology.
What’s more interesting to think about is the nature of reflective self-awareness. If I’m able to say that I’m seeing red, it’s only because, a few steps back, I’m able to “see” that I’m seeing red; there’s reflective awareness within consciousness of consciousness. There’s a causal structure there, but there’s also a non-causal ontological structure, some form of intentionality. It’s this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
Once again, appearance is being neglected in this passage, this time in favor of belief. To admit that something appears is necessarily to give it some kind of existential status.
B: “What are the properties of ontologically fundamental love?”
A: “[The equations that define the standard model of quantum mechanics]”
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition. But in any case, love also has a subjective appearance, which is different to the subjective appearance of hate, and this is why the experience of hate can falsify the theory that only love exists.
I’m a reductive materialist for statements—I don’t see the problem with reading statements about consciousness as statements about quarks.
Intentionality, qualia, and the unity of consciousness: none of those things exist in the world of quarks as point particles in space.
Ontologically I suppose I’m an eliminative materialist.
The opposite sort of error to religion. In religion, you believe in something that doesn’t exist. Here, you don’t believe in something that does exist.
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it’s very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn’t ontologically fundamental, you aren’t doing so on the basis of evidence.
But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of “everything else constant” wrt mental states, we’re done. We certainly can construct one wrt temperature (linearly scale the velocities).
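For concreteness, a minimal sketch (my own, for a one-dimensional ideal gas with unit masses) of the “linearly scale the velocities” intervention proposed here:

```python
import math

# To "set the temperature to t_new while holding everything else
# constant", rescale every velocity by one common factor. Directions
# and velocity ratios are preserved; temperature scales as the square
# of the factor.

def temperature(velocities):
    return sum(v * v for v in velocities) / len(velocities)

def set_temperature(velocities, t_new):
    factor = math.sqrt(t_new / temperature(velocities))
    return [v * factor for v in velocities]

vs = [1.0, -2.0, 3.0]
print(temperature(vs))                        # ~4.67
print(temperature(set_temperature(vs, 9.0)))  # 9.0
```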
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible.
What are the other conditions?
Appearances do exist even when what they indicate does not exist.
Red is a fact about complex arrangements of quarks.
Those are facts about my ability to communicate my phenomenology.
Your ability to communicate your phenomenology traces backwards through a clear causal path through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated.
What’s more interesting to think about is the nature of reflective self-awareness. If I’m able to say that I’m seeing red, it’s only because, a few steps back, I’m able to “see” that I’m seeing red; there’s reflective awareness within consciousness of consciousness. There’s a causal structure there, but there’s also a non-causal ontological structure, some form of intentionality. It’s this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
Non-causal ontological structure is suspicious.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
but it’s not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
Your ability to communicate your phenomenology traces backwards through a clear causal path through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated.
I’ll quote myself: “The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.”
Earlier in this comment, I gave a very vague sketch of a quantum Cartesian theater which interacts with neighboring quantum systems in the brain, at the apex of the causal chains making up the sensorimotor pathways. The fact that we can talk about all this can be explained in that way.
The root of this disagreement is your statement that “Facts about your phenomenology are facts about your programming”. Perhaps you’re used to identifying phenomenology with talk about appearances, but it refers originally to the appearances themselves. My phenomenology is what I experience, not just what I say about it. It’s not even just what I think about it; it’s clear that the thought “I am seeing red” arises in response to a red that exists before and apart from the thought.
Non-causal ontological structure is suspicious.
This doesn’t mean ontological structure that has no causal relations; it means ontological structure that isn’t made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it’s going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It’s a spatial structure, not a causal structure.
it’s not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
Could you revisit this point in the light of what I’ve now said? What sort of disconnection are you talking about?
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
Let’s revisit what this branch of the conversation was about.
I was arguing that it’s possible to make judgements about the truth of a proposed ontology, just on the basis of a description. I had in mind the judgement that there’s no red in a world of colorless particles in space; reaching that conclusion should not be a problem. But, since you were insisting that “people can’t tell the difference between ontologies”, I tried to pull out a truly absurd example (though one that occasionally gets lip service from mystically minded people)—that only love exists. I would have thought that a moment’s inspection of the world, or of one’s memories of the world, would show that there are things other than love in existence, even if you adopt total Cartesian skepticism about anything beyond immediate experience.
Your riposte was to imagine an advocate of the all-is-love theory who, when asked to provide the details, says “quantum mechanics”. I said it’s rather hard to interpret QM that way, and you pointed out that I’m trying to get experience from QM. That’s clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience. My actual thesis is that conscious experience is the state of some particular type of quantum system, so the emotions do have to be in the theory somewhere. But I don’t think you can even reduce the other emotions to the emotion of love, let alone the non-emotional aspects of the mind, so the whole thing is just silly.
Then you had your advocate go on to speak in favor of the all-is-balloons theory, again with QM providing the details. I think you radically overestimate the freedom one has to interpret a mathematical formalism and still remain plausible or even coherent.
What we say using natural language is not just an irrelevant, interchangeable accessory to what we say using equations. Concepts can still have a meaning even if it’s only expressed informally, and one of the underappreciated errors of 20th-century thought is the belief that formalism validates everything: that you can say anything about a topic and it’s valid to do so, if you’re saying it with a formalism. A very minor example is the idea of a “noncommutative probability”. In quantum theory, we have complex numbers, called probability amplitudes, which appear as an intermediate stage prior to the calculation of numbers that are probabilities in the legitimate sense—lying between 0 and 1, expressing relative frequency of an outcome. There is a formalism of this classical notion of probability, due to Kolmogorov. You can generalize that formalism, so that it is about probability amplitudes, and some people call that a theory of “noncommutative probability”. But it’s not actually a theory of probability any more. A “noncommutative probability” is not a probability; that’s why probability amplitudes are so vexatious to interpret. The designation, “noncommutative probability”, sweeps the problem under the carpet. It tells us that these mysterious non-probabilities are not mysterious; they are probabilities—just … different. There can be a fine line between “thinking like reality” and fooling yourself into thinking that you understand.
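The contrast can be made numerical. A toy two-path example of my own devising: genuine probabilities for exclusive alternatives add, while amplitudes of the same magnitude can cancel.

```python
import math

# Probabilities are recovered from amplitudes only as |amplitude|^2,
# and amplitudes interfere, so Kolmogorov additivity fails for them.

amp_1 = 1 / math.sqrt(2)    # amplitude for path 1
amp_2 = -1 / math.sqrt(2)   # path 2: same magnitude, opposite phase

p_1 = abs(amp_1) ** 2       # 0.5, a probability in the legitimate sense
p_2 = abs(amp_2) ** 2       # 0.5

print(p_1 + p_2)                # 1.0: what additivity would predict
print(abs(amp_1 + amp_2) ** 2)  # 0.0: destructive interference
```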
All that’s a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
Temperature is an average. All individual information about the particles is lost, so you can’t invert the mapping from exact microphysical state to thermodynamic state.
So divide the particle velocities by temperature or whatever.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
How do you tell what’s redundant complexity and what’s ontologically fundamental? The position or momentum representation of quantum mechanics, for instance?
Now I’d add that the derived nature of macroscopic “causes” is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes.
What bothers me about your viewpoint is that you are solving the problem that, in your view, some things are epiphenomenal by making an epiphenomenal declaration—the statement that they are not epiphenomenal, but rather, fundamental.
So I posit the existence of what Dennett calls a “Cartesian theater”, a place where the seeing actually happens and where consciousness is located; it’s the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a “quantum system”, not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Is there anything about your or anyone else’s actions that provides evidence for this hypothesis?
“Genuine” causal relations are a much weaker claim than “ontologically fundamental” relations.
Do only pure qualia really exist? Do beliefs, desires, etc. also exist?
That’s way too hard, so I’ll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn’t let you deduce that a dog is a donkey.
You can map a set of three quantum states onto a set of {red, green, blue}.
This doesn’t mean ontological structure that has no causal relations; it means ontological structure that isn’t made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it’s going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It’s a spatial structure, not a causal structure.
No, it means ontological structure—not structures of things, but the structure of a thing’s ontology—that doesn’t say anything about the things themselves, just about their ontology.
Could you revisit this point in the light of what I’ve now said? What sort of disconnection are you talking about?
A logical/probabilistic one. There is no evidence for a correlation between the statements “These beings have large-scale quantum entanglement” and “These beings think and talk about consciousness”.
That’s clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience.
You would have to be saying that to be exactly the same as your character. You’re contrasting two views here. One thinks the world is made up of nothing but STUFF, which follows the laws of quantum mechanics. The other thinks the world is made up of nothing but STUFF and EXPERIENCES. If you show them a quantum state, and tell the first guy “the stuff is in this arrangement” and the second guy “the stuff is in this arrangement, and the experiences are in that arrangement”, they agree exactly on what happens, except that the second guy thinks that some of the things that happen are not stuff, but experiences.
That doesn’t seem at all suspicious to you?
All that’s a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
You are correct. “balloons” refers to balloons, not to quarks.
I guess what’s going on is that the guy is saying that’s what he believes balloons are.
But thinking about the meaning of words is clarifying.
It seems like the question is almost—“Is ‘experience’ a word like phlogiston or a word like elephant?”
More or less, whatever has been causing us to see all those elephants gets to be called an elephant. Elephants are reductionism-compatible. There are some extreme circumstances—the images of elephants I have seen are fabrications, the people who claim to have seen elephants are lying to me—that break this rule. Phlogiston, on the other hand, is a word we give up on much more readily. Heat is particles bouncing around, but the absence of oxygen is not phlogiston—it’s just the absence of oxygen.
You believe that “experience” is fundamentally incompatible with reduction. An experience, to exist at all, must be an ontologically fundamental experience. Thus saying “I see red” makes two claims—one, that the brain is in a certain class of its possible total configuration states, those in which the person is seeing red, and two, that the experience of seeing red is ontologically fundamental.
I see no way to ever get the physical event of people claiming that they experience color correlated with the ontological fundamentalness of their color, in the way that we can investigate the phlogiston hypothesis and stop using it if and only if it turns out to be a bad model.
What is a claim when it’s not correlated with its subject? The whole point of the words within it has been irrevocably lost. It is pure speculation.
I really, really don’t think that, when I say I see red, I’m just speculating.
It’s almost a month since we started this discussion, and it’s a bit of a struggle to remember what’s important and what’s incidental. So first, a back-to-basics statement from me.
Colors do exist, appearances do exist; that’s nonnegotiable. That they do not exist in an ontology of “nothing but particles in space” is also, fundamentally, nonnegotiable. I will engage in debates as to whether this is so, but only because people are so amazingly reluctant to see it, and to see the implication that their favorite materialistic theories of mind actually involve property dualism, in which color (for example) is tied to a particular structure or behavior of particles in the brain, but can’t be identified with it.
We aren’t like the ancient atomists, who only had an informal concept of the world as atoms in a void; we have mathematical theories of physics. So a logical further question is whether these mathematical theories can be interpreted so that some of the entities they posit can be identified with color, with “experiences”, and so on.
Here I’d say there are two further important facts. First, an experience is a whole and has to be tackled as a whole. Patches of color are just a part of a multi-sensory whole, which in turn is just the sensory aspect of an experience which also has a conceptual element, temporal flow, a cognitive frame locating current events in a larger context, and so on. Any fundamental theory of reality which purports to include consciousness has to include this whole, it can’t just talk about atomized sensory qualia.
Second, any theory which says that the elementary degrees of freedom in a conscious state correspond to averaged collective physical degrees of freedom will have to involve property dualism. That’s because it’s a many-to-one mapping (from physical states to conscious states), and a many-to-one mapping can’t be an identity.
All that is the starting point for my line of thought, which is an attempt to avoid property dualism. I want to have something in my mathematical theory of reality which simply is the bearer of conscious states, has the properties and structure of a conscious whole, and is appropriately located in the causal chain. Since the mathematics describing a configuration of particles in space seems very unpromising for such a reinterpretation; and since our physics is quantum mechanics anyway, and the formalism of quantum mechanics contains entangled wavefunctions that can’t be factorized into localized wavefunctions, it’s quite natural to look for these conscious wholes in some form of QM where entanglement is ontological. However, since consciousness is in the brain and causally relevant, this implies that there must be a functionally relevant brain subsystem that is in a quantum coherent state.
That is the argument which leads me from “consciousness is real” to “there’s large-scale quantum entanglement in the brain”. Given the physics we have, it’s the only way I see to avoid property dualism, and it’s still just a starting point, on every level: mathematically, ontologically, and of course neurobiologically. But that is the argument you should be scrutinizing. What’s at stake in some of our specific exchanges may be a little obscure, so I wanted to set down the main argument in one piece, in one place, so you could see what you’re dealing with.
I will lay down the main thing convincing me that you’re incorrect.
Consider the three statements:
1. “there’s a large-scale quantum entanglement in the brain”
2. “consciousness is real”
3. “Mitchell Porter says that consciousness is real.”
Your inference requires that 1 and 2 are correlated. It is non-negotiable that 2 and 3 are correlated. There is no special connection between 1 and 3 that would make them uncorrelated.
However, 1 and 3 are both clearly-defined physical statements, and there is no physical mechanism for their correlation. We conclude that they are uncorrelated. We conclude that 1 and 2 are uncorrelated.
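(A toy numerical rendering of that syllogism, on the reading where the three statements are binary variables, “correlated” means statistical correlation, and the non-negotiable 2–3 link is modeled, crudely, as perfect correlation. All names in the sketch are mine.)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Statement 1: "large-scale quantum entanglement in the brain" -- a physical fact.
    s1 = rng.integers(0, 2, n)

    # Statement 3: "Mitchell Porter says that consciousness is real" -- also a
    # physical fact, generated with no mechanism linking it to s1.
    s3 = rng.integers(0, 2, n)

    # Statement 2: "consciousness is real" -- modeled as perfectly correlated with 3.
    s2 = s3.copy()

    corr = np.corrcoef([s1, s2, s3])
    print(corr[0, 2])   # corr(1,3): ~0 by construction
    print(corr[1, 2])   # corr(2,3): exactly 1 by construction
    print(corr[0, 1])   # corr(1,2): ~0, since s2 == s3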
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it’s very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn’t ontologically fundamental, you aren’t doing so on the basis of evidence.
Temperature is an average. All individual information about the particles is lost, so you can’t invert the mapping from exact microphysical state to thermodynamic state.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of “everything else constant” wrt mental states, we’re done. We certainly can construct one wrt temperature (linearly scale the velocities).
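(For concreteness, a minimal sketch of that temperature counterfactual, assuming an idealized gas of unit-mass atoms and identifying temperature with mean kinetic energy per atom, constants absorbed. Note that linearly scaling the velocities moves temperature quadratically, but it is still a well-defined intervention that holds positions and everything else constant.)

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    x = rng.uniform(size=(n, 3))   # positions: untouched by the intervention
    v = rng.normal(size=(n, 3))    # velocities

    def temperature(v):
        # mean kinetic energy per unit-mass atom, constants absorbed
        return np.mean(0.5 * np.sum(v**2, axis=1))

    T = temperature(v)

    # Counterfactual: "same gas, different temperature" -- scale all velocities.
    scale = 1.5
    assert np.isclose(temperature(scale * v), scale**2 * T)

    # Many-to-one: a distinct microstate with the very same temperature.
    assert np.isclose(temperature(v[rng.permutation(n)]), T)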
Your model of physics has to have some microscopic or elementary non-counterfactual notion of causation for you to use it to calculate these complex macroscopic counterfactuals. Of course in the real world we have quantum mechanics, not the classical ideal gas we were discussing, and your notion of elementary causality in quantum mechanics will depend on your interpretation.
But I do insist there’s a difference between an elementary, fundamental, microscopic causal relation and a complicated, fuzzy, macroscopic one. A fundamental causal connection, like the dependence of the infinitesimal time evolution of one basic field on the states of other basic fields, is the real thing. As with “existence”, it can be hard to say what “causation” is. But whatever it is, and whether or not we can say something informative about its ontological character, if you’re using a physical ontology, such fundamental causal relations are the place in your ontology where causality enters the picture and where it is directly instantiated.
Then we have composite causalities—dependencies among macroscopic circumstances, which follow logically from the fundamental causal model, and whose physical realization consists of a long chain of elementary causal connections. Elementary and composite causality do have something in common: in both cases, an initial condition A leads to a final condition B. But there is a difference, and we need some way to talk about it—the difference between the elementary situation, where A leads directly to B, and the composite situation, where A “causes” B because A leads directly to A’ which leads directly to A″ … and eventually this chain terminates in B.
Also—and this is germane to the earlier discussion about fuzzy properties and macroscopic states—in composite causality, A and B may be highly approximate descriptions; classes of states rather than individual states. Here it’s even clearer that the relation between A and B is more a highly mediated logical implication than it is a matter of A causing B in the sense of “particle encounters force field causes change in particle’s motion”.
How does this pertain to consciousness? The standard neuro-materialist view of a mental state is that it’s an aggregate of computational states in neurons, these computational states being, from a physical perspective, less than a sketch of the physical reality. The microscopic detail doesn’t matter; all that matters is some gross property, like trans-membrane electrical potential, or something at an even higher level of physical organization.
I think I’ve argued two things so far. First, qualia and other features of consciousness aren’t there in the physical ontology, so that’s a problem. Second, a many-to-one mapping is not an identity relation, it’s more suited to property dualism, so that’s also a problem.
Now I’d add that the derived nature of macroscopic “causes” is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes. And as with the first two problems, this third problem can potentially be cured in a theory of mind where consciousness resides in a structure made of ontologically fundamental properties and relations, rather than fuzzy, derived, approximate ones. This is because it’s the fundamental properties which enter into the fundamental causal relations of a reductionist ontology.
In philosophy of mind, there’s a “homunculus fallacy”, where you explain (for example) the experience of seeing as due to a “homunculus” (“little human”) in your brain, which is watching the sensory input from your eyes. This is held to be a fallacy that explains nothing and risks infinite regress. But something like this must actually be true; seeing is definitely real, and what you see directly is in your skull, even if it does resemble the world outside. So I posit the existence of what Dennett calls a “Cartesian theater”, a place where the seeing actually happens and where consciousness is located; it’s the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a “quantum system”, not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible.
What are the other conditions?
That’s way too hard, so I’ll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn’t let you deduce that a dog is a donkey.
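(The point is purely set-theoretic; a throwaway sketch, with invented names:)

    # Any two equinumerous sets admit a one-to-one mapping; no identity follows.
    donkeys = ["Eeyore", "Benjamin", "Platero"]
    dogs = ["Lassie", "Rex", "Snowy"]

    pairing = dict(zip(donkeys, dogs))   # a bijection between the two sets
    assert len(pairing) == len(donkeys) == len(dogs)
    assert all(donkey != dog for donkey, dog in pairing.items())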
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
See next section.
I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
No you wouldn’t. People can’t tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can’t have relations of reduction to other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
We are talking at cross-purposes here. I am talking about an ontology which is presented explicitly to my conscious understanding. You seem to be talking about ontologies at the level of code—whatever that corresponds to, in a human being.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I’ve made a judgement about an ontology both at a logical and an empirical level. That’s what I was talking about when I said that if you swapped red and green, I couldn’t detect the swap, but I’d still know empirically that color is real, and I’d still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
Your sentence about gensyms is interesting as a proposition about the computational side of consciousness, but…
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I’m typing in is based on regularities the size of a transistor. I wouldn’t expect to notice if my images were, really, fundamentally, completely different. I wouldn’t expect to notice if something physical changed—the number of ions cut by a factor of a million, their charge reversed—so long as the functions from impulses to impulses computed by the neurons were the same.
… if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
Uniform color and edgeness are as different as night and day.
They are, but I was actually talking about the difference between colorness/edgeness and neuronness.
I agree with you that if my experience of red can’t be constructed of matter, then my understanding of a sentence also can’t be. And I agree with you that we don’t have a reliable account of how to construct such things out of matter, and without such an account we can’t rule out the possibility that, as you suggest, such an account is simply not possible. I agree with you that this objection to physicalism has been around for a long time.
I agree with you that insofar as we understand vitalism to be an account of how particular arrangements of matter move around, it is a different sort of thing from the kind of “sentientism” you are talking about. That said, I think that’s a misrepresentation of historical vitalism; I think when the vitalists talked about elan vital being the difference between living and unliving matter, they were also attributing sentience (though not sapience) to elan vital, as well as simple animation.
I don’t equate the experience of red with the tendency to output the word “red” when queried, both in the sense that it’s easy for me to imagine being unable to generate that output while continuing to experience red, and in the sense that it’s easy for me to imagine a system that outputs the word “red” when queried without having an experience of red. Lexicalization is neither necessary nor sufficient for experience.
I don’t equate the experience of red with categorization… it is easy to imagine categorization without experience. It’s harder to imagine experience without categorization, though. Categorization might be necessary, but it certainly isn’t sufficient, for experience.
Like you, I can’t come up with a physical account of sentience. I have little faith in the power of my imagination, though. Put another way: it isn’t easy for me to see what one can and can’t make out of particles. But I agree with you that any such account would be surprising, and that there is a phenomenon there to explain. So I think I fall somewhere in between your two classes of people who are a waste of time to talk to: I get that there’s a problem, but it isn’t obvious to me that the properties that comprise what it feels like to be a bat must be ontologically basic and nonphysical. Which I think still means I’m wasting your time. (I did warn you in the grandparent comment that you won’t find my answer interesting.)
If it turns out that a particular sensation is perfectly correlated with the presence of a particular physical structure, and that disrupting that structure always triggers a disruption of the sensation, and that disrupting the sensation always triggers a disruption of the structure… well, at that point, I’m pretty reluctant to posit a nonphysical sensation. Sure, it might be there, but if I posit it I need to account for why the sensation is so tightly synchronized with the physical structure, and it’s not at all clear that that task is any simpler than identifying one with the other, counterintuitive as that may be.
At the other extreme, if the nonphysical structure makes a difference, demonstrating that difference would make me inclined to posit a nonphysical sensation. For example, if we can transmit sensation without transmitting any physical signal, I’d be strongly inclined to posit a nonphysical structure underlying the sensation. Looking for such a demonstrable difference might be a useful way to start getting somewhere.
Perhaps we are closer to mutual understanding than might have been imagined, then. A crucial point: I wouldn’t talk about the mind as something “nonphysical”. That’s why I said that the problem is with our current physical ontology. The problem is not that we have a model of the world in which events outside our heads are causally connected to events inside our heads via a chain of intermediate events. The problem is that when we try to interpret physics ontologically (and not just operationally), the available frameworks are too sparse and pallid (those are metaphors of course) to produce anything like actual moment-to-moment experience. The dance of particles can produce something isomorphic to sensation and thought, but not identical. Therefore, what we might think of as a dance of particles actually needs to be thought of in some other way.
So I’m actually very close in spirit to the reductionist who wants to think of their experience in terms of neurons firing and so forth, except I say it’s got to be the other way around. Taken literally, that would mean that we need to learn to think of what we now call neurons firing, as being fundamentally—this—moment-to-moment experience, as is happening to you right now. Except that I don’t believe the physical nature of whole neurons plausibly allows such an ontological reinterpretation. If consciousness really is based on mesoscopic-level informational states in neurons, then I’d favor property dualism rather than the reverse monism I just advocated. But I’m going for the existence of a Cartesian theater somewhere in the brain whose physical implementation is based on exact quantum states rather than collective coarse-grained classical ones, quantum states which in our current understanding would look more algebraic than geometric. And the succession of abstract algebraic state transitions in that Cartesian theater is the deracinated mathematical description of what, in reality, is the flow of conscious experience.
If that is the true interior reality of one quantum island in the causal network of the world, it might be anticipated that every little causal nexus has its own inside too—its own subjectivity. The non-geometric, localized, algebraic side of physics would turn out to actually be a description of the local succession of conscious states, and the spatial, geometric aspect of physics would in fact describe the external causal interactions between these islands of consciousness. Except I suspect that the term consciousness is best reserved for a very rare and highly involuted type of state, and that most things count as islands of “being” but not as islands of “experiencing” (at least, not as islands of reflective experiencing).
I should also distinguish this philosophy from the sort which sees mind wherever there is distributed computation—so that the hierarchical structure of classical interaction in the world gets interpreted as a set of minds made of minds made of minds. I would say that the ontological glue of individual consciousness is not causal interaction—it’s something much tighter. The dependence of elements of a state of consciousness on the whole state of consciousness is more like the way that the face of a cube is part of the cube, though even that analogy is nowhere near strong enough, because the face of a cube is a square and a square can have independent existence, though when it’s independent it’s no longer a face. However we end up expressing it, the world is fundamentally made of these logical ontological unities, most of which are very simple and correspond to something like particles, and a few of which have become highly complex—with waking states of consciousness being extremely complex examples of these—and all of these entities interact causally and quasi-locally. These interactions bind them into systems and into systems of systems, but systems themselves are not conscious, because ontologically they are multiplicities, and consciousness is always a property of one of those fundamental physical unities whose binding principle is more than just causal association.
An ontology of physics like that is one where the problem of consciousness might be solved in a nondualistic way. But its viability does seem to require that something like quantum entanglement is found to be relevant to conscious cognition. As I said, if that isn’t borne out, I’ll probably fall back on some form of property dualism, in which there’s a many-to-one mapping between big physical states (like ion concentrations on opposite sides of axonal membranes) and distinct possible states of consciousness. But physical neuroscience has quite a way to go yet, so I’m very far from giving up on the monistic quantum theory of mind.
So, getting back to my original question about what your alternate ontology has to offer…
If I’m understanding you (which is far from clear), while you are mostly concerned with being ontologically correct rather than operationally useful, you do make a falsifiable neurobiological prediction, having something to do with quantum entanglement, whose details I didn’t follow.
Cool. I approve of falsifiable predictions; they are a useful thing that a way of thinking about the world can offer.
I think you ought to be more interested in what this shows about the severity of the problem of consciousness. See my remarks to William Sawin, about color and about many-to-one mappings, and how they lead to a choice between this peculiar quantum monism (which is indeed difficult to understand at first encounter), and property dualism. While I like my own ideas (about quantum monads and so forth), the difficulties associated with the usual approaches to consciousness matter in their own right.
(nods) I understand that you do; I have from the beginning of this exchange been trying to move forward from that bald assertion into a clarification of why I ought to be… that is, what benefits there are to be gained from channeling my interest as you recommend.
Put another way: let us suppose you’re right that there are aspects of consciousness (e.g., subjective experience/qualia) that cannot be adequately explained by mainstream ontology.
Suppose further that tomorrow we encounter an entity (an isolated group of geniuses working productively on the problem, or an alien civilization with a different ontological tradition, or spirit beings from another dimension, or Omega, or whatever) that has worked out an ontology that does adequately explain it, using quantum monads or something else, to roughly the same level of refinement and practical implementation that we have worked out our own.
What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience?
Or, to ask the question a different way: suppose we encounter an entity that claims to have worked out such an ontology, but won’t show it to us. What properties ought we look for in that entity that provide evidence that their claim is legitimate?
The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements. (I may have misunderstood that, in which case I would appreciate clarification.) So I should not expect them to have a superior understanding of behavior that would manifest in various detectable ways. Nor should I expect them to have a superior understanding of physics.
I’m not really sure what I should expect them to have a superior understanding of, though, or what capabilities I should expect such an understanding to entail. Surely there ought to be something, if this branch of knowledge is, as you claim, worth pursuing.
Thus far, I’ve gotten that they ought to be able to make predictions about neurobiological structures that relate to certain kinds of quantum structures. I’m wondering what else.
Because if it’s just about being right about ontology for the sake of being right about ontology when it entails no consequences, then I simply disagree with you that I ought to be more interested.
What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience?
I don’t consider this inability to merely be posited. It’s a matter of understanding what you can and can’t do with the ontological ingredients provided. You have particles, you have non-positional properties of individual particles, you have the motions of particles, you have changes in the non-positional properties. You have causal relations. You have sets of these entities; you have causal chains built from them; you have higher-order quantitative and logical facts deriving from the elementary facts about configuration and causal relationships. That’s basically all you have to work with. An ontology of fields, dynamical geometry, probabilities adds a few twists to this picture, but nothing that changes it fundamentally. So I’m saying there is nothing in this ontology, either fundamental or composite (in a broad sense of composite), which can be identified with—not just correlated with, but identified with—consciousness and its elements. And color offers the clearest and bluntest proof of this.
We can keep going over this fact from different angles, but eventually it comes down to seeing that one thing is indeed different from another. 1 is not 0; red is not any specific thing that can be found in the ontology of particles. It reduces to pairwise comparative judgments in which ontologically dissimilar basic entities are perceived to indeed be ontologically dissimilar.
The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements.
What are we trying to explain, ultimately? What even gives us something to be explained? It’s conscious experience again; the appearance of a world. Our physical theories describe the behavior of a world which is structurally similar to the world of appearance, but which does not have all its properties. We are happy to say that the world of appearance is just causally connected, in a regularity-preserving way, to an external world, and that these problem properties only exist in the “world of appearance”. That might permit us to regard the “external world” as explained by our physics. But then we have this thing, “world of appearance”, where all the problems remain, and which we are nonetheless trying to assimilate to physics (via neuroscience). However, we know (if we care to think things through) that this assimilation is not possible with the current physical ontology.
So the claim that we can describe the behavior of things is not quite as powerful as it seems, because it turns out that the things we are describing can’t actually be the “things” of direct experience, the appearances themselves. We can get isomorphism here, but not identity. It’s an ontological problem: the things of physical theory need to be reconceived so that some of them can be identified with the things of consciousness, the appearances.
I understand that you aren’t “merely” positing the inability of a set of particles, positions and energy-states to be an experience.
I am.
I also understand that you consider this a foolish insistence on my part on rejecting the obvious facts of experience. As I’ve said several times now, repeatedly belaboring that point isn’t going to progress this discussion further.
I could also simply ask for you to indicate where in the magic arrangement of particles the color is. That is, assuming that you agree that one aspect of the existence of an experience of color is that something somewhere actually is that color. If it turns out that, according to you, brain state X is an experience of red only because the brain in question outputs the word “red” when queried, or only because a neural network somewhere is making the categorization “red”—then that is eliminativism. There’s no actual red, no actual color, just color words or color categories.
The reason it is obvious that there is no color inherently inhabiting an arrangement of particles in space is because it’s easy to see what the available ontological ingredients are, and it’s easy to see what you can and cannot make by combining them. If we include dynamics and a notion of causality, then the ingredients are position, time, and causal dependence. What can you construct from such ingredients? You can make complicated structures; you can make complicated motions; you can make complicated causal dependencies among structures and motions. As you can see, it’s no mystery that such an ontological scheme can encompass something like a blink reflex, which is a type of motion with a specified causal dependency.
With respect to the historical case of vitalism, it’s interesting that what the vitalists posited was a “vital force”. That’s not an objection to the logical possibility of reducing life, and especially replication, to matter in motion. They just didn’t believe that the known forces were capable of producing the right sort of motion, so they felt the need to postulate a new, complicated form of causal interaction, capable of producing the complexly orchestrated motion which must be occurring for living things to take shape. As it turned out, there was no need to postulate a special vital force to do that; the orchestration can be produced by the same forces which are at work in nonliving matter.
I’m emphasizing the way in which the case of vitalism differs from the case of qualia, because it is so often cited as a historical precedent. The vitalists—at least, the ones who talked about vital forces—were not saying that life is not material. They just postulated an extra force; in that respect, they were proposing only a conservative extension to the physical ontology of their time. But the observation that consciousness presents a basic ontological problem, in a universe consisting of nothing but matter in motion through space, has been around for a very long time. Democritus took note of this objection. I think Leibniz stated it in a recognizably modern form. It is an old insight, and it has not gone away just because the physical sciences have been so successful. Celia Green writes that this success actually sharpens the problem: the clearer our conception of material ontology and our causal account of the world becomes, the more obvious it becomes that this concept and this account do not contain the “secondary qualities” like your red.
Even at the dawn of modern physical science, in the time of Galileo, there was some discussion as to how these qualities were being put aside, in favor of an exclusive focus on space, time, motion, extension. It’s quite amazing that from humble beginnings like Kepler’s laws, we’ve come as far as quantum mechanics, string theory, molecular biology, all the time maintaining that exclusion. Some new ontological factors did enter the set of ingredients that physical ontology can draw upon, especially probability, but those elementary sensory qualities remain absent from the physical conception of reality. The 20th-century revolution in thought regarding information, communication, and computation goes just a little way towards bringing them back, but in the end it’s nowhere near enough, because when you ask, what are these information states really, you end up having to reduce them to statistical properties of particles in space, because that’s still all that the physical ontology gives you to work with.
I’m probably an idiot for responding at such length on this topic, because all my experience to date suggests that doing so changes nothing fundamentally. Some people get that there’s a problem, but don’t know how to solve it and can only hope that the future does so, or they embrace a fuzzy idea like emergence dualism or panpsychism out of intellectual desperation. Some people don’t get that there’s a problem—don’t perceive, for example, that “what it feels like to be a bat” is an extra new property on top of all the ordinary physical properties that make up a bat—and are happy with a philosophical formula like “thought is computation”.
I believe there is a problem to be solved, a severe problem, a problem of the first order, whose solution will require a change of perspective as big as the one which introduced us to the problem. Once, we had naive realism. The full set of objects and properties which experience reveals to us were considered equally real. They all played a part in the makeup of reality, to which the human mind had a partial but mysteriously direct access. Now, we have physics; ontological atomism, plus calculus. Amazingly, it predicts the behavior of matter with incredible precision, so it’s getting something right. But mind, and everything that is directly experienced, has vanished from the model of reality. It hasn’t vanished in reality; everything we know still comes to us through our minds, and through that same multi-sensory experience which was once naively identified with the world itself, and which we now call conscious experience. The closest approximation within the physical ontology to all of that is computation within the nervous system. But when you ask what neural computations are, physically, it once again reduces to matter in motion through space, and the same mismatch between the apparent character of experience, and the physical character of the brain, recurs. Since denying that experience does have this distinct character is false and therefore hopeless, the only way out must be to somehow reconceive physical ontology so that it contains, by construction, consciousness as it actually is, and so that it preserves the causal structural relations (between fundamental entities whose inner nature is opaque and therefore undetermined by the theory) responsible for the success of quantitative predictions.
I imagine my manifesto there is itself opaque, if you’re one of those people who don’t get the problem to begin with. Nonetheless, I believe that is the principle which has to be followed in order to solve the problem of consciousness. It’s still only the barest of beginnings, you still have to step into darkness and guess which way to turn, many times over, in order to get anywhere, and if my private ideas about how to proceed are right, then you have to take some really big leaps in the darkness. But that’s the kernel of my answer.
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
Let’s try to communicate through intuition pumps:
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses—they had to be, in addition, the colors of pixels.
Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap red and green in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn’t be able to tell the difference—your behavior would be the same either way.
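(A toy computational model of that swap, mine rather than Dennett’s: the qualia are opaque internal tokens, and the swap is applied consistently across perception, memory, and the lexicon; every externally visible response is unchanged.)

    RED, GREEN = object(), object()   # opaque internal tokens

    def make_agent(red_token, green_token):
        percept = {"620nm": red_token, "530nm": green_token}   # wavelength -> token
        lexicon = {red_token: "red", green_token: "green"}     # token -> word
        memory = [percept["620nm"]]                            # a stored "memory of red"
        def report(stimulus):
            token = percept[stimulus]
            return lexicon[token], token is memory[0]          # word, "same as remembered?"
        return report

    normal = make_agent(RED, GREEN)
    swapped = make_agent(GREEN, RED)   # tokens exchanged everywhere at once

    for stimulus in ("620nm", "530nm"):
        assert normal(stimulus) == swapped(stimulus)   # behavior is identical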
Two meditations on an optical illusion: I heard, possibly on lesswrong, that in illusions like this one: http://www.2dorks.com/gallery/2007/1011-illusions/12-kanizsatriangle.jpg your edge-detecting neurons fire at both the real and the fake edges.
Doesn’t that image look exactly like what neurons detecting edges between neurons detecting white and neurons detecting white should look like?
Doesn’t the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?
My latest comment might clarify a few things. Meanwhile,
No-one’s telling me that a heap of sand has an “inside”. It’s a fuzzy concept and the fuzziness doesn’t cause any problems because it’s just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren’t it, so in a physical ontology it has to correspond to a hard-edged concept.
Consider Cyc. Isn’t one of the problems of Cyc that it can’t distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
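(A toy illustration of that point, with no pretension to Cyc’s actual architecture: in a bare symbol store, the system’s own name is a node like any other, and self-directed behavior appears only once the interpreter hard-codes it.)

    kb = {
        "Cyc": {"is_a": "knowledge_base"},
        "Paris": {"is_a": "city"},
    }

    def lookup(symbol, relation):
        # The "Cyc" symbol is retrieved exactly the way any other symbol is.
        return kb.get(symbol, {}).get(relation)

    SELF = "Cyc"   # the special treatment, injected at the algorithmic level

    def introspect(relation):
        # Only the hard-coded constant above makes the self-symbol "really different".
        return lookup(SELF, relation)

    assert introspect("is_a") == lookup("Cyc", "is_a") == "knowledge_base"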
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
As I’ve attempted to clarify in the new comment, my problem is not with subsuming consciousness into physics per se, it is specifically with subsuming consciousness into a particular physical ontology, because that ontology does not contain something as basic as perceived color, either fundamentally or combinatorially. To consider that judgement credible, you must believe that there is an epistemic faculty whereby you can tell that color is actually there. Which leads me to your next remark--
--and so obviously I’m going to object to the assumption that I’m not aware of my qualia. If you performed the swap as described, I wouldn’t know that it had occurred, but I’d still know that red and green are there and are real; and I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.
However, my new response to your argument is that, if you’re not denying current physics, but just ontologically reorganizing it, then you’re vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We’re all in the same boat.
Do you think Cyc could not be programmed to treat itself differently from others without use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?
Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
No you wouldn’t. People can’t tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can’t have relations of reduction to other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I’m typing in is based on regularities the size of a transistor. I wouldn’t expect to notice if my images were, really, fundamentally, completely different. I wouldn’t expect to notice if something physical changed—the number of ions cut by a factor of a million, their charge reversed—so long as the functions from impulses to impulses computed by the neurons were the same.
Uniform color and edgeness are as different as night and day.
(part 1 of reply)
This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping—many exact physical states correspond to the same conscious state—then that’s property dualism.
When you say, later on, that your consciousness “is a computation based mainly or entirely on regularities the size of a single neuron or bigger”, that implies dualism or eliminativism, depending on whether you accept that qualia exist. Believe what I quoted, and that qualia exist, and you’re a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn’t really exist, even as appearance), and you’re an eliminativist. This is because a many-to-one mapping isn’t an identity.
“Degrees of existence”, by the way, only makes sense insofar as it really means “degrees of something else”. Existence, like truth, is absolute.
My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming. Because I prefer the monistic alternative to the dualistic one, and because the program Cyc is definitely “based on regularities the size of a transistor”, I would normally say that Cyc does not and cannot have thoughts, perceptions, beliefs, or other mental properties at all. All those things require consciousness, consciousness is only a property of a physical ontological unity, the computer running Cyc is a causal aggregate of many physical ontological unities, ergo it only has these mentalistic properties because of the imputations of its users, just as the words in a book only have their meanings by convention. When you introduced your original thought-experiment--
--maybe I should have gone right away to the question of whether these “perceptions” are actually perceptions, or whether they are just informational states with certain causal roles, and how this differs from true perception. My answer, by the way, is that being an informational state with a causal role is necessary but not sufficient for something to be a perceptual state. I would add that it also has to be “made of qualia” or “be a state of a physical ontological unity”—both these being turns of phrase which are a little imprecise, but which hopefully foreshadow the actual truth. It comes down to what ought to be a tautology: to actually be a perception of red, there has to be some red actually there. If there isn’t, you just have a simulation.
Just for completeness, I’ll say again that I prefer the monistic alternative, but it does seem to imply that consciousness is to be identified with something fundamental, like a set of quantum numbers, rather than something mesoscopic and semiclassical, like a coarse-grained charge distribution. If that isn’t how it works, the fallback position is an informational property dualism, and what I just wrote would need to be modified accordingly.
Back to your questions about Cyc. Rather than say all that, I countered your original thought-experiment with an anecdote about Douglas Lenat’s Cyc program. The anecdote (as conveyed, for example, in Eliezer’s old essay “GISAI”) is that, according to Lenat, Cyc knows about Cyc, but it doesn’t know that it is Cyc. But then Lenat went and said to Wired that Cyc is self-aware. So I don’t know the finer details of his philosophical position.
What I was trying to demonstrate was the indeterminate nature of machine experience, machine assertions about ontology as based upon experience, and so on. Computation is about behavior and about processes which produce behavior. Consciousness is indeed a process which produces behavior, but that doesn’t define what it is. However, the typical discussion of the supposed thoughts, beliefs, and perceptions of an artificial intelligence breezes right past this point. Specific computational states in the program get dubbed “thoughts”, “desires” and so on, on the basis of a loose structural isomorphism to the real thing, and then the discussion about what the AI feels or wants (and so on) proceeds from there. The loose basis on which these terms are used can easily lead to disagreements—it may even have led Lenat to disagree with himself.
In the absence of a rigorous theory of consciousness it may be impossible to have such discussions without some loose speculation. But my point is that if you take the existence of consciousness seriously, it renders very problematic a lot of the identifications which get made casually. The fact that there is no red in physical ontology (or current physical ontology); the fact that from a fundamental perspective these are many-to-one mappings, and a many-to-one mapping can’t be an identity—these facts are simple but they have major implications for theorizing about consciousness.
So, finally answering your questions: 1. yes, it could be programmed to treat itself as something special, and 2. sense data would surely be processed differently, but there’s a difference between implicit and explicit categorizations (see remarks about ontology, below). But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness. And my argument is that the usual position—a casual version of identity theory—is not tenable. Either it’s dualism, or it’s a monism made possible by exotic neurophysics.
(continued)
Since there’s a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
[this point has low relevance]
It seems like we can cash out the statement “It appears to X that Y” as a fact about an agent X that builds models of the world which have the property Y. It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence for the existence of qualia.
Degrees of existence come from what is almost certainly a harder philosophical problem about which I am very confused.
Facts about your phenomenology are facts about your programming! If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain. There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
A: “The universe is made out of nothing but love”
B: “What are the properties of ontologically fundamental love?”
A: “[The equations that define the standard model of quantum mechanics]”
B: “I have no evidence to falsify that theory.”
A: “Or balloons. It could be balloons.”
B: “What are the properties of ontologically fundamental balloons?”
A: “[the standard model of quantum theory expressed using different equations]”
B: “There is no evidence that can discriminate between those theories.”
I’m a reductive materialist for statements—I don’t see the problem with reading statements about consciousness as statements about quarks. Ontologically I suppose I’m an eliminative materialist.
The ontological status of temperature can be investigated by examining a simple ontology where it can be defined exactly, like an ideal gas in a box where the “atoms” interact only through perfectly elastic collisions. In such a situation, the momentum of an individual atom is an exact property with causal relevance. We can construct all sorts of exact composite properties by algebraically combining the momenta, e.g. “the square of the momentum of atom A minus the square root of the momentum of atom B”, which I’ll call property Z. But probably we don’t want to say that property Z exists, in the way that the momentum-property does. The facts about property Z are really just arithmetic facts, facts about the numbers which happen to be the momenta of atoms A and B, and the other numbers they give rise to when combined. Property Z isn’t playing a causal role in the physics, but the momentum property does.
Now, what about temperature? It has an exact definition: the average kinetic energy of an atom. But is it like “property” Z, or like the property of momentum? I think one has to say it’s like property Z—it is a quantitative construct without causal power. It is true that if we know the temperature, we can often make predictions about the gas. But this predictive power appears to arise from logical relations between constructed meta-properties, and not because “temperature” is a physical cause. It’s conceptually much closer than property Z to the level of real causes, but when you say that the temperature caused something, it’s ultimately always a shorthand for what really happened.
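(The contrast in a few lines of code, for a one-dimensional toy gas; “property Z” is the arbitrary construct named above, with an absolute value added so the square root is defined.)

    import numpy as np

    rng = np.random.default_rng(2)
    p = rng.normal(size=100)                  # exact momenta of the atoms

    z = p[0]**2 - np.sqrt(abs(p[1]))          # "property Z": an arithmetic construct
    T = np.mean(p**2) / 2.0                   # the temperature-style average (unit mass)

    # The dynamics never consults z or T: an elastic collision between equal
    # masses in one dimension simply exchanges the two momenta involved.
    p[0], p[1] = p[1], p[0]

    # z and T can be recomputed after the fact; they do no work in the update.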
When we apply all this to coarse-grained computational states, and their identification with mental states, I actually find myself making, not the argument that I intended (about many-to-one mappings), but another one, an argument against the validity of such an identification, even if it is conceived dualistically. It’s the familiar observation that the mental states become epiphenomenal and not actually causally responsible for anything. Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
So: if you were to insist that temperature is a fundamental physical cause and not just a shorthand for microphysical complexities, then you would not only be a dualist, you would be saying something in contradiction with the causal model of the world offered by physics. It would be a version of phlogiston theory.
As for the “one-to-one mapping between physical states of glasses of water and really long strings”—I assume those are symbol-strings, not super-strings? Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible. If you’re saying that a physical glass of water really is a string of symbols, you’d be bringing up a whole other class of ontological mistakes that we haven’t touched on so far, but which is increasingly endemic in computer-science metaphysics, namely the attempt to treat signs and symbols as ontologically fundamental.
I actually disagree with this, but thanks for highlighting the idea. The proposed reduction of “appearance” to “modeling” is one of the most common ways in which consciousness is reduced to computation. As a symptom of ontological error, it really deserves a diagnosis more precise than I can provide. But essentially, in such an interpretation, the ontological problem of appearance is just being ignored or thrown out, and all attention directed towards a functionally defined notion of representation; and then this throwing-out of the problem is passed off as an account of what appearance is.
Every appearance has an existence. It’s one of the intriguing pseudo-paradoxes of consciousness that you can see something which isn’t there. That ought to be a contradiction, but what it really means is that there is an appearance in your consciousness which does not correspond to something existing outside of your consciousness. Appearances do exist even when what they indicate does not exist. This is the proof (if such were needed) that appearances do exist. And there is no account of their existential character in a discourse which just talks about an agent’s modeling of the world.
You are just sabotaging your own ability to think about consciousness, by inventing reasons to ignore appearances.
No…
Those are facts about my ability to communicate my phenomenology.
What’s more interesting to think about is the nature of reflective self-awareness. If I’m able to say that I’m seeing red, it’s only because, a few steps back, I’m able to “see” that I’m seeing red; there’s reflective awareness within consciousness of consciousness. There’s a causal structure there, but there’s also a non-causal ontological structure, some form of intentionality. It’s this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
Once again, appearance is being neglected in this passage, this time in favor of belief. To admit that something appears is necessarily to give it some kind of existential status.
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition. But in any case, love also has a subjective appearance, which is different to the subjective appearance of hate, and this is why the experience of hate can falsify the theory that only love exists.
Intentionality, qualia, and the unity of consciousness: none of those things exist in the world of quarks as point particles in space.
The opposite sort of error to religion. In religion, you believe in something that doesn’t exist. Here, you don’t believe in something that does exist.
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it’s very unclear, and for most purposes irrelevant, which is the real one.
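For instance (a toy sketch of my own, nothing more): a particle’s state can be stored as (mass, velocity) or as (mass, momentum), and an invertible function converts between the two encodings, so nothing in the data itself privileges either one.

    def to_momentum_coords(m, v):
        # (mass, velocity) -> (mass, momentum); invertible because m > 0
        return (m, m * v)

    def to_velocity_coords(m, p):
        # (mass, momentum) -> (mass, velocity); the inverse map
        return (m, p / m)

    state = (2.0, 3.5)  # mass, velocity
    assert to_velocity_coords(*to_momentum_coords(*state)) == state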
So when you say that X is/isn’t ontologically fundamental, you aren’t doing so on the basis of evidence.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of “everything else constant” wrt mental states, we’re done. We certainly can construct one wrt temperature (linearly scale the velocities.)
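To make “linearly scale the velocities” concrete, here is a toy sketch (mine, and only valid for an ideal monatomic gas, where temperature is proportional to the mean squared speed): setting the temperature to t_new while holding positions and masses fixed amounts to multiplying every velocity by sqrt(t_new / t_old).

    import math

    def set_temperature(velocities, t_old, t_new):
        # Counterfactual intervention: rescale every particle velocity so
        # the mean kinetic energy corresponds to t_new, holding everything
        # else constant. T scales with <v^2>, hence the square root.
        factor = math.sqrt(t_new / t_old)
        return [v * factor for v in velocities]

    # Doubling the temperature of a three-particle toy gas:
    print(set_temperature([1.0, -2.0, 0.5], t_old=300.0, t_new=600.0))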
What are the other conditions?
Love is a fact about complex arrangements of quarks.
Your ability to communicate your phenomenology traces back along a clear causal path through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated.
Non-causal ontological structure is suspicious.
but it’s not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
(part 2)
I’ll quote myself: “The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.”
Earlier in this comment, I gave a very vague sketch of a quantum Cartesian theater which interacts with neighboring quantum systems in the brain, at the apex of the causal chains making up the sensorimotor pathways. The fact that we can talk about all this can be explained in that way.
The root of this disagreement is your statement that “Facts about your phenomenology are facts about your programming”. Perhaps you’re used to identifying phenomenology with talk about appearances, but it refers originally to the appearances themselves. My phenomenology is what I experience, not just what I say about it. It’s not even just what I think about it; it’s clear that the thought “I am seeing red” arises in response to a redness that exists before and apart from the thought.
This doesn’t mean ontological structure that has no causal relations; it means ontological structure that isn’t made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it’s going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It’s a spatial structure, not a causal structure.
Could you revisit this point in the light of what I’ve now said? What sort of disconnection are you talking about?
Let’s revisit what this branch of the conversation was about.
I was arguing that it’s possible to make judgements about the truth of a proposed ontology, just on the basis of a description. I had in mind the judgement that there’s no redness in a world of colorless particles in space; reaching that conclusion should not be a problem. But, since you were insisting that “people can’t tell the difference between ontologies”, I tried to pull out a truly absurd example (though one that occasionally gets lip service from mystically minded people): that only love exists. I would have thought that a moment’s inspection of the world, or of one’s memories of the world, would show that there are things other than love in existence, even if you adopt total Cartesian skepticism about anything beyond immediate experience.
Your riposte was to imagine an advocate of the all-is-love theory who, when asked to provide the details, says “quantum mechanics”. I said it’s rather hard to interpret QM that way, and you pointed out that I’m trying to get experience from QM. That’s clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience. My actual thesis is that conscious experience is the state of some particular type of quantum system, so the emotions do have to be in the theory somewhere. But I don’t think you can even reduce the other emotions to the emotion of love, let alone the non-emotional aspects of the mind, so the whole thing is just silly.
Then you had your advocate go on to speak in favor of the all-is-balloons theory, again with QM providing the details. I think you radically overestimate the freedom one has to interpret a mathematical formalism and still remain plausible or even coherent.
What we say using natural language is not just an irrelevant, interchangeable accessory to what we say using equations. Concepts can still have a meaning even if it’s only expressed informally, and one of the underappreciated errors of 20th-century thought is the belief that formalism validates everything: that you can say anything about a topic and it’s valid to do so, if you’re saying it with a formalism. A very minor example is the idea of a “noncommutative probability”. In quantum theory, we have complex numbers, called probability amplitudes, which appear as an intermediate stage prior to the calculation of numbers that are probabilities in the legitimate sense—lying between 0 and 1, expressing relative frequency of an outcome. There is a formalism of this classical notion of probability, due to Kolmogorov. You can generalize that formalism, so that it is about probability amplitudes, and some people call that a theory of “noncommutative probability”. But it’s not actually a theory of probability any more. A “noncommutative probability” is not a probability; that’s why probability amplitudes are so vexatious to interpret. The designation, “noncommutative probability”, sweeps the problem under the carpet. It tells us that these mysterious non-probabilities are not mysterious; they are probabilities—just … different. There can be a fine line between “thinking like reality” and fooling yourself into thinking that you understand.
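The standard two-path example compresses the problem into one line. If an outcome can be reached along two paths with amplitudes a1 and a2, its probability is

    |a_1 + a_2|^2 = |a_1|^2 + |a_2|^2 + 2\,\operatorname{Re}(a_1^* a_2)

and the interference term can be negative, so the two paths’ contributions do not add the way probabilities of disjoint alternatives must.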
All that’s a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
So divide the particle velocities by temperature or whatever.
How do you tell what’s redundant complexity and what’s ontologically fundamental? The position or momentum representation of quantum mechanics, for instance?
What bothers me about your viewpoint is that you are solving the problem that, in your view, some things are epiphenomenal by making an epiphenomenal declaration—the statement that they are not epiphenomenal, but rather, fundamental.
Is there anything about your or anyone else’s actions that provides evidence for this hypothesis?
“Genuine” causal relations are a much weaker requirement than “ontologically fundamental” relations.
Do only pure qualia really exist? Do beliefs, desires, etc. also exist?
You can map a set of three quantum states onto a set of {red, green, blue}.
No, it means ontological structure—not structures of things, but the structure of a thing’s ontology—that doesn’t say anything about the things themselves, just about their ontology.
A logical/probabilistic one. There is no evidence for a correlation between the statements “These beings have large-scale quantum entanglement” and “These beings think and talk about consciousness”.
You would have to be saying that for your position to be exactly the same as your character’s. You’re contrasting two views here. One thinks the world is made up of nothing but STUFF, which follows the laws of quantum mechanics. The other thinks the world is made up of nothing but STUFF and EXPERIENCES. If you show them a quantum state, and tell the first guy “the stuff is in this arrangement” and the second guy “the stuff is in this arrangement, and the experiences are in that arrangement”, they agree exactly on what happens, except that the second guy thinks that some of the things that happen are not stuff, but experiences.
That doesn’t seem at all suspicious to you?
You are correct. “balloons” refers to balloons, not to quarks.
I guess what’s going on is that the guy is saying that’s what he believes balloons are.
But thinking about the meaning of words is clarifying.
It seems like the question is almost—“Is ‘experience’ a word like phlogiston or a word like elephant?”
More or less, whatever has been causing us to see all those elephants gets to be called an elephant. Elephants are reductionism-compatible. There are some extreme circumstances—the images of elephants I have seen are fabrications, the people who claim to have seen elephants are lying to me—that break this rule. Phlogiston, on the other hand, is a word we give up on much more readily. Heat is particles bouncing around, but the absence of oxygen is not phlogiston—it’s just the absence of oxygen.
You believe that “experience” is fundamentally incompatible with reduction. An experience, to exist at all, must be an ontologically fundamental experience. Thus saying “I see red” makes two claims—one, that the brain is in a certain class of its possible total configuration states, those in which the person is seeing red, and two, that the experience of seeing red is ontologically fundamental.
I see no way to ever get the physical event of people claiming that they experience color correlated with the ontological fundamentality of their color, whereas we can investigate the phlogiston hypothesis and stop using the word precisely when it turns out to be a bad model.
What is a claim when it’s not correlated with its subject? The whole point of the words within it has been irrevocably lost. It is pure speculation.
I really, really don’t think that, when I say I see red, I’m just speculating.
It’s almost a month since we started this discussion, and it’s a bit of a struggle to remember what’s important and what’s incidental. So first, a back-to-basics statement from me.
Colors do exist, appearances do exist; that’s nonnegotiable. That they do not exist in an ontology of “nothing but particles in space” is also, fundamentally, nonnegotiable. I will engage in debates as to whether this is so, but only because people are so amazingly reluctant to see it, and to see the implication that their favorite materialistic theories of mind actually involve property dualism, in which color (for example) is tied to a particular structure or behavior of particles in the brain but can’t be identified with it.
We aren’t like the ancient atomists, who only had an informal concept of the world as atoms in a void; we have mathematical theories of physics. So a logical further question is whether these mathematical theories can be interpreted so that some of the entities they posit can be identified with color, with “experiences”, and so on.
Here I’d say there are two further important facts. First, an experience is a whole and has to be tackled as a whole. Patches of color are just a part of a multi-sensory whole, which in turn is just the sensory aspect of an experience which also has a conceptual element, temporal flow, a cognitive frame locating current events in a larger context, and so on. Any fundamental theory of reality which purports to include consciousness has to include this whole; it can’t just talk about atomized sensory qualia.
Second, any theory which says that the elementary degrees of freedom in a conscious state correspond to averaged collective physical degrees of freedom will have to involve property dualism. That’s because it’s a many-to-one mapping (from physical states to conscious states), and a many-to-one mapping can’t be an identity.
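Spelled out: if two distinct physical states p1 and p2 both realize the same conscious state c, then identifying c with each of them would give

    p_1 = c = p_2

contradicting p1 ≠ p2. So the mapping can support correlation, or property attribution, but not identity.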
All that is the starting point for my line of thought, which is an attempt to avoid property dualism. I want to have something in my mathematical theory of reality which simply is the bearer of conscious states, has the properties and structure of a conscious whole, and is appropriately located in the causal chain. Since the mathematics describing a configuration of particles in space seems very unpromising for such a reinterpretation, and since our physics is quantum mechanics anyway, whose formalism contains entangled wavefunctions that can’t be factorized into localized wavefunctions, it’s quite natural to look for these conscious wholes in some form of QM in which entanglement is ontological. However, since consciousness is in the brain and causally relevant, this implies that there must be a functionally relevant brain subsystem that is in a quantum coherent state.
That is the argument which leads me from “consciousness is real” to “there’s large-scale quantum entanglement in the brain”. Given the physics we have, it’s the only way I see to avoid property dualism, and it’s still just a starting point, on every level: mathematically, ontologically, and of course neurobiologically. But that is the argument you should be scrutinizing. What’s at stake in some of our specific exchanges may be a little obscure, so I wanted to set down the main argument in one piece, in one place, so you could see what you’re dealing with.
I will lay down the main argument convincing me that you’re incorrect.
Consider the three statements:
1. “there’s large-scale quantum entanglement in the brain”
2. “consciousness is real”
3. “Mitchell Porter says that consciousness is real.”
Your inference requires that 1 and 2 are correlated. It is non-negotiable that 2 and 3 are correlated. There is no special connection between 1 and 3 that would make them uncorrelated.
However, 1 and 3 are both clearly-defined physical statements, and there is no physical mechanism for their correlation, so we conclude that they are uncorrelated. We conclude, then, that 1 and 2 are uncorrelated as well.
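One way to spell out that last step (a sketch, under the simplifying assumption that 2 and 3 are perfectly correlated): if the indicator variables of statements 2 and 3 are equal with probability 1, then

    \operatorname{Cov}(1,2) = \operatorname{Cov}(1,3)

and since the right-hand side is zero by the argument above, 1 and 2 are uncorrelated too.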
(part 1)
Temperature is an average. All individual information about the particles is lost, so you can’t invert the mapping from exact microphysical state to thermodynamic state.
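For a monatomic ideal gas of N particles, for instance,

    T = \frac{2}{3 N k_B} \sum_{i=1}^{N} \tfrac{1}{2} m_i \lVert \mathbf{v}_i \rVert^2

and any permutation of the velocities, among endlessly many other changes, leaves T unchanged, so the exact microstate cannot be recovered from it.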
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
Your model of physics has to have some microscopic or elementary non-counterfactual notion of causation for you to use it to calculate these complex macroscopic counterfactuals. Of course in the real world we have quantum mechanics, not the classical ideal gas we were discussing, and your notion of elementary causality in quantum mechanics will depend on your interpretation.
But I do insist there’s a difference between an elementary, fundamental, microscopic causal relation and a complicated, fuzzy, macroscopic one. A fundamental causal connection, like the dependence of the infinitesimal time evolution of one basic field on the states of other basic fields, is the real thing. As with “existence”, it can be hard to say what “causation” is. But whatever it is, and whether or not we can say something informative about its ontological character, if you’re using a physical ontology, such fundamental causal relations are the place in your ontology where causality enters the picture and where it is directly instantiated.
Then we have composite causalities—dependencies among macroscopic circumstances, which follow logically from the fundamental causal model, and whose physical realization consists of a long chain of elementary causal connections. Elementary and composite causality do have something in common: in both cases, an initial condition A leads to a final condition B. But there is a difference, and we need some way to talk about it—the difference between the elementary situation, where A leads directly to B, and the composite situation, where A “causes” B because A leads directly to A’ which leads directly to A″ … and eventually this chain terminates in B.
Also—and this is germane to the earlier discussion about fuzzy properties and macroscopic states—in composite causality, A and B may be highly approximate descriptions; classes of states rather than individual states. Here it’s even clearer that the relation between A and B is more a highly mediated logical implication than it is a matter of A causing B in the sense of “particle encounters force field causes change in particle’s motion”.
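One possible shorthand for the distinction (my notation, nothing standard): write A → B for an elementary causal step, and define composite causation as the chaining of such steps,

    A \Rightarrow B \;:\Leftrightarrow\; \exists\, A_1, \ldots, A_n \;\; A \to A_1 \to \cdots \to A_n \to B

with the macroscopic case further coarsened so that A and B range over classes of states rather than individual states.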
How does this pertain to consciousness? The standard neuro-materialist view of a mental state is that it’s an aggregate of computational states in neurons, these computational states being, from a physical perspective, less than a sketch of the physical reality. The microscopic detail doesn’t matter; all that matters is some gross property, like trans-membrane electrical potential, or something at an even higher level of physical organization.
I think I’ve argued two things so far. First, qualia and other features of consciousness aren’t there in the physical ontology, so that’s a problem. Second, a many-to-one mapping is not an identity relation, it’s more suited to property dualism, so that’s also a problem.
Now I’d add that the derived nature of macroscopic “causes” is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes. And as with the first two problems, this third problem can potentially be cured in a theory of mind where consciousness resides in a structure made of ontologically fundamental properties and relations, rather than fuzzy, derived, approximate ones. This is because it’s the fundamental properties which enter into the fundamental causal relations of a reductionist ontology.
In philosophy of mind, there’s a “homunculus fallacy”, where you explain (for example) the experience of seeing as due to a “homunculus” (“little human”) in your brain, which is watching the sensory input from your eyes. This is held to be a fallacy that explains nothing and risks infinite regress. But something like this must actually be true; seeing is definitely real, and what you see directly is in your skull, even if it does resemble the world outside. So I posit the existence of what Dennett calls a “Cartesian theater”, a place where the seeing actually happens and where consciousness is located; it’s the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a “quantum system”, not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
That’s way too hard, so I’ll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn’t let you deduce that a dog is a donkey.
(part 2 of reply)
See next section.
We are talking at cross-purposes here. I am talking about an ontology which is presented explicitly to my conscious understanding. You seem to be talking about ontologies at the level of code—whatever that corresponds to, in a human being.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I’ve made a judgement about an ontology both at a logical and an empirical level. That’s what I was talking about, when I said that if you swapped red and green, I couldn’t detect the swap, but I’d still know empirically that color is real, and I’d still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
Your sentence about gensyms is interesting as a proposition about the computational side of consciousness, but…
… if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
They are, but I was actually talking about the difference between colorness/edgeness and neuronness.
A few thoughts in response:
I agree with you that if my experience of red can’t be constructed of matter, then my understanding of a sentence also can’t be. And I agree with you that we don’t have a reliable account of how to construct such things out of matter, and without such an account we can’t rule out the possibility that, as you suggest, such an account is simply not possible. I agree with you that this objection to physicalism has been around for a long time.
I agree with you that insofar as we understand vitalism to be an account of how particular arrangements of matter move around, it is a different sort of thing from the kind of “sentientism” you are talking about. That said, I think that’s a misrepresentation of historical vitalism; I think when the vitalists talked about elan vital being the difference between living and unliving matter, they were also attributing sentience (though not sapience) to elan vital, as well as simple animation.
I don’t equate the experience of red with the tendency to output the word “red” when queried, both in the sense that it’s easy for me to imagine being unable to generate that output while continuing to experience red, and in the sense that it’s easy for me to imagine a system that outputs the word “red” when queried without having an experience of red. Lexicalization is neither necessary nor sufficient for experience.
I don’t equate the experience of red with categorization… it is easy to imagine categorization without experience. It’s harder to imagine experience without categorization, though. Categorization might be necessary, but it certainly isn’t sufficient, for experience.
Like you, I can’t come up with a physical account of sentience. I have little faith in the power of my imagination, though. Put another way: it isn’t easy for me to see what one can and can’t make out of particles. But I agree with you that any such account would be surprising, and that there is a phenomenon there to explain. So I think I fall somewhere in between your two classes of people who are a waste of time to talk to: I get that there’s a problem, but it isn’t obvious to me that the properties that comprise what it feels like to be a bat must be ontologically basic and nonphysical. Which I think still means I’m wasting your time. (I did warn you in the grandparent comment that you won’t find my answer interesting.)
If it turns out that a particular sensation is perfectly correlated with the presence of a particular physical structure, and that disrupting that structure always triggers a disruption of the sensation, and that disrupting the sensation always triggers a disruption of the structure… well, at that point, I’m pretty reluctant to posit a nonphysical sensation. Sure, it might be there, but if I posit it I need to account for why the sensation is so tightly synchronized with the physical structure, and it’s not at all clear that that task is any simpler than identifying one with the other, counterintuitive as that may be.
At the other extreme, if the nonphysical structure makes a difference, demonstrating that difference would make me inclined to posit a nonphysical sensation. For example, if we can transmit sensation without transmitting any physical signal, I’d be strongly inclined to posit a nonphysical structure underlying the sensation. Looking for such a demonstrable difference might be a useful way to start getting somewhere.
Perhaps we are closer to mutual understanding than might have been imagined, then. A crucial point: I wouldn’t talk about the mind as something “nonphysical”. That’s why I said that the problem is with our current physical ontology. The problem is not that we have a model of the world in which events outside our heads are causally connected to events inside our heads via a chain of intermediate events. The problem is that when we try to interpret physics ontologically (and not just operationally), the available frameworks are too sparse and pallid (those are metaphors of course) to produce anything like actual moment-to-moment experience. The dance of particles can produce something isomorphic to sensation and thought, but not identical. Therefore, what we might think of as a dance of particles actually needs to be thought of in some other way.
So I’m actually very close in spirit to the reductionist who wants to think of their experience in terms of neurons firing and so forth, except I say it’s got to be the other way around. Taken literally, that would mean that we need to learn to think of what we now call neurons firing as being fundamentally—this—moment-to-moment experience, as is happening to you right now. Except that I don’t believe the physical nature of whole neurons plausibly allows such an ontological reinterpretation. If consciousness really is based on mesoscopic-level informational states in neurons, then I’d favor property dualism rather than the reverse monism I just advocated. But I’m going for the existence of a Cartesian theater somewhere in the brain whose physical implementation is based on exact quantum states rather than collective coarse-grained classical ones, quantum states which in our current understanding would look more algebraic than geometric. And the succession of abstract algebraic state transitions in that Cartesian theater is the deracinated mathematical description of what, in reality, is the flow of conscious experience.
If that is the true interior reality of one quantum island in the causal network of the world, it might be anticipated that every little causal nexus has its own inside too—its own subjectivity. The non-geometric, localized, algebraic side of physics would turn out to actually be a description of the local succession of conscious states, and the spatial, geometric aspect of physics would in fact describe the external causal interactions between these islands of consciousness. Except I suspect that the term consciousness is best reserved for a very rare and highly involuted type of state, and that most things count as islands of “being” but not as islands of “experiencing” (at least, not as islands of reflective experiencing).
I should also distinguish this philosophy from the sort which sees mind wherever there is distributed computation—so that the hierarchical structure of classical interaction in the world gets interpreted as a set of minds made of minds made of minds. I would say that the ontological glue of individual consciousness is not causal interaction—it’s something much tighter. The dependence of elements of a state of consciousness on the whole state of consciousness is more like the way that the face of a cube is part of the cube, though even that analogy is nowhere near strong enough, because the face of a cube is a square and a square can have independent existence, though when it’s independent it’s no longer a face. However we end up expressing it, the world is fundamentally made of these logical ontological unities, most of which are very simple and correspond to something like particles, and a few of which have become highly complex—with waking states of consciousness being extremely complex examples of these—and all of these entities interact causally and quasi-locally. These interactions bind them into systems and into systems of systems, but systems themselves are not conscious, because ontologically they are multiplicities, and consciousness is always a property of one of those fundamental physical unities whose binding principle is more than just causal association.
An ontology of physics like that is one where the problem of consciousness might be solved in a nondualistic way. But its viability does seem to require that something like quantum entanglement is found to be relevant to conscious cognition. As I said, if that isn’t borne out, I’ll probably fall back on some form of property dualism, in which there’s a many-to-one mapping between big physical states (like ion concentrations on opposite sides of axonal membranes) and distinct possible states of consciousness. But physical neuroscience has quite a way to go yet, so I’m very far from giving up on the monistic quantum theory of mind.
So, getting back to my original question about what your alternate ontology has to offer…
If I’m understanding you (which is far from clear), while you are mostly concerned with being ontologically correct rather than operationally useful, you do make a falsifiable neurobiological prediction having something to do with quantum entanglement, though I didn’t follow the details.
Cool. I approve of falsifiable predictions; they are a useful thing that a way of thinking about the world can offer.
Anything else?
I think you ought to be more interested in what this shows about the severity of the problem of consciousness. See my remarks to William Sawin, about color and about many-to-one mappings, and how they lead to a choice between this peculiar quantum monism (which is indeed difficult to understand at first encounter), and property dualism. While I like my own ideas (about quantum monads and so forth), the difficulties associated with the usual approaches to consciousness matter in their own right.
(nods) I understand that you do; I have from the beginning of this exchange been trying to move forward from that bald assertion into a clarification of why I ought to be… that is, what benefits there are to be gained from channeling my interest as you recommend.
Put another way: let us suppose you’re right that there are aspects of consciousness (e.g., subjective experience/qualia) that cannot be adequately explained by mainstream ontology.
Suppose further that tomorrow we encounter an entity (an isolated group of geniuses working productively on the problem, or an alien civilization with a different ontological tradition, or spirit beings from another dimension, or Omega, or whatever) that has worked out an ontology that does adequately explain it, using quantum monads or something else, to roughly the same level of refinement and practical implementation that we have worked out our own.
What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience?
Or, to ask the question a different way: suppose we encounter an entity that claims to have worked out such an ontology, but won’t show it to us. What properties ought we look for in that entity that provide evidence that their claim is legitimate?
The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements. (I may have misunderstood that, in which case I would appreciate clarification.) So I should not expect them to have a superior understanding of behavior that would manifest in various detectable ways. Nor should I expect them to have a superior understanding of physics.
I’m not really sure what I should expect them to have a superior understanding of, though, or what capabilities I should expect such an understanding to entail. Surely there ought to be something, if this branch of knowledge is, as you claim, worth pursuing.
Thus far, I’ve gotten that they ought to be able to make predictions about neurobiological structures that relate to certain kinds of quantum structures. I’m wondering what else.
Because if it’s just about being right about ontology for the sake of being right about ontology when it entails no consequences, then I simply disagree with you that I ought to be more interested.
I don’t consider this inability to be merely posited. It’s a matter of understanding what you can and can’t do with the ontological ingredients provided. You have particles, you have non-positional properties of individual particles, you have the motions of particles, you have changes in the non-positional properties. You have causal relations. You have sets of these entities; you have causal chains built from them; you have higher-order quantitative and logical facts deriving from the elementary facts about configuration and causal relationships. That’s basically all you have to work with. An ontology of fields, dynamical geometry, and probabilities adds a few twists to this picture, but nothing that changes it fundamentally. So I’m saying there is nothing in this ontology, either fundamental or composite (in a broad sense of composite), which can be identified with—not just correlated with, but identified with—consciousness and its elements. And color offers the clearest and bluntest proof of this.
We can keep going over this fact from different angles, but eventually it comes down to seeing that one thing is indeed different from another. 1 is not 0; redness is not any specific thing that can be found in the ontology of particles. It reduces to pairwise comparative judgments in which ontologically dissimilar basic entities are perceived to indeed be ontologically dissimilar.
What are we trying to explain, ultimately? What even gives us something to be explained? It’s conscious experience again; the appearance of a world. Our physical theories describe the behavior of a world which is structurally similar to the world of appearance, but which does not have all its properties. We are happy to say that the world of appearance is just causally connected, in a regularity-preserving way, to an external world, and that these problem properties only exist in the “world of appearance”. That might permit us to regard the “external world” as explained by our physics. But then we have this thing, “world of appearance”, where all the problems remain, and which we are nonetheless trying to assimilate to physics (via neuroscience). However, we know (if we care to think things through), that this assimilation is not possible with the current physical ontology.
So the claim that we can describe the behavior of things is not quite as powerful as it seems, because it turns out that the things we are describing can’t actually be the “things” of direct experience, the appearances themselves. We can get isomorphism here, but not identity. It’s an ontological problem: the things of physical theory need to be reconceived so that some of them can be identified with the things of consciousness, the appearances.
I understand that you aren’t “merely” positing the inability of a set of particles, positions and energy-states to be an experience.
I am.
I also understand that you consider this a foolish insistence on my part on rejecting the obvious facts of experience. As I’ve said several times now, repeatedly belaboring that point isn’t going to progress this discussion further.