Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
Let’s try to communicate through intuition pumps:
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses—they had to be, in addition, the colors of pixels.
Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap red and green in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn’t be able to tell the difference—your behavior would be the same either way.
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
No-one’s telling me that a heap of sand has an “inside”. It’s a fuzzy concept and the fuzziness doesn’t cause any problems because it’s just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren’t it, so in a physical ontology it has to correspond to a hard-edged concept.
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses—they had to be, in addition, the colors of pixels.
Consider Cyc. Isn’t one of the problems of Cyc that it can’t distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
As I’ve attempted to clarify in the new comment, my problem is not with subsuming consciousness into physics per se, it is specifically with subsuming consciousness into a particular physical ontology, because that ontology does not contain something as basic as perceived color, either fundamentally or combinatorially. To consider that judgement credible, you must believe that there is an epistemic faculty whereby you can tell that color is actually there. Which leads me to your next remark--
Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap red and green in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn’t be able to tell the difference—your behavior would be the same either way.
--and so obviously I’m going to object to the assumption that I’m not aware of my qualia. If you performed the swap as described, I wouldn’t know that it had occurred, but I’d still know that red and green are there and are real; and I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
Doesn’t that image look exactly like what neurons detecting edges between neurons detecting white and neurons detecting white should look like?
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
Doesn’t the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
No-one’s telling me that a heap of sand has an “inside”. It’s a fuzzy concept and the fuzziness doesn’t cause any problems because it’s just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren’t it, so in a physical ontology it has to correspond to a hard-edged concept.
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.
However, my new response to your argument is that, if you’re not denying current physics, but just ontologically reorganizing it, then you’re vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We’re all in the same boat.
Consider Cyc. Isn’t one of the problems of Cyc that it can’t distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
Do you think Cyc could not be programmed to treat itself differently from others without the use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?
Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
No you wouldn’t. People can’t tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can’t have relations of reduction to other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
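To make the computational picture gestured at here a bit more concrete, here is a minimal, purely illustrative sketch (the symbol names and the reduction table are invented for the example; nothing here is a claim about how brains are actually organized):

    # A toy cognitive system: most of its symbols carry "reduces to" relations
    # to other symbols; one special class of gensyms carries none.
    reduces_to = {
        "water": ["H2O"],
        "H2O": ["hydrogen", "oxygen"],
        "temperature": ["mean kinetic energy"],
        "RED-QUALE-0x1": [],    # gensym: no reduction relations recorded
        "GREEN-QUALE-0x2": [],  # gensym: no reduction relations recorded
    }

    def seems_irreducible(symbol):
        # The system will report "this cannot consist of anything else"
        # exactly when its reduction table is empty for that symbol.
        return reduces_to.get(symbol, []) == []

    for s in reduces_to:
        if seems_irreducible(s):
            print(s, "presents itself as not being made of anything else")

On this picture, the system’s insistence that its qualia are irreducible is a fact about the shape of its symbol table, not a fact about its ontology.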
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I’m typing in is based on regularities the size of a transistor. I wouldn’t expect to notice if my images were, really, fundamentally, completely different. I wouldn’t expect to notice if something physical happened—say, the number of ions was cut by a factor of a million and given the opposite charge, but the functions from impulses to impulses computed by neurons were the same.
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
Uniform color and edgeness are as different as night and day.
No-one’s telling me that a heap of sand has an “inside”. It’s a fuzzy concept and the fuzziness doesn’t cause any problems because it’s just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren’t it, so in a physical ontology it has to correspond to a hard-edged concept.
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.
However, my new response to your argument is that, if you’re not denying current physics, but just ontologically reorganizing it, then you’re vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We’re all in the same boat.
This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping—many exact physical states correspond to the same conscious state—then that’s property dualism.
When you say, later on, that your consciousness “is a computation based mainly or entirely on regularities the size of a single neuron or bigger”, that implies dualism or eliminativism, depending on whether you accept that qualia exist. Believe what I quoted, and that qualia exist, and you’re a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn’t really exist, even as appearance), and you’re an eliminativist. This is because a many-to-one mapping isn’t an identity.
“Degrees of existence”, by the way, only makes sense insofar as it really means “degrees of something else”. Existence, like truth, is absolute.
Consider Cyc. Isn’t one of the problems of Cyc that it can’t distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
Do you think Cyc could not be programmed to treat itself differently from others without the use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?
Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.
My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming. Because I prefer the monistic alternative to the dualistic one, and because the program Cyc is definitely “based on regularities the size of a transistor”, I would normally say that Cyc does not and cannot have thoughts, perceptions, beliefs, or other mental properties at all. All those things require consciousness, consciousness is only a property of a physical ontological unity, the computer running Cyc is a causal aggregate of many physical ontological unities, ergo it only has these mentalistic properties because of the imputations of its users, just as the words in a book only have their meanings by convention. When you introduced your original thought-experiment--
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses—they had to be, in addition, the colors of pixels.
--maybe I should have gone right away to the question of whether these “perceptions” are actually perceptions, or whether they are just informational states with certain causal roles, and how this differs from true perception. My answer, by the way, is that being an informational state with a causal role is necessary but not sufficient for something to be a perceptual state. I would add that it also has to be “made of qualia” or “be a state of a physical ontological unity”—both these being turns of phrase which are a little imprecise, but which hopefully foreshadow the actual truth. It comes down to what ought to be a tautology: to actually be a perception of red, there has to be some red actually there. If there isn’t, you just have a simulation.
Just for completeness, I’ll say again that I prefer the monistic alternative, but it does seem to imply that consciousness is to be identified with something fundamental, like a set of quantum numbers, rather than something mesoscopic and semiclassical, like a coarse-grained charge distribution. If that isn’t how it works, the fallback position is an informational property dualism, and what I just wrote would need to be modified accordingly.
Back to your questions about Cyc. Rather than say all that, I countered your original thought-experiment with an anecdote about Douglas Lenat’s Cyc program. The anecdote (as conveyed, for example, in Eliezer’s old essay “GISAI”) is that, according to Lenat, Cyc knows about Cyc, but it doesn’t know that it is Cyc. But then Lenat went and said to Wired that Cyc is self-aware. So I don’t know the finer details of his philosophical position.
What I was trying to demonstrate was the indeterminate nature of machine experience, machine assertions about ontology as based upon experience, and so on. Computation is about behavior and about processes which produce behavior. Consciousness is indeed a process which produces behavior, but that doesn’t define what it is. However, the typical discussion of the supposed thoughts, beliefs, and perceptions of an artificial intelligence breezes right past this point. Specific computational states in the program get dubbed “thoughts”, “desires” and so on, on the basis of a loose structural isomorphism to the real thing, and then the discussion about what the AI feels or wants (and so on) proceeds from there. The loose basis on which these terms are used can easily lead to disagreements—it may even have led Lenat to disagree with himself.
In the absence of a rigorous theory of consciousness it may be impossible to have such discussions without some loose speculation. But my point is that if you take the existence of consciousness seriously, it renders very problematic a lot of the identifications which get made casually. The fact that there is no red in physical ontology (or current physical ontology); the fact that from a fundamental perspective these are many-to-one mappings, and a many-to-one mapping can’t be an identity—these facts are simple but they have major implications for theorizing about consciousness.
So, finally answering your questions: 1. yes, it could be programmed to treat itself as something special, and 2. sense data would surely be processed differently, but there’s a difference between implicit and explicit categorizations (see remarks about ontology, below). But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness. And my argument is that the usual position—a casual version of identity theory—is not tenable. Either it’s dualism, or it’s a monism made possible by exotic neurophysics.
This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping—many exact physical states correspond to the same conscious state—then that’s property dualism.
Since there’s a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
[this point has low relevance]
Believe what I quoted, and that qualia exist, and you’re a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn’t really exist, even as appearance), and you’re an eliminativist.
It seems like we can cash out the statement “It appears to X that Y” as a fact about an agent X that builds models of the world which have the property Y. It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence of the existence of qualia.
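A minimal sketch of the proposed cashing-out (the class and function names are invented for illustration; whether this reduction captures appearance at all is exactly what is disputed further down):

    class Agent:
        """An agent identified with the world-models it builds."""
        def __init__(self):
            self.world_model = set()   # propositions its model represents as true

        def perceive(self, proposition):
            self.world_model.add(proposition)

    def appears_to(agent, proposition):
        # "It appears to X that Y" is cashed out as: X builds models with property Y.
        return proposition in agent.world_model

    brain = Agent()
    brain.perceive("qualia exist")
    print(appears_to(brain, "qualia exist"))  # True, whether or not qualia exist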
“Degrees of existence”, by the way, only makes sense insofar as it really means “degrees of something else”. Existence, like truth, is absolute.
Degrees of existence come from what is almost certainly a harder philosophical problem about which I am very confused.
My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming.
Facts about your phenomenology are facts about your programming! If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain. There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I’ve made a judgement about an ontology both at a logical and an empirical level. That’s what I was talking about, when I said that if you swapped red and green, I couldn’t detect the swap, but I’d still know empirically that color is real, and I’d still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
A: “The universe is made out of nothing but love”
B: “What are the properties of ontologically fundamental love?”
A: “[The equations that define the standard model of quantum mechanics]”
B: “I have no evidence to falsify that theory.”
A: “Or balloons. It could be balloons.”
B: “What are the properties of ontologically fundamental balloons?”
A: “[the standard model of quantum theory expressed using different equations]”
B: “There is no evidence that can discriminate between those theories.”
… if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
I’m a reductive materialist for statements—I don’t see the problem with reading statements about consciousness as statements about quarks. Ontologically I suppose I’m an eliminative materialist.
Since there’s a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
The ontological status of temperature can be investigated by examining a simple ontology where it can be defined exactly, like an ideal gas in a box where the “atoms” interact only through perfectly elastic collisions. In such a situation, the momentum of an individual atom is an exact property with causal relevance. We can construct all sorts of exact composite properties by algebraically combining the momenta, e.g. “the square of the momentum of atom A minus the square root of the momentum of atom B”, which I’ll call property Z. But probably we don’t want to say that property Z exists, in the way that the momentum-property does. The facts about property Z are really just arithmetic facts, facts about the numbers which happen to be the momenta of atoms A and B, and the other numbers they give rise to when combined. Property Z isn’t playing a causal role in the physics, but the momentum property does.
Now, what about temperature? It has an exact definition: it is proportional to the average kinetic energy of an atom. But is it like “property” Z, or like the property of momentum? I think one has to say it’s like property Z—it is a quantitative construct without causal power. It is true that if we know the temperature, we can often make predictions about the gas. But this predictive power appears to arise from logical relations between constructed meta-properties, and not because “temperature” is a physical cause. It’s conceptually much closer than property Z to the level of real causes, but when you say that the temperature caused something, it’s ultimately always a shorthand for what really happened.
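A small numerical illustration of the contrast (purely illustrative; a one-dimensional toy gas, with temperature read off from the mean kinetic energy per atom):

    import random

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    m = 6.6e-27          # atom mass in kg (roughly helium), for illustration

    def temperature(velocities):
        # 1-D toy ideal gas: mean kinetic energy per atom = (1/2) k_B T
        mean_ke = sum(0.5 * m * v ** 2 for v in velocities) / len(velocities)
        return 2.0 * mean_ke / k_B

    def property_z(velocities):
        # The arbitrary composite "property Z" from above: the square of atom A's
        # momentum minus the square root of atom B's momentum (absolute value
        # taken so the root is defined).
        p = [m * v for v in velocities]
        return p[0] ** 2 - abs(p[1]) ** 0.5

    state_1 = [random.gauss(0, 1300) for _ in range(1000)]
    state_2 = [-v for v in reversed(state_1)]   # a physically distinct microstate

    # The two microstates differ atom by atom, yet share the same temperature:
    # the mapping is many-to-one, and both quantities are just arithmetic
    # performed on the underlying momenta.
    print(temperature(state_1), temperature(state_2))
    print(property_z(state_1), property_z(state_2))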
When we apply all this to coarse-grained computational states, and their identification with mental states, I actually find myself making, not the argument that I intended (about many-to-one mappings), but another one, an argument against the validity of such an identification, even if it is conceived dualistically. It’s the familiar observation that the mental states become epiphenomenal and not actually causally responsible for anything. Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
So: if you were to insist that temperature is a fundamental physical cause and not just a shorthand for microphysical complexities, then you would not only be a dualist, you would be saying something in contradiction with the causal model of the world offered by physics. It would be a version of phlogiston theory.
As for the “one-to-one mapping between physical states of glasses of water and really long strings”—I assume those are symbol-strings, not super-strings? Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible. If you’re saying that a physical glass of water really is a string of symbols, you’d be bringing up a whole other class of ontological mistakes that we haven’t touched on so far, but which is increasingly endemic in computer-science metaphysics, namely the attempt to treat signs and symbols as ontologically fundamental.
It seems like we can cash out the statement “It appears to X that Y” as a fact about an agent X that builds models of the world which have the property Y.
I actually disagree with this, but thanks for highlighting the idea. The proposed reduction of “appearance” to “modeling” is one of the most common ways in which consciousness is reduced to computation. As a symptom of ontological error, it really deserves a diagnosis more precise than I can provide. But essentially, in such an interpretation, the ontological problem of appearance is just being ignored or thrown out, and all attention directed towards a functionally defined notion of representation; and then this throwing-out of the problem is passed off as an account of what appearance is.
Every appearance has an existence. It’s one of the intriguing pseudo-paradoxes of consciousness that you can see something which isn’t there. That ought to be a contradiction, but what it really means is that there is an appearance in your consciousness which does not correspond to something existing outside of your consciousness. Appearances do exist even when what they indicate does not exist. This is the proof (if such were needed) that appearances do exist. And there is no account of their existential character in a discourse which just talks about an agent’s modeling of the world.
It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence of the existence of qualia.
You are just sabotaging your own ability to think about consciousness, by inventing reasons to ignore appearances.
Facts about your phenomenology are facts about your programming!
No…
If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain.
Those are facts about my ability to communicate my phenomenology.
What’s more interesting to think about is the nature of reflective self-awareness. If I’m able to say that I’m seeing red, it’s only because, a few steps back, I’m able to “see” that I’m seeing red; there’s reflective awareness within consciousness of consciousness. There’s a causal structure there, but there’s also a non-causal ontological structure, some form of intentionality. It’s this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
Once again, appearance is being neglected in this passage, this time in favor of belief. To admit that something appears is necessarily to give it some kind of existential status.
B: “What are the properties of ontologically fundamental love?”
A: “[The equations that define the standard model of quantum mechanics]”
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition. But in any case, love also has a subjective appearance, which is different to the subjective appearance of hate, and this is why the experience of hate can falsify the theory that only love exists.
I’m a reductive materialist for statements—I don’t see the problem with reading statements about consciousness as statements about quarks.
Intentionality, qualia, and the unity of consciousness: none of those things exist in the world of quarks as point particles in space.
Ontologically I suppose I’m an eliminative materialist.
The opposite sort of error to religion. In religion, you believe in something that doesn’t exist. Here, you don’t believe in something that does exist.
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it’s very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn’t ontologically fundamental, you aren’t doing so on the basis of evidence.
But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of “everything else constant” wrt mental states, we’re done. We certainly can construct one wrt temperature (linearly scale the velocities.)
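A sketch of that counterfactual for a toy gas (illustrative only; “hold everything else constant” is operationalized as rescaling every velocity by one common factor, which leaves all the ratios and relations among the atoms unchanged):

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    m = 6.6e-27          # atom mass in kg, for illustration

    def temperature(velocities):
        # 1-D toy ideal gas: mean kinetic energy per atom = (1/2) k_B T
        mean_ke = sum(0.5 * m * v ** 2 for v in velocities) / len(velocities)
        return 2.0 * mean_ke / k_B

    def intervene_on_temperature(velocities, new_T):
        # Counterfactual intervention: scale every velocity linearly so that
        # the temperature takes the new value, holding everything else fixed.
        scale = (new_T / temperature(velocities)) ** 0.5   # kinetic energy goes as v^2
        return [v * scale for v in velocities]

    state = [120.0, -340.0, 1500.0, -80.0, 900.0]
    hotter = intervene_on_temperature(state, 2 * temperature(state))
    print(temperature(state), temperature(hotter))   # the second is doubled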
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible
What are the other conditions?
Appearances do exist even when what they indicate does not exist.
That is a fact about complex arrangements of quarks.
Those are facts about my ability to communicate my phenomenology.
Your ability to communicate your phenomenology traces backwards through a clear causal path through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated.
What’s more interesting to think about is the nature of reflective self-awareness. If I’m able to say that I’m seeing red, it’s only because, a few steps back, I’m able to “see” that I’m seeing red; there’s reflective awareness within consciousness of consciousness. There’s a causal structure there, but there’s also a non-causal ontological structure, some form of intentionality. It’s this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
Non-causal ontological structure is suspicious.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
But it’s not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
Your ability to communicate your phenomenology traces backwards through a clear causal path through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated.
I’ll quote myself: “The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.”
Earlier in this comment, I gave a very vague sketch of a quantum Cartesian theater which interacts with neighboring quantum systems in the brain, at the apex of the causal chains making up the sensorimotor pathways. The fact that we can talk about all this can be explained in that way.
The root of this disagreement is your statement that “Facts about your phenomenology are facts about your programming”. Perhaps you’re used to identifying phenomenology with talk about appearances, but it refers originally to the appearances themselves. My phenomenology is what I experience, not just what I say about it. It’s not even just what I think about it; it’s clear that the thought “I am seeing red” arises in response to a red that exists before and apart from the thought.
Non-causal ontological structure is suspicious.
This doesn’t mean ontological structure that has no causal relations; it means ontological structure that isn’t made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it’s going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It’s a spatial structure, not a causal structure.
it’s not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
Could you revisit this point in the light of what I’ve now said? What sort of disconnection are you talking about?
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
Let’s revisit what this branch of the conversation was about.
I was arguing that it’s possible to make judgements about the truth of a proposed ontology, just on the basis of a description. I had in mind the judgement that there’s no red in a world of colorless particles in space; reaching that conclusion should not be a problem. But, since you were insisting that “people can’t tell the difference between ontologies”, I tried to pull out a truly absurd example (though one that occasionally gets lip service from mystically minded people): that only love exists. I would have thought that a moment’s inspection of the world, or of one’s memories of the world, would show that there are things other than love in existence, even if you adopt total Cartesian skepticism about anything beyond immediate experience.
Your riposte was to imagine an advocate of the all-is-love theory who, when asked to provide the details, says “quantum mechanics”. I said it’s rather hard to interpret QM that way, and you pointed out that I’m trying to get experience from QM. That’s clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience. My actual thesis is that conscious experience is the state of some particular type of quantum system, so the emotions do have to be in the theory somewhere. But I don’t think you can even reduce the other emotions to the emotion of love, let alone the non-emotional aspects of the mind, so the whole thing is just silly.
Then you had your advocate go on to speak in favor of the all-is-balloons theory, again with QM providing the details. I think you radically overestimate the freedom one has to interpret a mathematical formalism and still remain plausible or even coherent.
What we say using natural language is not just an irrelevant, interchangeable accessory to what we say using equations. Concepts can still have a meaning even if it’s only expressed informally, and one of the underappreciated errors of 20th-century thought is the belief that formalism validates everything: that you can say anything about a topic and it’s valid to do so, if you’re saying it with a formalism. A very minor example is the idea of a “noncommutative probability”. In quantum theory, we have complex numbers, called probability amplitudes, which appear as an intermediate stage prior to the calculation of numbers that are probabilities in the legitimate sense—lying between 0 and 1, expressing relative frequency of an outcome. There is a formalism of this classical notion of probability, due to Kolmogorov. You can generalize that formalism, so that it is about probability amplitudes, and some people call that a theory of “noncommutative probability”. But it’s not actually a theory of probability any more. A “noncommutative probability” is not a probability; that’s why probability amplitudes are so vexatious to interpret. The designation, “noncommutative probability”, sweeps the problem under the carpet. It tells us that these mysterious non-probabilities are not mysterious; they are probabilities—just … different. There can be a fine line between “thinking like reality” and fooling yourself into thinking that you understand.
All that’s a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
Temperature is an average. All individual information about the particles is lost, so you can’t invert the mapping from exact microphysical state to thermodynamic state.
So divide the particle velocities by temperature or whatever.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
How do you tell what’s redundant complexity and what’s ontologically fundamental? The position or the momentum representation of quantum mechanics, for instance?
Now I’d add that the derived nature of macroscopic “causes” is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes.
What bothers me about your viewpoint is that you are solving the problem that, in your view, some things are epiphenomenal by making an epiphenomenal declaration—the statement that they are not epiphenomenal, but rather, fundamental.
So I posit the existence of what Dennett calls a “Cartesian theater”, a place where the seeing actually happens and where consciousness is located; it’s the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a “quantum system”, not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Is there anything about your or anyone else’s actions that provides evidence for this hypothesis?
“Genuine” causal relations are a much weaker notion than “ontologically fundamental” relations.
Do only pure qualia really exist? Do beliefs, desires, etc. also exist?
That’s way too hard, so I’ll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn’t let you deduce that a dog is a donkey.
You can map a set of three quantum states onto a set of {red, green, blue}.
This doesn’t mean ontological structure that has no causal relations; it means ontological structure that isn’t made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it’s going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It’s a spatial structure, not a causal structure.
No, it means ontological structure—not structures of things, but the structure of things’ ontology—that doesn’t say anything about the things themselves, just about their ontology.
Could you revisit this point in the light of what I’ve now said? What sort of disconnection are you talking about?
A logical/probabilistic one. There is no evidence for a correlation between the statements “These beings have large-scale quantum entanglement” and “These beings think and talk about consciousness”.
That’s clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience
You would have to be saying that to be exactly the same as your character. You’re contrasting two views here. One thinks the world is made up of nothing but STUFF, which follows the laws of quantum mechanics. The other thinks the world is made up of nothing but STUFF and EXPERIENCES. If you show them a quantum state, and tell the first guy “the stuff is in this arrangement” and the second guy “the stuff is in this arrangement, and the experiences are in that arrangement”, they agree exactly on what happens, except that the second guy thinks that some of the things that happen are not stuff, but experiences.
That doesn’t seem at all suspicious to you?
All that’s a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
You are correct. “balloons” refers to balloons, not to quarks.
I guess what’s going on is that the guy is saying that’s what he believes balloons are.
But thinking about the meaning of words is clarifying.
It seems like the question is almost—“Is ‘experience’ a word like phlogiston or a word like elephant?”
More or less, whatever has been causing us to see all those elephants gets to be called an elephant. Elephants are reductionism-compatible. There are some extreme circumstances—images of elephants I have seen are fabrications, the people who claim to have seen elephants are lying to me—that break this rule. Phlogiston, on the other hand, is a word we give up on much more readily. Heat is particles bouncing around, but the absence of oxygen is not phlogiston—it’s just the absence of oxygen.
You believe that “experience” is fundamentally incompatible with reduction. An experience, to exist at all, must be an ontologically fundamental experience. Thus saying “I see red” makes two claims—one, that the brain is in a certain class of its possible total configuration states, those in which the person is seeing red, and two, that the experience of seeing red is ontologically fundamental.
I see no way to ever get the physical event of people claiming that they experience color correlated with the ontological fundamentalness of their color, in the way that we can investigate the phlogiston hypothesis and stop using it if and only if it turns out to be a bad model.
What is a claim when it’s not correlated with its subject? The whole point of the words within it has been irrevocably lost. It is pure speculation.
I really, really don’t think, that when I say I see red, I’m just speculating.
It’s almost a month since we started this discussion, and it’s a bit of a struggle to remember what’s important and what’s incidental. So first, a back-to-basics statement from me.
Colors do exist, appearances do exist; that’s nonnegotiable. That they do not exist in an ontology of “nothing but particles in space” is also, fundamentally, nonnegotiable. I will engage in debates as to whether this is so, but only because people are so amazingly reluctant to see it, and the implication that their favorite materialistic theories of mind actually involve property dualism, in which color (for example) is tied to a particular structure or behavior of particles in the brain, but can’t be identified with it.
We aren’t like the ancient atomists who only had an informal concept of the world as atoms in a void; we have mathematical theories of physics, so a logical further question is whether these mathematical theories can be interpreted so that some of the entities they posit can be identified with color, with “experiences”, and so on.
Here I’d say there are two further important facts. First, an experience is a whole and has to be tackled as a whole. Patches of color are just a part of a multi-sensory whole, which in turn is just the sensory aspect of an experience which also has a conceptual element, temporal flow, a cognitive frame locating current events in a larger context, and so on. Any fundamental theory of reality which purports to include consciousness has to include this whole, it can’t just talk about atomized sensory qualia.
Second, any theory which says that the elementary degrees of freedom in a conscious state correspond to averaged collective physical degrees of freedom will have to involve property dualism. That’s because it’s a many-to-one mapping (from physical states to conscious states), and a many-to-one mapping can’t be an identity.
All that is the starting point for my line of thought, which is an attempt to avoid property dualism. I want to have something in my mathematical theory of reality which simply is the bearer of conscious states, has the properties and structure of a conscious whole, and is appropriately located in the causal chain. Since the mathematics describing a configuration of particles in space seems very unpromising for such a reinterpretation; and since our physics is quantum mechanics anyway, and the formalism of quantum mechanics contains entangled wavefunctions that can’t be factorized into localized wavefunctions, it’s quite natural to look for these conscious wholes in some form of QM where entanglement is ontological. However, since consciousness is in the brain and causally relevant, this implies that there must be a functionally relevant brain subsystem that is in a quantum coherent state.
That is the argument which leads me from “consciousness is real” to “there’s large-scale quantum entanglement in the brain”. Given the physics we have, it’s the only way I see to avoid property dualism, and it’s still just a starting point, on every level: mathematically, ontologically, and of course neurobiologically. But that is the argument you should be scrutinizing. What’s at stake in some of our specific exchanges may be a little obscure, so I wanted to set down the main argument in one piece, in one place, so you could see what you’re dealing with.
I will lay down the main thing convincing me that you’re incorrect.
Consider the three statements:
1. “there’s a large-scale quantum entanglement in the brain”
2. “consciousness is real”
3. “Mitchell Porter says that consciousness is real.”
Your inference requires that 1 and 2 are correlated. It is non-negotiable that 2 and 3 are correlated. There is no special connection between 1 and 3 that would make them uncorrelated.
However, 1 and 3 are both clearly-defined physical statements, and there is no physical mechanism for their correlation. We conclude that they are uncorrelated. We conclude that 1 and 2 are uncorrelated.
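The shape of that argument, written out (a sketch; “correlated” is read as probabilistic dependence, and the non-negotiable link between 2 and 3 is taken in the limiting case where the two statements stand or fall together):

    Let A_1, A_2, A_3 be the events that statements 1, 2, 3 hold, with A_2 = A_3.
    If A_1 and A_3 are independent, i.e. P(A_1 \cap A_3) = P(A_1)\,P(A_3), then
    P(A_1 \cap A_2) = P(A_1 \cap A_3) = P(A_1)\,P(A_3) = P(A_1)\,P(A_2),
    so A_1 and A_2 are independent as well.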
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it’s very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn’t ontologically fundamental, you aren’t doing so on the basis of evidence.
Temperature is an average. All individual information about the particles is lost, so you can’t invert the mapping from exact microphysical state to thermodynamic state.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of “everything else constant” wrt mental states, we’re done. We certainly can construct one wrt temperature (linearly scale the velocities.)
Your model of physics has to have some microscopic or elementary non-counterfactual notion of causation for you to use it to calculate these complex macroscopic counterfactuals. Of course in the real world we have quantum mechanics, not the classical ideal gas we were discussing, and your notion of elementary causality in quantum mechanics will depend on your interpretation.
But I do insist there’s a difference between an elementary, fundamental, microscopic causal relation and a complicated, fuzzy, macroscopic one. A fundamental causal connection, like the dependence of the infinitesimal time evolution of one basic field on the states of other basic fields, is the real thing. As with “existence”, it can be hard to say what “causation” is. But whatever it is, and whether or not we can say something informative about its ontological character, if you’re using a physical ontology, such fundamental causal relations are the place in your ontology where causality enters the picture and where it is directly instantiated.
Then we have composite causalities—dependencies among macroscopic circumstances, which follow logically from the fundamental causal model, and whose physical realization consists of a long chain of elementary causal connections. Elementary and composite causality do have something in common: in both cases, an initial condition A leads to a final condition B. But there is a difference, and we need some way to talk about it—the difference between the elementary situation, where A leads directly to B, and the composite situation, where A “causes” B because A leads directly to A’ which leads directly to A″ … and eventually this chain terminates in B.
Also—and this is germane to the earlier discussion about fuzzy properties and macroscopic states—in composite causality, A and B may be highly approximate descriptions; classes of states rather than individual states. Here it’s even clearer that the relation between A and B is more a highly mediated logical implication than it is a matter of A causing B in the sense of “particle encounters force field causes change in particle’s motion”.
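A schematic way to see the contrast (purely illustrative; step stands in for the elementary, direct time evolution, and the macroscopic “cause” is a statement about classes of states linked by long chains of such steps):

    def step(state):
        # Elementary causality: one direct, fine-grained update of the state.
        x, v = state
        return (x + v * 1e-3, v)

    def macroscopically_causes(class_a, class_b, samples, n_ticks=1000):
        # Composite "causality": every sampled microstate in class A ends up,
        # after a long chain of elementary steps, in class B.
        def run(state):
            for _ in range(n_ticks):
                state = step(state)
            return state
        return all(class_b(run(s)) for s in samples if class_a(s))

    moving_right = lambda s: s[1] > 0          # a coarse class A of microstates
    ends_up_far_right = lambda s: s[0] > 0.5   # a coarse class B of microstates
    samples = [(0.0, v) for v in (600.0, 800.0, 1200.0)]
    print(macroscopically_causes(moving_right, ends_up_far_right, samples))

The relation reported on the last line follows logically from the elementary rule inside step; nothing over and above those elementary transitions is doing any causing.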
How does this pertain to consciousness? The standard neuro-materialist view of a mental state is that it’s an aggregate of computational states in neurons, these computational states being, from a physical perspective, less than a sketch of the physical reality. The microscopic detail doesn’t matter; all that matters is some gross property, like trans-membrane electrical potential, or something at an even higher level of physical organization.
I think I’ve argued two things so far. First, qualia and other features of consciousness aren’t there in the physical ontology, so that’s a problem. Second, a many-to-one mapping is not an identity relation, it’s more suited to property dualism, so that’s also a problem.
Now I’d add that the derived nature of macroscopic “causes” is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes. And as with the first two problems, this third problem can potentially be cured in a theory of mind where consciousness resides in a structure made of ontologically fundamental properties and relations, rather than fuzzy, derived, approximate ones. This is because it’s the fundamental properties which enter into the fundamental causal relations of a reductionist ontology.
In philosophy of mind, there’s a “homunculus fallacy”, where you explain (for example) the experience of seeing as due to a “homunculus” (“little human”) in your brain, which is watching the sensory input from your eyes. This is held to be a fallacy that explains nothing and risks infinite regress. But something like this must actually be true; seeing is definitely real, and what you see directly is in your skull, even if it does resemble the world outside. So I posit the existence of what Dennett calls a “Cartesian theater”, a place where the seeing actually happens and where consciousness is located; it’s the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a “quantum system”, not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible
What are the other conditions?
That’s way too hard, so I’ll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn’t let you deduce that a dog is a donkey.
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
See next section.
I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
No you wouldn’t. People can’t tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can’t have relations of reduction to other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
We are talking at cross-purposes here. I am talking about an ontology which is presented explicitly to my conscious understanding. You seem to be talking about ontologies at the level of code—whatever that corresponds to, in a human being.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I’ve made a judgement about an ontology both at a logical and an empirical level. That’s what I was talking about, when I said that if you swapped red and green, I couldn’t detect the swap, but I’d still know empirically that color is real, and I’d still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
Your sentence about gensyms is interesting as a proposition about the computational side of consciousness, but…
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I’m typing in is based on regularities the size of a transistor. I wouldn’t expect to notice if my images were, really, fundamentally, completely different. I wouldn’t expect to notice if something physical happened—say, the number of ions was cut by a factor of a million and given the opposite charge, but the functions from impulses to impulses computed by neurons were the same.
… if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
Uniform color and edgeness are as different as night and day.
They are, but I was actually talking about the difference between colorness/edgeness and neuronness.
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
Let’s try to communicate through intuition pumps:
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses—they had to be, in addition, the colors of pixels.
Stolen from Dennet: You are not aware of your qualia, only of relationships between your qualia. I could swap and in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn’t be able to tell the difference—your behavior would be the same either way.
Two meditations on an optical illusion: I heard, possibly on LessWrong, that in illusions like this one (http://www.2dorks.com/gallery/2007/1011-illusions/12-kanizsatriangle.jpg) your edge-detecting neurons fire at both the real and the fake edges.
Doesn’t that image look exactly like neurons detecting edges between neurons detecting white and neurons detecting like should look like?
Doesn’t the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?
My latest comment might clarify a few things. Meanwhile,
No-one’s telling me that a heap of sand has an “inside”. It’s a fuzzy concept and the fuzziness doesn’t cause any problems because it’s just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren’t it, so in a physical ontology it has to correspond to a hard-edged concept.
Consider Cyc. Isn’t one of the problems of Cyc that it can’t distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
As I’ve attempted to clarify in the new comment, my problem is not with subsuming consciousness into physics per se, it is specifically with subsuming consciousness into a particular physical ontology, because that ontology does not contain something as basic as perceived color, either fundamentally or combinatorially. To consider that judgement credible, you must believe that there is an epistemic faculty whereby you can tell that color is actually there. Which leads me to your next remark--
--and so obviously I’m going to object to the assumption that I’m not aware of my qualia. If you performed the swap as described, I wouldn’t know that it had occurred, but I’d still know that and are there and are real; and I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.
However, my new response to your argument is that, if you’re not denying current physics, but just ontologically reorganizing it, then you’re vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We’re all in the same boat.
Do you think Cyc could not be programmed to treat itself differently from others without use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?
Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
No you wouldn’t. People can’t tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can’t have relations of reduction for other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I’m typing in is based on regularities the size of a transistor. I wouldn’t expect to notice if my images were, really, fundamentally, completely different. I wouldn’t expect to notice if something physical happened (say, the number of ions cut by a factor of a million and their charge reversed), so long as the functions from impulses to impulses computed by neurons were the same.
Uniform color and edgeness are as different as night and day.
(part 1 of reply)
This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping—many exact physical states correspond to the same conscious state—then that’s property dualism.
When you say, later on, that your consciousness “is a computation based mainly or entirely on regularities the size of a single neuron or bigger”, that implies dualism or eliminativism, depending on whether you accept that qualia exist. Believe what I quoted, and that qualia exist, and you’re a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn’t really exist, even as appearance), and you’re an eliminativist. This is because a many-to-one mapping isn’t an identity.
“Degrees of existence”, by the way, only makes sense insofar as it really means “degrees of something else”. Existence, like truth, is absolute.
My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming. Because I prefer the monistic alternative to the dualistic one, and because the program Cyc is definitely “based on regularities the size of a transistor”, I would normally say that Cyc does not and cannot have thoughts, perceptions, beliefs, or other mental properties at all. All those things require consciousness, consciousness is only a property of a physical ontological unity, the computer running Cyc is a causal aggregate of many physical ontological unities, ergo it only has these mentalistic properties because of the imputations of its users, just as the words in a book only have their meanings by convention. When you introduced your original thought-experiment--
--maybe I should have gone right away to the question of whether these “perceptions” are actually perceptions, or whether they are just informational states with certain causal roles, and how this differs from true perception. My answer, by the way, is that being an informational state with a causal role is necessary but not sufficient for something to be a perceptual state. I would add that it also has to be “made of qualia” or “be a state of a physical ontological unity”—both these being turns of phrase which are a little imprecise, but which hopefully foreshadow the actual truth. It comes down to what ought to be a tautology: to actually be a perception of , there has to be some actually there. If there isn’t, you just have a simulation.
Just for completeness, I’ll say again that I prefer the monistic alternative, but it does seem to imply that consciousness is to be identified with something fundamental, like a set of quantum numbers, rather than something mesoscopic and semiclassical, like a coarse-grained charge distribution. If that isn’t how it works, the fallback position is an informational property dualism, and what I just wrote would need to be modified accordingly.
Back to your questions about Cyc. Rather than say all that, I countered your original thought-experiment with an anecdote about Douglas Lenat’s Cyc program. The anecdote (as conveyed, for example, in Eliezer’s old essay “GISAI”) is that, according to Lenat, Cyc knows about Cyc, but it doesn’t know that it is Cyc. But then Lenat went and said to Wired that Cyc is self-aware. So I don’t know the finer details of his philosophical position.
What I was trying to demonstrate was the indeterminate nature of machine experience, machine assertions about ontology as based upon experience, and so on. Computation is about behavior and about processes which produce behavior. Consciousness is indeed a process which produces behavior, but that doesn’t define what it is. However, the typical discussion of the supposed thoughts, beliefs, and perceptions of an artificial intelligence breezes right past this point. Specific computational states in the program get dubbed “thoughts”, “desires” and so on, on the basis of a loose structural isomorphism to the real thing, and then the discussion about what the AI feels or wants (and so on) proceeds from there. The loose basis on which these terms are used can easily lead to disagreements—it may even have led Lenat to disagree with himself.
In the absence of a rigorous theory of consciousness it may be impossible to have such discussions without some loose speculation. But my point is that if you take the existence of consciousness seriously, it renders very problematic a lot of the identifications which get made casually. The fact that there is no in physical ontology (or current physical ontology); the fact that from a fundamental perspective these are many-to-one mappings, and a many-to-one mapping can’t be an identity—these facts are simple but they have major implications for theorizing about consciousness.
So, finally answering your questions: 1. yes, it could be programmed to treat itself as something special, and 2. sense data would surely be processed differently, but there’s a difference between implicit and explicit categorizations (see remarks about ontology, below). But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness. And my argument is that the usual position—a casual version of identity theory—is not tenable. Either it’s dualism, or it’s a monism made possible by exotic neurophysics.
(continued)
Since there’s a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
[this point has low relevance]
It seems like we can cash out the statement “It appears to X that Y” as a fact about an agent X that builds models of the world which have the property Y. It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence for the existence of qualia.
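A minimal sketch of that proposed reading, with everything below invented for illustration (the class, the names, and the propositions are not from the thread): “it appears to X that Y” is treated as a fact about X’s world-model, and checking it never consults anything beyond the model.

    # Sketch: "it appears to X that Y" read as "X's world-model has property Y".
    class Agent:
        def __init__(self, name, world_model):
            self.name = name
            self.world_model = world_model  # set of propositions the agent models as true

    def appears_to(agent, proposition):
        # Inspects only the agent's model, never the world itself.
        return proposition in agent.world_model

    the_brain_i_am_talking_to = Agent("the brain I am talking to", {"qualia exist"})
    the_brain_that_is_me = Agent("the brain that is me", {"qualia exist"})

    assert appears_to(the_brain_i_am_talking_to, "qualia exist")
    assert appears_to(the_brain_that_is_me, "qualia exist")

On this reading both assertions come out true by inspecting the models alone, which is the sense in which they are said not to bear on what actually exists.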
Degrees of existence come from what is almost certainly a harder philosophical problem about which I am very confused.
Facts about your phenomenology are facts about your programming! If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain. There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
A: “The universe is made out of nothing but love”
B: “What are the properties of ontologically fundamental love?”
A: “[The equations that define the standard model of quantum mechanics]”
B: “I have no evidence to falsify that theory.”
A: “Or balloons. It could be balloons.”
B: “What are the properties of ontologically fundamental balloons?”
A: “[the standard model of quantum theory expressed using different equations]”
B: “There is no evidence that can discriminate between those theories.”
I’m a reductive materialist for statements—I don’t see the problem with reading statements about consciousness as statements about quarks. Ontologically I suppose I’m an eliminative materialist.
The ontological status of temperature can be investigated by examining a simple ontology where it can be defined exactly, like an ideal gas in a box where the “atoms” interact only through perfectly elastic collisions. In such a situation, the momentum of an individual atom is an exact property with causal relevance. We can construct all sorts of exact composite properties by algebraically combining the momenta, e.g. “the square of the momentum of atom A minus the square root of the momentum of atom B”, which I’ll call property Z. But probably we don’t want to say that property Z exists, in the way that the momentum-property does. The facts about property Z are really just arithmetic facts, facts about the numbers which happen to be the momenta of atoms A and B, and the other numbers they give rise to when combined. Property Z isn’t playing a causal role in the physics, but the momentum property does.
Now, what about temperature? It has an exact definition: the average kinetic energy of an atom. But is it like “property” Z, or like the property of momentum? I think one has to say it’s like property Z—it is a quantitative construct without causal power. It is true that if we know the temperature, we can often make predictions about the gas. But this predictive power appears to arise from logical relations between constructed meta-properties, and not because “temperature” is a physical cause. It’s conceptually much closer than property Z to the level of real causes, but when you say that the temperature caused something, it’s ultimately always a shorthand for what really happened.
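To make the gas-in-a-box example concrete (the one-dimensional setup and the particular numbers are my illustration; only the two definitions come from the paragraphs above): both “temperature” and “property Z” are just arithmetic over the same list of momenta, and the microstate-to-temperature map visibly throws information away.

    import random

    m = 1.0  # every atom gets the same mass in this toy gas
    momenta = [random.gauss(0.0, 1.0) for _ in range(1000)]  # one-dimensional momenta

    # "Temperature": average kinetic energy per atom, p^2 / (2m).
    temperature = sum(p * p / (2 * m) for p in momenta) / len(momenta)

    # "Property Z": square of atom A's momentum minus the square root of the
    # magnitude of atom B's momentum -- an equally well-defined arithmetic construct.
    property_z = momenta[0] ** 2 - abs(momenta[1]) ** 0.5

    # Many distinct microstates share one temperature: reversing every momentum
    # changes the microstate but not the average kinetic energy.
    reversed_gas = [-p for p in momenta]
    assert temperature == sum(p * p / (2 * m) for p in reversed_gas) / len(reversed_gas)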
When we apply all this to coarse-grained computational states, and their identification with mental states, I actually find myself making, not the argument that I intended (about many-to-one mappings), but another one, an argument against the validity of such an identification, even if it is conceived dualistically. It’s the familiar observation that the mental states become epiphenomenal and not actually causally responsible for anything. Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
So: if you were to insist that temperature is a fundamental physical cause and not just a shorthand for microphysical complexities, then you would not only be a dualist, you would be saying something in contradiction with the causal model of the world offered by physics. It would be a version of phlogiston theory.
As for the “one-to-one mapping between physical states of glasses of water and really long strings”—I assume those are symbol-strings, not super-strings? Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible. If you’re saying that a physical glass of water really is a string of symbols, you’d be bringing up a whole other class of ontological mistakes that we haven’t touched on so far, but which is increasingly endemic in computer-science metaphysics, namely the attempt to treat signs and symbols as ontologically fundamental.
I actually disagree with this, but thanks for highlighting the idea. The proposed reduction of “appearance” to “modeling” is one of the most common ways in which consciousness is reduced to computation. As a symptom of ontological error, it really deserves a diagnosis more precise than I can provide. But essentially, in such an interpretation, the ontological problem of appearance is just being ignored or thrown out, and all attention directed towards a functionally defined notion of representation; and then this throwing-out of the problem is passed off as an account of what appearance is.
Every appearance has an existence. It’s one of the intriguing pseudo-paradoxes of consciousness that you can see something which isn’t there. That ought to be a contradiction, but what it really means is that there is an appearance in your consciousness which does not correspond to something existing outside of your consciousness. Appearances do exist even when what they indicate does not exist. This is the proof (if such were needed) that appearances do exist. And there is no account of their existential character in a discourse which just talks about an agent’s modeling of the world.
You are just sabotaging your own ability to think about consciousness, by inventing reasons to ignore appearances.
No…
Those are facts about my ability to communicate my phenomenology.
What’s more interesting to think about is the nature of reflective self-awareness. If I’m able to say that I’m seeing , it’s only because, a few steps back, I’m able to “see” that I’m seeing ; there’s reflective awareness within consciousness of consciousness. There’s a causal structure there, but there’s also a non-causal ontological structure, some form of intentionality. It’s this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
Once again, appearance is being neglected in this passage, this time in favor of belief. To admit that something appears is necessarily to give it some kind of existential status.
The word “love” already has a meaning, which is not exactly easy to map onto the proposed definition. But in any case, love also has a subjective appearance, which is different to the subjective appearance of hate, and this is why the experience of hate can falsify the theory that only love exists.
Intentionality, qualia, and the unity of consciousness; none of those things exist in the world of quarks as point particles in space.
The opposite sort of error to religion. In religion, you believe in something that doesn’t exist. Here, you don’t believe in something that does exist.
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it’s very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn’t ontologically fundamental, you aren’t doing so on the basis of evidence.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of “everything else constant” wrt mental states, we’re done. We certainly can construct one wrt temperature (linearly scale the velocities.)
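A sketch of that “linearly scale the velocities” intervention, in the same toy one-dimensional gas as before (the setup is my illustration, not part of the comment): the counterfactual “what if the temperature were T?” is realised by rescaling every velocity by one common factor, which changes the average kinetic energy while leaving the relative pattern of motion alone.

    import math

    def temperature(velocities, m=1.0):
        # average kinetic energy per particle
        return sum(m * v * v / 2 for v in velocities) / len(velocities)

    def set_temperature(velocities, t_new, m=1.0):
        # Counterfactual intervention: one uniform scale factor sets the
        # average kinetic energy to t_new.
        scale = math.sqrt(t_new / temperature(velocities, m))
        return [scale * v for v in velocities]

    vs = [1.0, -2.0, 0.5, 3.0]
    doubled = set_temperature(vs, 2.0 * temperature(vs))  # "what if T were doubled?"
    assert abs(temperature(doubled) - 2.0 * temperature(vs)) < 1e-9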
What are the other conditions?
is a fact about complex arrangements of quarks.
Your ability to communicate your phenomenology traces backwards through a clear causal path, through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated.
Non-causal ontological structure is suspicious.
but it’s not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
(part 2)
I’ll quote myself: “The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it’s not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.”
Earlier in this comment, I gave a very vague sketch of a quantum Cartesian theater which interacts with neighboring quantum systems in the brain, at the apex of the causal chains making up the sensorimotor pathways. The fact that we can talk about all this can be explained in that way.
The root of this disagreement is your statement that “Facts about your phenomenology are facts about your programming”. Perhaps you’re used to identifying phenomenology with talk about appearances, but it refers originally to the appearances themselves. My phenomenology is what I experience, not just what I say about it. It’s not even just what I think about it; it’s clear that the thought “I am seeing ” arises in response to a that exists before and apart from the thought.
This doesn’t mean ontological structure that has no causal relations; it means ontological structure that isn’t made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it’s going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It’s a spatial structure, not a causal structure.
Could you revisit this point in the light of what I’ve now said? What sort of disconnection are you talking about?
Let’s revisit what this branch of the conversation was about.
I was arguing that it’s possible to make judgements about the truth of a proposed ontology, just on the basis of a description. I had in mind the judgement that there’s no in a world of colorless particles in space; reaching that conclusion should not be a problem. But, since you were insisting that “people can’t tell the difference between ontologies”, I tried to pull out a truly absurd example (though one that occasionally gets lip service from mystically minded people) - that only love exists. I would have thought that a moment’s inspection of the world, or of one’s memories of the world, would show that there are things other than love in existence, even if you adopt total Cartesian skepticism about anything beyond immediate experience.
Your riposte was to imagine an advocate of the all-is-love theory who, when asked to provide the details, says “quantum mechanics”. I said it’s rather hard to interpret QM that way, and you pointed out that I’m trying to get experience from QM. That’s clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience. My actual thesis is that conscious experience is the state of some particular type of quantum system, so the emotions do have to be in the theory somewhere. But I don’t think you can even reduce the other emotions to the emotion of love, let alone the non-emotional aspects of the mind, so the whole thing is just silly.
Then you had your advocate go on to speak in favor of the all-is-balloons theory, again with QM providing the details. I think you radically overestimate the freedom one has to interpret a mathematical formalism and still remain plausible or even coherent.
What we say using natural language is not just an irrelevant, interchangeable accessory to what we say using equations. Concepts can still have a meaning even if it’s only expressed informally, and one of the underappreciated errors of 20th-century thought is the belief that formalism validates everything: that you can say anything about a topic and it’s valid to do so, if you’re saying it with a formalism. A very minor example is the idea of a “noncommutative probability”. In quantum theory, we have complex numbers, called probability amplitudes, which appear as an intermediate stage prior to the calculation of numbers that are probabilities in the legitimate sense—lying between 0 and 1, expressing relative frequency of an outcome. There is a formalism of this classical notion of probability, due to Kolmogorov. You can generalize that formalism, so that it is about probability amplitudes, and some people call that a theory of “noncommutative probability”. But it’s not actually a theory of probability any more. A “noncommutative probability” is not a probability; that’s why probability amplitudes are so vexatious to interpret. The designation, “noncommutative probability”, sweeps the problem under the carpet. It tells us that these mysterious non-probabilities are not mysterious; they are probabilities—just … different. There can be a fine line between “thinking like reality” and fooling yourself into thinking that you understand.
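For concreteness, the textbook relationship being described here (standard quantum mechanics, nothing specific to this exchange) is that probabilities only appear once amplitudes are squared in magnitude:

    p_i = |\alpha_i|^2, \qquad \sum_i |\alpha_i|^2 = 1, \qquad \alpha_i \in \mathbb{C}

and when two amplitudes contribute to one outcome they add before squaring,

    p = |\alpha_1 + \alpha_2|^2 = |\alpha_1|^2 + |\alpha_2|^2 + 2\,\mathrm{Re}(\alpha_1^* \alpha_2),

so the cross term lets “probabilities” interfere, which is behaviour no Kolmogorov probability exhibits.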
All that’s a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
So divide the particle velocities by temperature or whatever.
How do you tell what’s redundant complexity and what’s ontologically fundamental? Position or momentum model of quantum mechanics, for instance?
What bothers me about your viewpoint is that you are solving the problem that, in your view, some things are epiphenomenal by making an epiphenomenal declaration—the statement that they are not epiphenomenal, but rather, fundamental.
Is there anything about your or anyone else’s actions that provides evidence for this hypothesis?
“Genuine” causal relations is a much weaker notion than “ontologically fundamental” relations.
Do only pure qualia really exist? Do beliefs, desires, etc. also exist?
You can map a set of three quantum states onto a set of {, , }
No, it means ontological structure—not structures of things, but the structure of things’ ontology—that doesn’t say anything about the things themselves, just about their ontology.
A logical/probabilistic one. There is no evidence for a correlation between the statements “These beings have large-scale quantum entanglement” and “These beings think and talk about consciousness”
You would have to be saying that to be exactly the same as your character. You’re contrasting two views here. One thinks the world is made up of nothing but STUFF, which follows the laws of quantum mechanics. The other thinks the world is made up of nothing but STUFF and EXPERIENCES. If you show them a quantum state, and tell the first guy “the stuff is in this arrangement” and the second guy “the stuff is in this arrangement, and the experiences are in that arrangement”, they agree exactly on what happens, except that the second guy thinks that some of the things that happen are not stuff, but experiences.
That doesn’t seem at all suspicious to you?
You are correct. “balloons” refers to balloons, not to quarks.
I guess what’s going on is that the guy is saying that’s what he believes balloons are.
But thinking about the meaning of words is clarifying.
It seems like the question is almost—“Is ‘experience’ a word like phlogiston or a word like elephant?”
More or less, whatever has been causing us to see all those elephants gets to be called an elephant. Elephants are reductionism-compatible. There are some extreme circumstances—the images of elephants I have seen are fabrications, the people who claim to have seen elephants are lying to me—that break this rule. Phlogiston, on the other hand, is a word we give up on much more readily. Heat is particles bouncing around, but the absence of oxygen is not phlogiston—it’s just the absence of oxygen.
You believe that “experience” is fundamentally incompatible with reduction. An experience, to exist at all, must be an ontologically fundamental experience. Thus saying “I see red” makes two claims—one, that the brain is in a certain class of its possible total configuration states, those in which the person is seeing red, and two, that the experience of seeing red is ontologically fundamental.
I see no way to ever get the physical event of people claiming that they experience color correlated with the ontological fundamentalness of their color, whereas we can investigate the phlogiston hypothesis and stop using it if and only if it turns out to be a bad model.
What is a claim when it’s not correlated with its subject? The whole point of the words within it has been irrevocably lost. It is pure speculation.
I really, really don’t think, that when I say I see red, I’m just speculating.
It’s almost a month since we started this discussion, and it’s a bit of a struggle to remember what’s important and what’s incidental. So first, a back-to-basics statement from me.
Colors do exist, appearances do exist; that’s nonnegotiable. That they do not exist in an ontology of “nothing but particles in space” is also, fundamentally, nonnegotiable. I will engage in debates as to whether this is so, but only because people are so amazingly reluctant to see it, and to see the implication that their favorite materialistic theories of mind actually involve property dualism, in which color (for example) is tied to a particular structure or behavior of particles in the brain, but can’t be identified with it.
We aren’t like the ancient atomists, who only had an informal concept of the world as atoms in a void; we have mathematical theories of physics, so a logical further question is whether these mathematical theories can be interpreted so that some of the entities they posit can be identified with color, with “experiences”, and so on.
Here I’d say there are two further important facts. First, an experience is a whole and has to be tackled as a whole. Patches of color are just a part of a multi-sensory whole, which in turn is just the sensory aspect of an experience which also has a conceptual element, temporal flow, a cognitive frame locating current events in a larger context, and so on. Any fundamental theory of reality which purports to include consciousness has to include this whole, it can’t just talk about atomized sensory qualia.
Second, any theory which says that the elementary degrees of freedom in a conscious state correspond to averaged collective physical degrees of freedom will have to involve property dualism. That’s because it’s a many-to-one mapping (from physical states to conscious states), and a many-to-one mapping can’t be an identity.
All that is the starting point for my line of thought, which is an attempt to avoid property dualism. I want to have something in my mathematical theory of reality which simply is the bearer of conscious states, has the properties and structure of a conscious whole, and is appropriately located in the causal chain. Since the mathematics describing a configuration of particles in space seems very unpromising for such a reinterpretation; and since our physics is quantum mechanics anyway, and the formalism of quantum mechanics contains entangled wavefunctions that can’t be factorized into localized wavefunctions, it’s quite natural to look for these conscious wholes in some form of QM where entanglement is ontological. However, since consciousness is in the brain and causally relevant, this implies that there must be a functionally relevant brain subsystem that is in a quantum coherent state.
That is the argument which leads me from “consciousness is real” to “there’s large-scale quantum entanglement in the brain”. Given the physics we have, it’s the only way I see to avoid property dualism, and it’s still just a starting point, on every level: mathematically, ontologically, and of course neurobiologically. But that is the argument you should be scrutinizing. What’s at stake in some of our specific exchanges may be a little obscure, so I wanted to set down the main argument in one piece, in one place, so you could see what you’re dealing with.
I will lay down the main thing keeping me from being convinced that you’re correct.
Consider the three statements:
1. “there’s large-scale quantum entanglement in the brain”
2. “consciousness is real”
3. “Mitchell Porter says that consciousness is real.”
Your inference requires that 1 and 2 are correlated. It is non-negotiable that 2 and 3 are correlated. There is no special connection between 1 and 3 that would make them uncorrelated.
However, 1 and 3 are both clearly-defined physical statements, and there is no physical mechanism for their correlation. We conclude that they are uncorrelated. We conclude that 1 and 2 are uncorrelated.
(part 1)
Temperature is an average. All individual information about the particles is lost, so you can’t invert the mapping from exact microphysical state to thermodynamic state.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
Your model of physics has to have some microscopic or elementary non-counterfactual notion of causation for you to use it to calculate these complex macroscopic counterfactuals. Of course in the real world we have quantum mechanics, not the classical ideal gas we were discussing, and your notion of elementary causality in quantum mechanics will depend on your interpretation.
But I do insist there’s a difference between an elementary, fundamental, microscopic causal relation and a complicated, fuzzy, macroscopic one. A fundamental causal connection, like the dependence of the infinitesimal time evolution of one basic field on the states of other basic fields, is the real thing. As with “existence”, it can be hard to say what “causation” is. But whatever it is, and whether or not we can say something informative about its ontological character, if you’re using a physical ontology, such fundamental causal relations are the place in your ontology where causality enters the picture and where it is directly instantiated.
Then we have composite causalities—dependencies among macroscopic circumstances, which follow logically from the fundamental causal model, and whose physical realization consists of a long chain of elementary causal connections. Elementary and composite causality do have something in common: in both cases, an initial condition A leads to a final condition B. But there is a difference, and we need some way to talk about it—the difference between the elementary situation, where A leads directly to B, and the composite situation, where A “causes” B because A leads directly to A’ which leads directly to A″ … and eventually this chain terminates in B.
Also—and this is germane to the earlier discussion about fuzzy properties and macroscopic states—in composite causality, A and B may be highly approximate descriptions; classes of states rather than individual states. Here it’s even clearer that the relation between A and B is more a highly mediated logical implication than it is a matter of A causing B in the sense of “particle encounters force field causes change in particle’s motion”.
How does this pertain to consciousness? The standard neuro-materialist view of a mental state is that it’s an aggregate of computational states in neurons, these computational states being, from a physical perspective, less than a sketch of the physical reality. The microscopic detail doesn’t matter; all that matters is some gross property, like trans-membrane electrical potential, or something at an even higher level of physical organization.
I think I’ve argued two things so far. First, qualia and other features of consciousness aren’t there in the physical ontology, so that’s a problem. Second, a many-to-one mapping is not an identity relation, it’s more suited to property dualism, so that’s also a problem.
Now I’d add that the derived nature of macroscopic “causes” is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes. And as with the first two problems, this third problem can potentially be cured in a theory of mind where consciousness resides in a structure made of ontologically fundamental properties and relations, rather than fuzzy, derived, approximate ones. This is because it’s the fundamental properties which enter into the fundamental causal relations of a reductionist ontology.
In philosophy of mind, there’s a “homunculus fallacy”, where you explain (for example) the experience of seeing as due to a “homunculus” (“little human”) in your brain, which is watching the sensory input from your eyes. This is held to be a fallacy that explains nothing and risks infinite regress. But something like this must actually be true; seeing is definitely real, and what you see directly is in your skull, even if it does resemble the world outside. So I posit the existence of what Dennett calls a “Cartesian theater”, a place where the seeing actually happens and where consciousness is located; it’s the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a “quantum system”, not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
That’s way too hard, so I’ll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn’t let you deduce that a dog is a donkey.
(part 2 of reply)
See next section.
We are talking at cross-purposes here. I am talking about an ontology which is presented explicitly to my conscious understanding. You seem to be talking about ontologies at the level of code—whatever that corresponds to, in a human being.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I’ve made a judgement about an ontology both at a logical and an empirical level. That’s what I was talking about, when I said that if you swapped and , I couldn’t detect the swap, but I’d still know empirically that color is real, and I’d still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
Your sentence about gensyms is interesting as a proposition about the computational side of consciousness, but…
… if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
They are, but I was actually talking about the difference between colorness/edgeness and neuronness.