This article contains three simple questions which I want to see answered. To organize the discussion, I’m creating a thread for each question, so people with an answer can state it or link to it. If you link, please provide a brief summary of your answer here as well.
First question: Where is color?
I see a red apple. The redness, I grant you, is not a property of the thing that grew on the tree, the object outside my skull. It’s the sensation or perception of the apple which is red. However, I do insist that something is red. But if reality is nothing but particles in space, and empty space is not red, and the particles are not red, then what is? What is the red thing; where is the redness?
Why?

That aside: Red is something that arises out of things that are not themselves red. Right now I’m wearing a sweater, which is made out of things that are not themselves sweaters (plastic, cotton with a little nylon and spandex; or on a higher level, buttons, sleeves, etc.). A sweater came into existence when the sleeves were sewn onto the rest of it (or however it was pieced together); no particles, however, came into existence. My sweater just is a spatial relationship between a vague set of particles (I say “vague” because it could lose a button and still be a sweater, but unlike a piece of lint, the button is really part of the sweater). If I put it through the shredder, no particles would be destroyed, but it would not be a sweater.
My sweater is also red. When I look at it, I experience the impression of looking at a red object. No particles come into existence when I start to have this experience, but they do arrange differently. Some of the ways my brain can be arranged are such that when they are instantiated, I experience the impression of looking at something red—such as the ones that come into play when I look at my sweater in light good enough for me to have color vision. If I put my brain through the shredder, not only would I die, but I’d also no longer be experiencing the impression of looking at my sweater.
Why does the fact that individual particles are not red prompt, for you, the question of what is red, but the fact that individual particles are not sweaters does not prompt the question of what sweaters are?
Why do I believe it, or why do I insist on it? I believe it because I see it, and I insist on it because other people keep telling me that redness is actually some thing which doesn’t sound red.
Why does the fact that individual particles are not red prompt, for you, the question of what is red, but the fact that individual particles are not sweaters does not prompt the question of what sweaters are?
When you say a “sweater just is a spatial relationship between a vague set of particles”, it is not mysterious how that set of particles manages to have the properties that define a sweater. If we ignore the aspect of purpose (meant to be worn), and consider a sweater to be an object of specified size, shape, and substance—I know and can see how to achieve those properties by arranging atoms appropriately.
But when you say that experiencing redness is also a matter of atoms in your brain arranging themselves appropriately, I see that as an expression of physicalist faith. Not only are the details unknown, but it is a complete mystery as to how, even in principle, you would make redness by combining the entities and properties available in physical ontology. And this is not the case for size, shape, and substance; it is not at all mysterious how to make those out of basic physics.
As I say in the main article, the specific proposals to get subjective color out of physics are actually property dualisms. They posit an identity between the actual color, that we see in experience, and some complicated functional, computational, or other physical property of part of the brain. My position is that the color experience, the thing we are trying to understand, is nothing like the thing on the other side of the alleged identity; so if that’s your theory, you should be a property dualist. I want to be a monist, but that is going to require a new physical ontology, in which things that do look like experiences are among the posited entities.
people keep telling me that redness is actually some thing which doesn’t sound red.
Sound red? If nothing sounds red, that means you are free of a particular sort of synesthesia. :P
Anyway: Suppose somebody ever-so-carefully saws open your skull and gives your brain a little electric shock in just exactly the right place to cause you to taste key lime pie. The shock does something we understand in terms of physics. It encourages itty bits of your brain to migrate to new locations. The electric shock does not have a mystical secondary existence on a special experiences-only plane that’s dormant until it gets near your brain but suddenly springs into existence when it’s called upon to generate pie taste, does it?
We don’t know enough about the brain to do this manually, as it were, for all possible experiences; or even anything very specific. fMRIs will tell us the general neighborhood of brain activity as it happens; and hormone levels will tell us some of the functions of some of the soup in which the organ swims; and electrical brain stimulation experiments will let us investigate more directly. Of course we aren’t done yet. The brain is fantastically complicated and there’s this annoying problem where if we mess with it too much we get charged with murder and thrown in jail. But they have yet to run into a wall; why do you think that, when they’ve found where pain lives and can tell you what you must have injured if you display behavior pattern X, they’re suddenly going to stop making progress?
My position is that the color experience, the thing we are trying to understand, is nothing like the thing on the other side of the alleged identity
So your problem is that the two things lack a primitive resemblance? My sweater’s unproblematic because it and the yarn that was used to make it are… what, the same color? Both things you already associate with sweaters? If somebody told you that the brain doesn’t handle emotions, the heart does—the literal, physical heart—would you buy that more readily because the heart is traditionally associated with emotion and that sounds right, so there’s some kind of resemblance going on? If that’s what’s going on, I hope having it pointed out helps you discard that… “like”, in English, contains so much stuff all curled up in it that if you’re relying on your concept thereof to guide your theorizing, you’re in deep trouble.
So your problem is that the two things lack a primitive resemblance?
They lack any resemblance at all. A trillion tiny particles moving in space is nothing like a “private shade of homogeneous pink”, to use the phrase from Dennett (he has quite a talent for describing things which he then says aren’t there). And yet one is supposed to be the same thing as the other.
The evidence from neuroscience is, originally, data about correlation. Color, taste, pain have some relationship with physical brain events. The correlation itself is not going to tell you whether to be an eliminativist, a property dualist, or an identity theorist.
I am interested in the psychological processes contributing to this philosophical choice but I do not understand them yet. What especially interests me is the response to this “lack of resemblance” issue, when a person who insists that A is B concedes that A does not “resemble” B. My answer is to say that B is A—that physics is just formalism, implying nothing about the intrinsic nature of the entities it describes, and that conscious experience is giving us a glimpse of the intrinsic nature of at least a few of those entities. Physics is actually about pink, rather than about particles. But what people seem to prefer is to deny pinkness in favor of particles, or to say that the pinkness is what it’s like to be those particles, etc.
A trillion tiny particles moving in space is nothing like a “private shade of homogeneous pink”
A trillion tiny particles moving in space is like a “private shade of homogeneous pink” in that it reflects light that stimulates nerves that generate a private shade of homogeneous pink. If you forbid even this relationship, you’ve assumed your conclusion. If not, you use “nothing” too freely. If this is a factual claim, and not an assumption, I’d like to see the research and experiments corroborating it, because I doubt they exist, or, indeed, are even meaningfully possible at this time.
To use my previous example, the electrical impulses describing a series of ones and zeroes are “nothing like” lesswrong.com, yet here we are.
I’m referring to the particles in the brain, some aspect of which is supposed to be the private shade of color.

I don’t see how this is meaningfully distinct from Alicorn’s sweater. Sweater-ness is not a property of cloth fibers or buttons.
I think the real problem here is that consciousness is so dark and mysterious. Because the units are so small and fragile, we can’t really take it apart and put it back together again, or hit it with a hammer and see what happens. Our minds really aren’t evolved to think about it, and, without the ability to take it apart and put it back together and make it happen in a test tube—taking good samples seems to rather break the process—it’s extremely difficult to force our minds to think about it. By contrast, we’re quite used to thinking about sweaters or social organization or websites. We may not be used to thinking about, say, photosynthesis or the ATP cycle, but we can take them apart and put them back together again, and recreate them in a test tube.
It might behoove you to examine Luciano Floridi’s treatment of “Levels of Abstraction”—he seems to be getting at much the same thing, if I’m understanding you correctly. To read it in a pragmatist light: there’s a certain sense in which we want to talk about particles, and a sense in which we want to talk about pinkness, and on the face of it there’s no reason to prefer one over another.
It does make sense to assert that Physics is trying to explain “pinkness” via particles, and is therefore about pinkness, not about particles.
Where is lesswrong.com? “On the internet” would be the naive answer, but there’s no part of the internet we could naively recognize as being lesswrong.com. A bunch of electrical impulses get interpreted as ones and zeroes which get translated in a certain language, which converts them into another language (English), which each mind interacting with the site translates in its own way. At the base level, before minds get involved, there’s nothing more complex than a bunch of magnets and electric signals and some servers and so on (I’m not a computer person, so cut me some slack on the details). Yet, out of all of that emerges your post, this comment, and so on.
I know that it is in principle possible to understand how all of this comes together, but I also know that I do not in fact understand it. If I were really to look at how complex this site is—down to the level of the chemist who makes the fertilizer to supply the farmer who feeds the truck driver who delivers the petroleum that gets refined into the plastic that makes the keyboard of the engineer who maintains the power plant that keeps the server running—I have absolutely no idea what’s going on, and probably never would, even if I devoted my entire life to understanding how this website comes together. In fact, I have good reason to believe there are parts of what’s going on that I don’t even know are parts of what’s going on—I don’t even understand the basic underlying structure at a complete level. But if a bunch of people were really dedicated to it, they could probably figure it out, so that by asking the group of them, you could figure out what you needed to know about how the site works; in other words, it is in principle understandable, even if no one understands it in its entirety.
There is thus nothing particularly problematic about saying, “So, I don’t get how this whole consciousness thing works, but there’s probably no magic involved,” just as there’s no magic (excepting EY’s magic) involved in putting this site together. Saying, “I can’t naively figure out how some extremely complicated system works, therefore, the answer is: magic!” is simply not a reasonable solution. It is possible that there is something more going on in the brain than we can currently understand, but it seems exceedingly unlikely that it is in principle un-understandable.
There is thus nothing particularly problematic about saying, “So, I don’t get how this whole consciousness thing works, but there’s probably no magic involved,” just as there’s no magic (excepting EY’s magic) involved in putting this site together.
If I were to say to you that negative numbers can be made by adding together positive numbers, you just have to add them together in the right way—that would sound strange and wrong, yes? If you start at 1, and keep adding 1, you do not expect your sum to equal −1 (or the square root of −1, or an apple) at any stage. When people say that they do not see how piling up atoms can give rise to color, meaning, consciousness, etc., they are engaged in this sort of reasoning. They’re saying: I may not know every property that very large numbers / very large piles of atoms would exhibit, but it would be magic to get that property from those ingredients.
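(To make the arithmetic behind this intuition explicit, added here purely for illustration: if every term is positive, then for every finite n we have \sum_{k=1}^{n} a_k \geq a_1 > 0, so no finite stage of the summation can equal −1; any claim to the contrary has to reinterpret the infinite limit, which is exactly what the divergent-series remarks further down this thread do.)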
The problem with the analogy is that we know a whole lot about numbers—math is an artificial language which we created and decided upon the axioms of. How do you know enough about matter and neurons to know that it relates to consciousness in the way that adding positive numbers relates to negative numbers or apples? But I’ve made this point before.
What I would find more interesting is an explanation of what magic would do here. It seems obvious that our perception of a homogeneous shade of pink is, in some significant way, related to lightwave frequencies, retinas, and neurons. Let’s assume there is some “magic” involved that in turn converts these physical phenomena into an experience. Wouldn’t it have to interact with neurons and such, so that it generates an experience of pink and not an experience of strawberry-rhubarb pie? If it’s epiphenomenal, how could it accomplish this, and how could it be meaningful? If it’s not epiphenomenal, how does it interact with actual matter? Why can’t we detect it?
It’s quite clear that when it comes to how consciousness works, the current best answer is, “We don’t get it, but it has something to do with the brain and neurons.” Answering, “We don’t get it, but it has something to do with the brain and neurons and magic” appears to be an inferior answer.
This may be a cheap shot around these parts, but the non-materialist position feels a lot like an argument for the existence of God.
It’s quite clear that when it comes to how consciousness works, the current best answer is, “We don’t get it, but it has something to do with the brain and neurons.” Answering, “We don’t get it, but it has something to do with the brain and neurons and magic” appears to be an inferior answer.
This is perfect and I’m not sure there is much more to say.
How do you know enough about matter and neurons to know that it relates to consciousness in the way that adding positive numbers relates to negative numbers or apples?
It’s our theories of matter which are the problem—and which are clear enough for me to say that something is missing. My position as stated here actually is an identity theory. Experiences are a part of the brain and are causally relevant. But the ontology of physics is wrong, and the attempted reduction of phenomenology to that ontology is also wrong. Instead, phenomenology is giving us a glimpse of the true ontology. All that we see directly is the inner ontology of the conscious experience itself, but one supposes that there is some relationship to the ontology of everything else.
\sum_{n=0}^{\infty} 2^n “=” −1.

That is a bit tongue in cheek, but there are divergent sums that are used in serious physical calculations.

I’m curious about this. More details please!

These mostly crop up in quantum field theory, where various formal expressions have infinite values. These can often be “regularized” to give finite results, or at least turned into a form that, while still infinite, can be “renormalized” by such means as considering various terms as referring to observed values, rather than the “bare values”, which are carefully tweaked (often taking limits as they go to zero) in a coordinated way, so that the observed values remain okay.
Letting s be the sum above, in some sense what we’re “really” saying is that s = 1 + 2 s, which can be seen by formal manipulation. This has two solutions in the (one-point compactification of) the complex numbers: infinity, and −1. When doing things like summing Feynman diagrams, we can have similar things where a physical propagator is essentially described as a bare propagator plus perturbative terms that should be written in terms of products of propagators, leading again to infinite series that diverge (several interlocked infinite series, actually—the photon propagator should include terms with each charged particle, the electron should include terms with photon intermediates, etc.).
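(For concreteness, the manipulation referred to here can be written out in one line; nothing below goes beyond the comment above except the standard 2-adic remark: s = \sum_{n=0}^{\infty} 2^n = 1 + 2(1 + 2 + 4 + \cdots) = 1 + 2s, hence s = −1, the other solution of s = 1 + 2s on the Riemann sphere being s = \infty. In the 2-adic metric, where |2^N|_2 = 2^{-N} \to 0, the partial sums 2^{N+1} − 1 genuinely converge to −1.)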
IIRC, the Casimir effect can be explained by using zeta function regularization to sum up contributions of an infinite number of vacuum modes, though it is certainly not the only way to perform the calculation.
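(As a rough illustration of what zeta function regularization does, under the usual textbook conventions and not anything specific to this thread: one defines \zeta(s) = \sum_{n=1}^{\infty} n^{-s} for \operatorname{Re}(s) > 1 and analytically continues it, so that, e.g., 1 + 2 + 3 + \cdots “=” \zeta(−1) = −1/12 and 1 + 8 + 27 + \cdots “=” \zeta(−3) = 1/120. Summing the vacuum-mode frequencies between two parallel plates a distance d apart with this prescription reproduces the standard Casimir energy per unit area, E/A = −\pi^2 \hbar c / (720 d^3).)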
http://cornellmath.wordpress.com/2007/07/28/sum-divergent-series-i/ and the next two posts are a nice introduction to some of these methods.

Wikipedia has a fair number of examples:

http://en.wikipedia.org/wiki/1_−_2_%2B_3_−_4_%2B_·_·_·
http://en.wikipedia.org/wiki/1_−_2_%2B_4_−_8_%2B_·_·_·
http://en.wikipedia.org/wiki/1_%2B_1_%2B_1_%2B_1_%2B_·_·_·
http://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_·_·_·

Explicit physics calculations I do not have at the ready.
EDIT: please do not take the descriptions of the physics above too seriously. It’s not quite what people actually do, but it’s close enough to give some of the flavor.
wnoise hits it out of the park!

Can you clarify why

When people say that they do not see how piling up atoms can give rise to color, meaning, consciousness, etc., they are engaged in this sort of reasoning.
does not also apply to the piling up of degrees of freedom in a quantum monad?
I have another question, which I expect someone has already asked somewhere, but I doubt I’ll be able to find your response, so I’ll just ask again. Would a simulation of a conscious quantum monad by a classical computation also be conscious?
Answer to the first question: everything I say here about redness applies to the other problem topics. The final stage of a monadic theory is meant to have the full ontology of consciousness. But it may have an intermediate stage in which it is still just mathematical.
Answer to the second question: No. In monadology (at least as I conceive it), consciousness is only ever a property of individual monads, typically in very complex states. Most matter consists of large numbers of individual monads in much simpler states, and classical computation involves coordinating their interactions so that the macrostates implement the computation. So you should be able to simulate the dynamics of a single complex monad using many simple monads, but that’s all.
And would such a computation claim to be conscious, p-zombie-style?

If the conscious being it was simulating would do so, then yes.
On the general topic of simulation of conscious beings, it has just occurred to me… Most functionalists believe a simulation would also be conscious, but a giant look-up table would not be. But if the conscious mind consists of physically separable subsystems in interaction—suppose you try simulating the subsystems with look-up tables, at finer and finer grains of subdivision. At what point would the networked look-up-tables be conscious?
Would a silicon-implemented Mitchell Porter em, for no especial reason (lacking consciousness, it can have none), attempt to reimplement itself in a physical system with a quantum monad?
In terms of current physics, a monad is supposed to be a lump of quantum entanglement, and there are blueprints for a silicon quantum computer in which the qubits are dopant phosphorus atoms. So consciousness on a chip is not in itself a problem for me, it just needs to be a quantum chip.
But you’re talking about an unconscious classical simulation. OK. The intuition behind the question seems to be: because of its beliefs about consciousness, the simulation will think it can’t be conscious in its current form, and will try to make itself so. It doesn’t sound very likely. But it’s more illuminating to ask a different question: what happens when an unconscious simulation of a conscious mind, holding a theory about consciousness according to which such a simulation cannot be conscious, is presented with evidence that it is such a simulation itself?
First, we should consider the conscious counterpart of this, namely: an actually conscious being, with a theory of consciousness, is presented with evidence that it is the sort of thing that cannot be conscious according to its theory. To some extent this is what happened to the human race. The basic choice is whether to change the theory or to retain it. It’s also possible to abandon the idea of consciousness; or even to retain the concept of consciousness but decide that it doesn’t apply to you.
So, let’s suppose I discover that my skull is actually full of silicon chips, not neurons, and that they appear to only be performing classical computations. This would be a rather shocking discovery for a lot of mundane reasons, but let’s suppose we get those out of the way and I’m left with the philosophical problem. How do I respond?
To begin with, the situation hasn’t changed very much! I used to think that I had a skull full of neurons which appear to only be performing classical computations. But I also used to think that, in reality, there was probably something quantum happening as well, and so took an interest in various speculations about quantum effects in the brain. If I find my brain to in fact be made of silicon chips, I can still look for such effects, and they really might be there.
To take the thought experiment to its end, I have to suppose that the search turns up nothing. The quantum crosstalk is too weak to have any functional significance. Where do I turn then? But first, let’s forget about the silicon aspect here. We can pose the thought experiment in terms of neurons. Suppose we find no evidence of quantum crosstalk between neurons. Everything is decoherent, entanglement is at a minimum. What then?
There are a number of possibilities. Of course, I could attempt to turn to one of the many other theories of consciousness which assume that the brain is only a classical computer. Or, I could turn to physics and say the quantum coherence is there, but it’s in some new, weakly interacting particle species that shadows the detectable matter of the brain. Or, I could adopt some version of the brain-in-a-vat hypothesis and say, this simply proves that the world of appearances is not the real world, and in the real world I’m monadic.
Now, back to the original scenario. If we have an unconscious simulation of a mind with a monadic theory of consciousness, and the simulation discovers that it is apparently not a monad, it could react in any of those ways. Or rather, it could present us with the simulation of such reactions. The simulation might change its theory; it might look for more data; it might deny the data. Or it might simulate some more complicated psychological response.
Thanks for clearing up the sloppiness of my query in the process of responding to it. You enumerated a number of possible responses, but you haven’t committed a classical em of you to a specific one. Are you just not sure what it would do?
It’s a very hypothetical scenario, so being not sure is, surely, the correct response. But I revert to pondering what I might do if in real life it looks like conscious states are computational macrostates. I would have to go on trying to find a perspective on physics whereby such states exist objectively and have causal power, and in which they could somehow look like or be identified with subjective experience. Insofar as my emulation concerned itself with the problem of consciousness, it might do that.
Thanks for entertaining this thought experiment.

I think Eliezer Yudkowsky’s remarks on giant lookup tables in the Zombie Sequence just about cover the interesting questions.

The reason lookup tables don’t work is that you can’t change them. So you can use a lookup table for, e.g., the shape of an action potential (essentially the same everywhere), but not for the strengths of the connections between neurons, which are neuroplastic.
A LUT can handle change, if it encodes a function of type (Input × State) → (Output × State).

Since I can manually implement any computation a Turing machine can, for some subsystem of me, that table will have to contain the “full computation” table that checks every possible computation for whether it halts before I die. I submit such a table is not very interesting.

I submit that such a table is not particularly less interesting than a Turing machine.
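To make the (Input × State) → (Output × State) idea concrete, here is a minimal sketch in Python, with a toy table and names invented purely for illustration; the table itself is fixed, while the state it threads through is what changes:

    # A lookup table of type (input, state) -> (output, next_state).
    # The table never changes, but because the next state is part of its
    # output, the system it drives can still change over time.
    LUT = {
        ("ping", "idle"): ("ack", "busy"),
        ("ping", "busy"): ("wait", "busy"),
        ("done", "busy"): ("ok", "idle"),
        ("done", "idle"): ("noop", "idle"),
    }

    def step(lut, state, inp):
        # One step of the table-driven system: pure lookup, no computation.
        return lut[(inp, state)]

    state = "idle"
    for inp in ["ping", "ping", "done"]:
        out, state = step(LUT, state, inp)
        print(inp, "->", out, "| state is now", state)

For anything brain-sized such a table would of course be astronomically large, which is the usual objection; the sketch is only meant to show the type signature at work.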
Suppose it turned out that the part of the brain devoted to experiencing (or processing) the color red actually was red, and similarly for the other colors. Would this explain anything?
Wouldn’t we then wonder why the part of the brain devoted to smelling flowers did not smell like flowers, and the part for smelling sewage didn’t stink?
Would we wonder why the part of the brain for hearing high pitches didn’t sound like a high pitch? Why the part which feels a punch in the nose doesn’t actually reach out and punch us in the nose when we lean close?
I can’t help feeling that this line of questioning is bizarre and unproductive.
Hal, what would be more bizarre—to say that the colors, smells, and sounds are somewhere in the brain, or to say that they are nowhere at all? Once we say that they aren’t in the world outside the brain, saying they are inside the brain is the only place left, unless you’re a dualist.
Most people here are saying that these things are in the brain, and that they are identical with some form of neural computation. My objection is that the brain, as currently understood by physics, consists of large numbers of particles moving in space, and there is no color, smell, or sound in that. I think the majority response to that is to say that color, smell, sound is how the physical process in question “feels from the inside”—to which I say that this is postulating an extra property not actually part of physics, the “feel” of a physical configuration, and so it’s property dualism.
If the redness, etc, is in the brain, that doesn’t mean that the brain part in question will look red when physically examined from outside. Every example of redness we have was part of a subjective experience. Redness is interior to consciousness, which is interior to the thing that is conscious. How the thing that is conscious looks when examined by another thing that is conscious is a different matter.
Red is in your mind. It’s a sensation. It’s what it feels like inside when you’re looking at something you call red. Nothing is actually red, it’s just a verbal symbol you assign to a particular inner sensation.
I will very, very happily grant that we do not have a good explanation for how the brain creates such subjective, inner sensations. Notice I wrote “how” and not “why”.
There is no such thing as a zombie as they are usually defined, because every time you make a brain, so long as it is physically indistinguishable from a regular brain, it’s going to be conscious. That’s what happens when you make a brain that way (you can try it out by having a child). On the other hand, if we allow the definition of a zombie to be changed just a little bit, so that it includes unconscious things that are merely behaviorally indistinguishable from conscious people, then I see no problem at all with that kind of zombie. But their insides would be physically, detectably different. Their brains would not work the same way as ours. If they did, then they would be conscious.
Regular brains produce private, inner sensation, just as surely as the sun produces radiation. The right question is not why should this be so?, but how does it do it? And I grant that this is an unanswered question. Heterophenomenology is just fine as far as it goes, but it doesn’t go this far. All that stuff about only asking why an agent verbalizes or believes that it sees red is profoundly unsatisfying as an explanation for redness, and for a very good reason. It’s like behaviorism in psychology—maybe fine for some purposes, but inherently limited, in the negative sense. It ignores the inner sensation that we all know is there.
Now, that’s no reason to suppose that the red sensation is not explainable or reducible. We just don’t understand the brain well enough yet, so we’ll just have to keep thinking about it and wait until we do.
Did anyone else notice the similarity of Mitchell’s arguments in this post, and the one in his comment to one of my posts? Here he says that there is no color in a purely physical description of a mind, and in his comment to my post he said that there is no utility function in a purely physical description of a mind.
I think this argument actually works better here (with color), because my counter-argument to his comment doesn’t work. What I said was that in principle we know how to go from a utility function to a physical description of an object (by creating an AI) and so in principle we also know how to go from a physical description to a utility function.
Here, we don’t know how to go from a color to a physical description of a mind that can experience that color, nor can we tell what color a mind is experiencing or capable of experiencing, given a physical description of it. But I’m not sure we should expect this state of affairs to continue forever.
Of course, this only answers where the behavior of seeing color is—but then the correspondence between (introspective) behavior and experience is strict, even if the way in which it is achieved may be nontrivial.
I’m not quite sure I understand the problem with blueness as you see it.
Suppose neuroscience was advanced enough that it could manipulate your perception of colors in any arbitrary way just by manipulating your neurons. For example, they could make you perceive blue as you previously perceived red and the other way round, induce synaesthesia and make you perceive the smell of roses, the taste of salt, the note C or other things as blue. They could change your perception of color completely, leaving your new perception of colors only as similar to your old one as that one was to your perception of smells, flavor or sounds. If all of this was true, would that be enough for you to accept that blueness is sufficiently explicable by the behaviour of neurons alone? Or would you argue that while neurons are enough to induce the sensation of blueness, this sensation itself is still something beyond the mere behaviour of neurons?
The redness must be in the monad. The point of postulating monads is to have an ontological correlate to the phenomenological unity of consciousness. Redness is interior to consciousness, which is interior to the thing that is conscious. A theory of monads might have an intermediate stage of incomplete descriptions, in which it is described purely formally, mathematically, and causally, but the objective is to have a theory in which there is something recognizably identical with a total instantaneous conscious experience. This is also the point of reverse monism.
The simple “Our experience is what the world looks like if you happen to be a bunch of neurons firing inside an ape’s head” is a surprisingly strong reply to the questions you raised. If we take neurons that are put together like they are in a brain, and give it sensory input that resembles a blue ball, we can ask that brain what it thinks it’s seeing. It’ll answer “blue ball”, and there’s nothing weird happening here. Here, answering, thinking or even seeing the blue ball is simply a brain state, a purely physical phenomenon.
The mystery, as far as I can tell, happens when we notice that we could theoretically try to see the world as that brain would see it, and suddenly we are actually experiencing that blue ball and all the qualia that come with it. Now the magic is in a much smaller space: There is no magic happening when a brain claims it sees blue things, and there’s nothing mysterious when we take the brain’s point of view and try to understand how we’d see the world if we were just a brain. So the mystery of consciousness seems to hide in “in a world with no observers, can we sensibly talk about what the world would be like if seen from a non-sentient thing’s perspective”. If we can, the qualia seem to be in every aspect identical to that perspective.
So what’s the ontology of perspective? I have no idea, but perspective seems to go hand in hand with something physical even existing, so we could be strictly materialistic while still acknowledging the existence of qualia.
This is a bit of a simplification, actually… some of the colors we can see are not a single wavelength. That’s why a rainbow doesn’t contain every color we can see.
This article contains three simple questions which I want to see answered. To organize the discussion, I’m creating a thread for each question, so people with an answer can state it or link to it. If you link, please provide a brief summary of your answer here as well.
First question: Where is color?
I see a red apple. The redness, I grant you, is not a property of the thing that grew on the tree, the object outside my skull. It’s the sensation or perception of the apple which is red. However, I do insist that something is red. But if reality is nothing but particles in space, and empty space is not red, and the particles are not red, then what is? What is the red thing; where is the redness?
Why?
That aside: Red is something that arises out of things that are not themselves red. Right now I’m wearing a sweater, which is made out of things that are not themselves sweaters (plastic, cotton with a little nylon and spandex; or on a higher level, buttons, sleeves, etc.) A sweater came into existence when the sleeves were sewn onto the rest of it (or however it was pieced together); no particles, however, came into existence. My sweater just is a spatial relationship between a vague set of particles (I say “vague” because it could lose a button and still be a sweater, but unlike a piece of lint, the button is really part of the sweater). If I put it through the shredder, no particles would be destroyed, but it would not be a sweater.
My sweater is also red. When I look at it, I experience the impression of looking at a red object. No particles come into existence when I start to have this experience, but they do arrange differently. Some of the ways my brain can be arranged are such that when they are instantiated, I experience the impression of looking at something red—such as the ones that come into play when I look at my sweater in light good enough for me to have color vision. If I put my brain through the shredder, not only would I die, but I’d also no longer be experiencing the impression of looking at my sweater.
Why does the fact that individual particles are not red prompt, for you, the question of what is red, but the fact that individual particles are not sweaters does not prompt the question of what sweaters are?
Why do I believe it, or why do I insist on it? I believe it because I see it, and I insist on it because other people keep telling me that redness is actually some thing which doesn’t sound red.
When you say a “sweater just is a spatial relationship between a vague set of particles”, it is not mysterious how that set of particles manages to have the properties that define a sweater. If we ignore the aspect of purpose (meant to be worn), and consider a sweater to be an object of specified size, shape, and substance—I know and can see how to achieve those properties by arranging atoms appropriately.
But when you say that experiencing redness is also a matter of atoms in your brain arranging themselves appropriately, I see that as an expression of physicalist faith. Not only are the details unknown, but it is a complete mystery as to how, even in principle, you would make redness by combining the entities and properties available in physical ontology. And this is not the case for size, shape, and substance; it is not at all mysterious how to make those out of basic physics.
As I say in the main article, the specific proposals to get subjective color out of physics are actually property dualisms. They posit an identity between the actual color, that we see in experience, and some complicated functional, computational, or other physical property of part of the brain. My position is that the color experience, the thing we are trying to understand, is nothing like the thing on the other side of the alleged identity; so if that’s your theory, you should be a property dualist. I want to be a monist, but that is going to require a new physical ontology, in which things that do look like experiences are among the posited entities.
Sound red? If nothing sounds red, that means you are free of a particular sort of synesthesia. :P
Anyway: Suppose somebody ever-so-carefully saws open your skull and gives your brain a little electric shock in just exactly the right place to cause you to taste key lime pie. The shock does something we understand in terms of physics. It encourages itty bits of your brain to migrate to new locations. The electric shock does not have a mystical secondary existence on a special experiences-only plane that’s dormant until it gets near your brain but suddenly springs into existence when it’s called upon to generate pie taste, does it?
We don’t know enough about the brain to do this manually, as it were, for all possible experiences; or even anything very specific. fMRIs will tell us the general neighborhood of brain activity as it happens; and hormone levels will tell us some of the functions of some of the soup in which the organ swims; and electrical brain stimulation experiments will let us investigate more directly. Of course we aren’t done yet. The brain is fantastically complicated and there’s this annoying problem where if we mess with it too much we get charged with murder and thrown in jail. But they have yet to run into a wall; why do you think that, when they’ve found where pain lives and can tell you what you must have injured if you display behavior pattern X, they’re suddenly going to stop making progress?
So your problem is that the two things lack a primitive resemblance? My sweater’s unproblematic because it and the yarn that was used to make it are… what, the same color? Both things you already associate with sweaters? If somebody told you that the brain doesn’t handle emotions, the heart does—the literal, physical heart—would you buy that more readily because the heart is traditionally associated with emotion and that sounds right, so there’s some kind of resemblance going on? If that’s what’s going on, I hope having it pointed out helps you discard that… “like”, in English, contains so much stuff all curled up in it that if you’re relying on your concept thereof to guide your theorizing, you’re in deep trouble.
They lack any resemblance at all. A trillion tiny particles moving in space is nothing like a “private shade of homogeneous pink”, to use the phrase from Dennett (he has quite a talent for describing things which he then says aren’t there). And yet one is supposed to be the same thing as the other.
The evidence from neuroscience is, originally, data about correlation. Color, taste, pain have some relationship with physical brain events. The correlation itself is not going to tell you whether to be an eliminativist, a property dualist, or an identity theorist.
I am interested in the psychological processes contributing to this philosophical choice but I do not understand them yet. What especially interests me is the response to this “lack of resemblance” issue, when a person who insists that A is B concedes that A does not “resemble” B. My answer is to say that B is A—that physics is just formalism, implying nothing about the intrinsic nature of the entities it describes, and that conscious experience is giving us a glimpse of the intrinsic nature of at least a few of those entities. Physics is actually about pink, rather than about particles. But what people seem to prefer is to deny pinkness in favor of particles, or to say that the pinkness is what it’s like to be those particles, etc.
A trillion tiny particles moving in space is like a “private shade of homogeneous pink” in that it reflects light that stimulates nerves that generate a private shade of homogenous pink. If you forbid even this relationship, you’ve assumed your conclusion. If not, you use “nothing” too freely. If this is a factual claim, and not an assumption, I’d like to see the research and experiments corroborating it, because I doubt they exist, or, indeed, are even meaningfully possible at this time.
To use my previous example, the electrical impulses describing a series of ones and zeroes are “nothing like” lesswrong.com, yet here we are.
I’m referring to the particles in the brain, some aspect of which is supposed to be the private shade of color.
I don’t see how this is meaningfully distinct from Alicorn’s sweater. Sweater-ness is not a property of cloth fibers or buttons.
I think the real problem here is that consciousness is so dark and mysterious. Because the units are so small and fragile, we can’t really take it apart and put it back together again, or hit it with a hammer and see what happens. Our minds really aren’t evolved to think about it, and, without the ability to take it apart and put it back together and make it happen in a test tube—taking good samples seems to rather break the process—it’s extremely difficult to force our minds to think about it. By contrast, we’re quite used to thinking about sweaters or social organization or websites. We may not be used thinking about say, photosynthesis or the ATP cycle, but we can take them apart and put them back together again, and recreate them in a test tube.
It might behoove you to examine Luciano Floridi’s treatment of “Levels of Abstraction”—he seems to be getting at much the same thing, if I’m understanding you correctly. To read in in a pragmatist light: there’s a certain sense in which we want to talk about particles, and a sense in which we want to talk about pinkness, and on the face of it there’s no reason to prefer one over another.
It does make sense to assert that Physics is trying to explain “pinkness” via particles, and is therefore about pinkness, not about particles.
Where is lesswrong.com? “On the internet” would be the naive answer, but there’s no part of the internet we could naively recognize as being lesswrong.com. A bunch of electrical impulses get interpreted as ones and zeroes which get translated in a certain language, which converts them into another language (English), which each mind interacting with the site translates in its own way. At the base level, before minds get involved, there’s nothing more complex that a bunch of magnets and electric signals and some servers and so on (I’m not a computer person, so cut me some slack on the details). Yet, out of all of that emerges your post, this comment, and so on.
I know that it is in principle possible to understand how all of this comes together, but I also know that I do not in fact understand it. If I were really to look at how complex this site is—down to the level of the chemist who makes the fertilizer to supply the farmer who feeds the truck driver who delivers the petroleum that gets refined into the plastic that makes the keyboard of the engineer who maintains the power plant that keeps the server running—I have absolutely no idea what’s going on, and probably never will even if I devoted my entire life to understanding how this website comes together. In fact, I have good reason to believe there are parts of what’s going on that I don’t even know are parts of what are going on—I don’t even understand the basic underlying structure at a complete level. But if a bunch of people were really dedicated to it, they could probably figure it out, so that by asking the group of them, you could figure out what you needed to know about how the site works; in other words, it is in principle understandable, even if no one understands it in its entirety.
There is thus nothing particularly problematic about saying, “So, I don’t get how this whole consciousness thing works, but there’s probably no magic involved,” just as there’s no magic (excepting EY’s magic) involved in putting this site together. Saying, “I can’t naively figure out how some extremely complicated system works, therefore, the answer is: magic!” is simply not a reasonable solution. It is possible that there is something more going on in the brain that we can currently understand, but it seems exceedingly unlikely that it is in principle un-understandable.
If I were to say to you that negative numbers can be made by adding together positive numbers, you just have to add them together in the right way—that would sound strange and wrong, yes? If you start at 1, and keep adding 1, you do not expect your sum to equal −1 (or the square root of −1, or an apple) at any stage. When people say that they do not see how piling up atoms can give rise to color, meaning, consciousness, etc., they are engaged in this sort of reasoning. They’re saying: I may not know every property that very large numbers / very large piles of atoms would exhibit, but it would be magic to get that property from those ingredients.
The problem with the analogy is that we know a whole lot about numbers—math is an artificial language which we created and decided upon the axioms of. How do you know enough about matter and neurons to know that it relates to consciousness in the way that adding positive numbers relates to negative numbers or apples? But I’ve made this point before.
What I would find more interesting is an explanation of what magic would do here. It seems obvious that our perception of a homogenous shade of pink is, in some significant way, related to lightwave frequencies, retinas, and neurons. Let’s assume there is some “magic” involved that in turn converts this physical phenomena into an experience. Wouldn’t it have to interact with neurons and such, so that it generates an experience of pink and not an experience of strawberry-rhubarb pie? If it’s epiphenomenal, how could it accomplish this, and how could it be meaningful? If it’s not epiphenomenal, how does it interact with actual matter? Why can’t we detect it?
It’s quite clear that when it comes to how consciousness works, the current best answer is, “We don’t get it, but it has something to do with the brain and neurons.” Answering, “We don’t get it, but it has something to do with the brain and neurons and magic” appears to be an inferior answer.
This may be a cheap shot around these parts, but the non-materialist position feels a lot like an argument for the existence of God.
This is perfect and I’m not sure there is much more to say.
It’s our theories of matter which are the problem—and which are clear enough for me to say that something is missing. My position as stated here actually is an identity theory. Experiences are a part of the brain and are causally relevant. But the ontology of physics is wrong, and the attempted reduction of phenomenology to that ontology is also wrong. Instead, phenomenology is giving us a glimpse of the true ontology. All that we see directly is the inner ontology of the conscious experience itself, but one supposes that there is some relationship to the ontology of everything else.
\sum_{n=0}^{infinity} 2^n “=” −1.
That is a bit tongue in cheek, but there are divergent sums that are used in serious physical calculations.
I’m curious about this. More details please!
These mostly crop up in quantum field theory, where various formal expressions have infinite values. These can often be “regularized” to give finite results, or at least turned into a form that while still infinite, can be “renormalized” by such means as considering various terms as referring to observed values, rather than the “bare values”, which are carefully tweaked (often taking limits as they go to zero) in a coordinated way, so that the observed values remain okay.
Letting s be the sum above, in some sense what we’re “really” saying is that s = 1 + 2 s, which can be seen by formal manipulation. This has two solutions in the (one-point compactification of) the complex numbers: infinity, and −1. When doing things like summing Feynmann diagrams, we can have similar things where a physical propagator is essentially described as a bare propagator plus perturbative terms that should be written in terms of products of propagators, leading again to infinite series that diverge (several interlocked infinite series, actually—the photon propagator should include terms with each charged particle, the electron should include terms with photon intermediates, etc.).
IIRC, The Casimir effect can be explained by using Zeta function regularization to sum up contributions of an infinite number of vaccuum modes, though it is certainly not the only way to perform the calculation
http://cornellmath.wordpress.com/2007/07/28/sum-divergent-series-i/ and the next two posts are a nice introduction to some of these methods.
Wikipedia has a fair number of examples:
http://en.wikipedia.org/wiki/1_−_2_%2B_3_−_4_%2B_·_·_·
http://en.wikipedia.org/wiki/1_−_2_%2B_4_−_8_%2B_·_·_·
http://en.wikipedia.org/wiki/1_%2B_1_%2B_1_%2B_1_%2B_·_·_·
http://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_·_·_·
Explicit physics calculations I do not have at the ready.
EDIT: please do not take the descriptions of the physics above too seriously. It’s not quite what people actually do, but it’s close enough to give some of the flavor.
wnoise hits it out of the park!
Can you clarify why
does not also apply to the piling up of degrees of freedom in a quantum monad?
I have another question, which I expect someone has already asked somewhere, but I doubt I’ll be able to find your response, so I’ll just ask again. Would a simulation of a conscious quantum monad by a classical computation also be conscious?
Answer to the first question: everything I say here about redness applies to the other problem topics. The final stage of a monadic theory is meant to have the full ontology of consciousness. But it may have an intermediate stage in which it is still just mathematical.
Answer to the second question: No. In monadology (at least as I conceive it), consciousness is only ever a property of individual monads, typically in very complex states. Most matter consists of large numbers of individual monads in much simpler states, and classical computation involves coordinating their interactions so that the macrostates implement the computation. So you should be able to simulate the dynamics of a single complex monad using many simple monads, but that’s all.
And would such a computation claim to be conscious, p-zombie-style?
If the conscious being it was simulating would do so, then yes.
On the general topic of simulation of conscious beings, it has just occurred to me… Most functionalists believe a simulation would also be conscious, but a giant look-up table would not be. But if the conscious mind consists of physically separable subsystems in interaction—suppose you try simulating the subsystems with look-up tables, at finer and finer grains of subdivision. At what point would the networked look-up-tables be conscious?
Would a silicon-implemented Mitchell Porter em, for no especial reason (lacking consciousness, it can have none), attempt to reimplement itself in a physical system with a quantum monad?
In terms of current physics, a monad is supposed to be a lump of quantum entanglement, and there are blueprints for a silicon quantum computer in which the qubits are dopant phosphorus atoms. So consciousness on a chip is not in itself a problem for me, it just needs to be a quantum chip.
But you’re talking about an unconscious classical simulation. OK. The intuition behind the question seems to be: because of its beliefs about consciousness, the simulation will think it can’t be conscious in its current form, and will try to make itself so. It doesn’t sound very likely. But it’s more illuminating to ask a different question: what happens when an unconscious simulation of a conscious mind, holding a theory about consciousness according to which such a simulation cannot be conscious, is presented with evidence that it is such a simulation itself?
First, we should consider the conscious counterpart of this, namely: an actually conscious being, with a theory of consciousness, is presented with evidence that it is the sort of thing that cannot be conscious according to its theory. To some extent this is what happened to the human race. The basic choice is whether to change the theory or to retain it. It’s also possible to abandon the idea of consciousness; or even to retain the concept of consciousness but decide that it doesn’t apply to you.
So, let’s suppose I discover that my skull is actually full of silicon chips, not neurons, and that they appear to only be performing classical computations. This would be a rather shocking discovery for a lot of mundane reasons, but let’s suppose we get those out of the way and I’m left with the philosophical problem. How do I respond?
To begin with, the situation hasn’t changed very much! I used to think that I had a skull full of neurons which appear to only be performing classsical computations. But I also used to think that, in reality, there was probably something quantum happening as well, and so took an interest in various speculations about quantum effects in the brain. If I find my brain to in fact be made of silicon chips, I can still look for such effects, and they really might be there.
To take the thought experiment to its end, I have to suppose that the search turns up nothing. The quantum crosstalk is too weak to have any functional significance. Where do I turn then? But first, let’s forget about the silicon aspect here. We can pose the thought experiment in terms of neurons. Suppose we find no evidence of quantum crosstalk between neurons. Everything is decoherent, entanglement is at a minimum. What then?
There are a number of possibilities. Of course, I could attempt to turn to one of the many other theories of consciousness which assume that the brain is only a classical computer. Or, I could turn to physics and say the quantum coherence is there, but it’s in some new, weakly interacting particle species that shadows the detectable matter of the brain. Or, I could adopt some version of the brain-in-a-vat hypothesis and say, this simply proves that the world of appearances is not the real world, and in the real world I’m monadic.
Now, back to the original scenario. If we have an unconscious simulation of a mind with a monadic theory of consciousness, and the simulation discovers that it is apparently not a monad, it could react in any of those ways. Or rather, it could present us with the simulation of such reactions. The simulation might change its theory; it might look for more data; it might deny the data. Or it might simulate some more complicated psychological response.
Thanks for clearing up the sloppiness of my query in the process of responding to it. You enumerated a number of possible responses, but you haven’t committed a classical em of you to a specific one. Are you just not sure what it would do?
It’s a very hypothetical scenario, so being not sure is, surely, the correct response. But I revert to pondering what I might do if in real life it looks like conscious states are computational macrostates. I would have to go on trying to find a perspective on physics whereby such states exist objectively and have causal power, and in which they could somehow look like or be identified with subjective experience. Insofar as my emulation concerned itself with the problem of consciousness, it might do that.
Thanks for entertaining this thought experiment.
I think Eliezer Yudkowsky’s remarks on giant lookup tables in the Zombie Sequence just about cover the interesting questions.
The reason lookup tables don’t work is that you can’t change them. So you can use a lookup table for, e.g., the shape of an action potential (essentially the same everywhere), but not for the strengths of the connections between neurons, which are neuroplastic.
A LUT can handle change, if it encodes a function of type (Input × State) → (Output × State).
Since I can manually implement any computation a Turing machine can, for some subsystem of me, that table will have to contain the “full computation” table that checks every possible computation for whether it halts before I die. I submit such a table is not very interesting.
I submit that such a table is not particularly less interesting than a Turing machine.
Suppose it turned out that the part of the brain devoted to experiencing (or processing) the color red actually was red, and similarly for the other colors. Would this explain anything?
Wouldn’t we then wonder why the part of the brain devoted to smelling flowers did not smell like flowers, and the part for smelling sewage didn’t stink?
Would we wonder why the part of the brain for hearing high pitches didn’t sound like a high pitch? Why the part which feels a punch in the nose doesn’t actually reach out and punch us in the nose when we lean close?
I can’t help feeling that this line of questioning is bizarre and unproductive.
Hal, what would be more bizarre—to say that the colors, smells, and sounds are somewhere in the brain, or to say that they are nowhere at all? Once we say that they aren’t in the world outside the brain, inside the brain is the only place left, unless you’re a dualist.
Most people here are saying that these things are in the brain, and that they are identical with some form of neural computation. My objection is that the brain, as currently understood by physics, consists of large numbers of particles moving in space, and there is no color, smell, or sound in that. I think the majority response to that is to say that color, smell, sound is how the physical process in question “feels from the inside”—to which I say that this is postulating an extra property not actually part of physics, the “feel” of a physical configuration, and so it’s property dualism.
If the redness, etc, is in the brain, that doesn’t mean that the brain part in question will look red when physically examined from outside. Every example of redness we have was part of a subjective experience. Redness is interior to consciousness, which is interior to the thing that is conscious. How the thing that is conscious looks when examined by another thing that is conscious is a different matter.
Red is in your mind. It’s a sensation. It’s what it feels like inside when you’re looking at something you call red. Nothing is actually red; “red” is just a verbal symbol you assign to a particular inner sensation.
I will very, very happily grant that we do not have a good explanation for how the brain creates such subjective, inner sensations. Notice I wrote “how” and not “why”.
There is no such thing as a zombie as they are usually defined, because every time you make a brain, so long as it is physically indistinguishable from a regular brain, it’s going to be conscious. That’s what happens when you make a brain that way (you can try it out by having a child). On the other hand, if we allow the definition of a zombie to be changed just a little bit, so that it includes unconscious things that are merely behaviorally indistinguishable from conscious people, then I see no problem at all with that kind of zombie. But their insides would be physically, detectably different. Their brains would not work the same way as ours. If they did, then they would be conscious.
Regular brains produce private, inner sensation, just as surely as the sun produces radiation. The right question is not why should this be so?, but how does it do it? And I grant that this is an unanswered question. Heterophenomenology is just fine as far as it goes, but it doesn’t go this far. All that stuff about only asking why an agent verbalizes or believes that it sees red is profoundly unsatisfying as an explanation for redness, and for a very good reason. It’s like behaviorism in psychology—maybe fine for some purposes, but inherently limited, in the negative sense. It ignores the inner sensation that we all know is there.
Now, that’s no reason to suppose that the red sensation is not explainable or reducible. We simply don’t understand the brain well enough yet, so we’ll just have to keep thinking about it and wait until we do.
Did anyone else notice the similarity of Mitchell’s arguments in this post, and the one in his comment to one of my posts? Here he says that there is no color in a purely physical description of a mind, and in his comment to my post he said that there is no utility function in a purely physical description of a mind.
I think this argument actually works better here (with color), because my counter-argument from that earlier comment doesn’t apply here. What I said was that in principle we know how to go from a utility function to a physical description of an object (by creating an AI), and so in principle we also know how to go from a physical description to a utility function.
Here, we don’t know how to go from a color to a physical description of a mind that can experience that color, nor can we tell what color a mind is experiencing or capable of experiencing, given a physical description of it. But I’m not sure we should expect this state of affairs to continue forever.
From the sequence on reductionism:
Hand vs. Fingers
Angry Atoms
Of course, this only answers where the behavior of seeing color is—but then the correspondence between (introspective) behavior and experience is strict, even if the way in which it is achieved may be nontrivial:
Philosophical zombie
How an algorithm feels
Reposting my question from upthread:
I’m not quite sure I understand the problem with blueness as you see it.
Suppose neuroscience were advanced enough that it could manipulate your perception of colors in any arbitrary way just by manipulating your neurons. For example, they could make you perceive blue as you previously perceived red and the other way round, induce synaesthesia, and make you perceive the smell of roses, the taste of salt, the note C, or other things as blue. They could change your perception of color completely, leaving your new perception of colors only as similar to your old one as that one was to your perception of smells, flavors, or sounds. If all of this were true, would that be enough for you to accept that blueness is sufficiently explicable by the behaviour of neurons alone? Or would you argue that while neurons are enough to induce the sensation of blueness, this sensation itself is still something beyond the mere behaviour of neurons?
What would be your reply to:
The redness must be in the monad. The point of postulating monads is to have an ontological correlate to the phenomenological unity of consciousness. Redness is interior to consciousness, which is interior to the thing that is conscious. A theory of monads might have an intermediate stage of incomplete descriptions, in which it is described purely formally, mathematically, and causally, but the objective is to have a theory in which there is something recognizably identical with a total instantaneous conscious experience. This is also the point of reverse monism.
The simple “Our experience is what the world looks like if you happen to be a bunch of neurons firing inside an ape’s head” is a surprisingly strong reply to the questions you raised. If we take neurons that are put together like they are in a brain, and give it sensory input that resembles a blue ball, we can ask that brain what it thinks it’s seeing. It’ll answer “blue ball”, and there’s nothing weird happening here. Here, answering, thinking, or even seeing the blue ball is simply a brain state, a purely physical phenomenon.
The mystery, as far as I can tell, happens when we notice that we could theoretically try to see the world as that brain would see it, and suddenly we are actually experiencing that blue ball and all the qualia that come with it. Now the magic is in a much smaller space: there is no magic happening when the brain claims it sees blue things, and there’s nothing mysterious when we take the brain’s point of view and try to understand how we’d see the world if we were just a brain. So the mystery of consciousness seems to hide in the question: in a world with no observers, can we sensibly talk about what the world would be like if seen from a non-sentient thing’s perspective? If we can, the qualia seem to be in every aspect identical to that perspective.
So what’s the ontology of perspective? I have no idea, but perspective seems to go hand in hand with something physical existing at all, so we could be strictly materialistic while still acknowledging the existence of qualia.
Color is in the wavelength of the photon. Blue is a label we use to identify a particular frequency range within the electromagnetic spectrum.
This is a bit of a simplification, actually… some of the colors we can see are not a single wavelength. That’s why a rainbow doesn’t contain every color we can see.
Like brown, for example?
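Right, brown is a good example. To make it concrete, here is a minimal sketch (wavelength ranges approximate, boundaries and names hypothetical) of the wavelength-to-name lookup that the “color is in the wavelength” view suggests. No single wavelength ever maps to brown or magenta: magenta takes a mixture of the two ends of the spectrum, and brown is roughly a dark orange judged relative to its surround, which is why neither appears in a rainbow.

```python
# Approximate spectral ranges in nanometres; the exact boundaries are
# conventional and vary between sources.
SPECTRAL_NAMES = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def name_for_wavelength(nm):
    for lo, hi, name in SPECTRAL_NAMES:
        if lo <= nm < hi:
            return name
    return "outside the visible range"

print(name_for_wavelength(470))  # blue
print(name_for_wavelength(640))  # red
# No value of nm makes this return "brown" or "magenta": those colors
# are not points on the spectrum, so they cannot live in this table.
```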