We can say no such thing about the mapping between physics and subjective experience.
The wavelength of light maps pretty straightforwardly onto our perception of color. We can trace the activation of cones in our eyes to patterns of neuron firing in the optic nerve to neurons firing in the visual cortex. “Redness” isn’t magic. “Redness” is a particular configuration (or, more properly, a set of configurations) of neurons. The only reason it seems special to you is that you are experiencing the algorithm from the inside. Consciousness is what thinking feels like, not magic.
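As a rough illustration of the kind of mapping being described here, consider a toy sketch: the cone peak wavelengths below are only approximate, and the Gaussian response curves and the “reddish” thresholds are simplifying assumptions of the illustration, not real colorimetry.

```python
import math

# Toy model only: real cone fundamentals are not Gaussian and the peaks below
# are approximate. The point is just that "wavelength -> pattern of activations"
# is an ordinary, computable mapping.
CONE_PEAKS_NM = {"S": 445.0, "M": 535.0, "L": 565.0}
CONE_WIDTH_NM = 50.0  # assumed common width of the response curves

def cone_responses(wavelength_nm):
    """Map a single wavelength to a crude (S, M, L) activation pattern."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / CONE_WIDTH_NM) ** 2)
        for cone, peak in CONE_PEAKS_NM.items()
    }

def looks_reddish(wavelength_nm):
    """'Red' here just means 'the L-cone response clearly dominates M and S'."""
    r = cone_responses(wavelength_nm)
    return r["L"] > 1.3 * r["M"] and r["L"] > 2.0 * r["S"]

print(cone_responses(650))  # long-wavelength light: an L-dominated pattern
print(looks_reddish(650))   # True
print(looks_reddish(470))   # False: S/M-dominated, perceived as blue-ish
```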
Sure… I’m with you until you get to the part where some (all?) configurations of matter have experiences from the inside, which nobody can detect or describe, and the only evidence that these “experiences” exist is that people say they can feel them… isn’t this exactly the kind of thinking we ought to dismiss as crazy? But on the other hand, I think I feel experiences too!
You’re making this more mysterious than it needs to be. No matter what our experiences felt like, we’d still call them qualia. No matter how we used our senses to acquire information about the world, we’d still call that process experience.
Are you claiming that any sufficiently complex agent will report a mysterious feeling of consciousness? That can’t be right.
I wouldn’t feel comfortable making that claim until I’d tested it on a couple of non-human agents, and in any case I wouldn’t call it mysterious.
Really all I have is the suspicion that consciousness is much more normal than people tend to think. The only thing I’m confident of is that explaining consciousness won’t require magic or special exceptions to the laws of physics.
What sort of answer, do you think, will people accept as an explanation of consciousness? I ask that because I suspect that however deep our understanding of thought becomes, it will not destroy all the feeling of mystery. Even after we become able to model human brains on computers and discover which parts of the brain are responsible for each exact feeling, I can’t imagine how this knowledge would stop people wondering about qualia, zombies and Chinese rooms.
What sort of answer, do you think, will people accept as an explanation of consciousness? I ask that because I suspect that however deep our understanding of thought becomes, it will not destroy all the feeling of mystery.
I imagine Lord Kelvin felt similarly when he thought of the elan vital. It didn’t work for that, and it didn’t work for a very good reason: your ignorance of the realm of possibilities is not good evidence. An inability to come up with alternatives may be better support for a claim than showing that you have not yet been compelled to admit defeat, but it’s still nearly worthless.
I didn’t mean my question as a Kelvinian declaration that we will never understand. I was only curious whether WrongBot has some more specific idea what sort of answer can destroy the feeling of confusion when thinking about qualia. I am not even sure whether there is a question to be answered.
Right. I apologize, I didn’t read your comment very clearly. The Kelvin case offers some hope, though—after all, the New Age life-is-energy meme is a lot weaker than elan vital was.
I haven’t yet encountered a sufficiently precise definition of qualia (or consciousness, for that matter) to be able to say what exactly the confusion is, much less where it’s coming from or how it can be destroyed. The hard problem of consciousness is a wrong question, and I suspect that for any given untangling of it, the answer will be trivial.
“Redness” is a particular configuration (or, more properly, a set of configurations) of neurons.
You’re missing the basic problem: ‘neurons’ are part of the map, not the territory. The territory is made up of quarks, spacetime and probability amplitudes. What’s the set of configurations of quarks which feels from the inside like thinking or like the color red? How can you be so confident that no magic is involved in this “how it feels from the inside” business, while casually talking about configurations of neurons?
How can you be so confident that no magic is involved in this “how it feels from the inside” business, while casually talking about configurations of neurons?
I usually find Occam’s Razor to be sufficient. You are misapplying reductionism: if consciousness maps to a set of configurations of neurons, and neurons map to quarks, spacetime, and probability amplitudes, then we have no need of mysteriously specific exceptions to physical laws. Indeed, such hypothetical and entirely unsupported exceptions have no explanatory power at all.
What’s the set of configurations of quarks which feels from the inside like thinking or like the color red?
Why, the set of configurations of quarks which describe any member of the set of neurons which feel from the inside like thinking or like the color red, of course.
You’re missing the basic problem: ‘neurons’ are part of the map, not the territory. The territory is made up of quarks, spacetime and probability amplitudes.
No, he’s not. Neurons are part of the territory. They are composed of other parts of the territory which are composed of quarks, spacetime, etc. But that doesn’t make a neuron not part of the territory. Just because something is ontologically reducible doesn’t mean it isn’t part of the territory. It just means that you need to be very careful not to treat it as ontologically fundamental when it isn’t.
Fine, substitute “not ontologically fundamental” for “not part of the territory” if you must.
It just means that you need to be very careful not to treat it as ontologically fundamental when it isn’t.
The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to it, simply because it is foundational enough to you and anyone else with subjective experience. There is a reasonable argument to be made that “the way it feels from the inside” is just as fundamental as the basic physics of how the world works.
This does not imply that the two are necessarily related (for instance, P-zombies or robots can be unconscious yet physically talk about subjective experience). It does mean that Occam’s razor should apply to “the way it feels from the inside”, which tends to weigh against complex explanations like “configurations of neurons” and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.
The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to subjective experience, simply because it is foundational enough to you and anyone else with subjective experience.
Unfortunately, this is close to nonsense. Just because something strikes me as foundational to me doesn’t give me any decent reason for thinking it has any such actually foundational status. Humans suck at introspection. We really, really suck at intuiting out the differences in how we process things unless things are going drastically wrong. For example, it isn’t obvious to most humans that we use different sections of our brains to add and multiply. But, there’s a lot of evidence for this. For example, fMRI scans show different areas lighting up, with areas corresponding to memory lighting up for multiplication and areas corresponding to reasoning lighting up for addition. Similarly, there are stroke victims who only lose the ability to do one or the other operation. And this is but one example of how humans fail. Relying on human feelings to get an idea about how anything in the world, especially our own mind, works is not a good idea.
It does mean that Occam’s razor should apply to “the way it feels from the inside”, which tends to weigh against complex explanations like “configurations of neurons” and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.
I don’t follow this logic at all. I’m not completely sure what you are trying to do here, but it sounds suspiciously like the theistic argument that God is a simple hypothesis. Just because I can posit something as a single, irreducible entity does not make that thing simple. (Also, can you expand on what you mean by a spooky superintelligence running debugging sessions, since I can’t parse this in any coherent way?)
Unfortunately, this is close to nonsense. Just because something strikes me as foundational to me doesn’t give me any decent reason for thinking it has any such actually foundational status.
Small nitpick: I am not talking about what is foundational to the way our world works. I am only making the fairly trite observation that subjective experience/qualia is the only thing we can directly experience; it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.
Humans suck at introspection. We really, really suck at intuiting out the differences in how we process things unless things are going drastically wrong. For example, it isn’t obvious to most humans that we use different sections of our brains to add and multiply.
What this is actually saying is that phenomenology (the stuff we can access by introspection) cannot be directly mapped onto physical areas of the brain of the kind which might get damaged in a stroke. In itself, this is not evidence that humans “suck” at introspection, especially if our consciousness really is a quantum state with $bignum degrees of freedom, rather than a classical system with spatially separate subparts.
it sounds suspiciously like the theistic argument that God is a simple hypothesis.
God is not a simple hypothesis, but “this was affected by an optimization process which cares about X or something like it” is simpler than “this configuration which happens to be near-optimal for X arose by sheer luck”. Which is pretty much what one would have to posit in order to explain our subjective experience of the extremely complicated physical systems we call “brains”. There are other avenues such as the anthropic principle, but ISTM that at some point one would start to run into circularities.
it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.
What else can it depend on? Your original claim was that it has something to do with quantum superpositions, so can you tell how these superpositions are going to explain qualia any better? Seems like you demand that the explanation be a black box without internal structure; this is contrary to what actual explanations are.
this configuration which happens to be near-optimal for X arose by sheer luck
The “naive physicalists” don’t maintain anything like that. Evolution isn’t sheer luck.
so can you tell how these superpositions are going to explain qualia any better? Seems like you demand that the explanation be a black box without internal structure
I’m not trying to explain why qualia occur, just seeking a sensible physical description of them. Given the requirement that qualia should be actually experienced in some sense, a “black box” system which clearly matches these mysterious experiences is better than a complicated classical configuration plus a lengthy description of how this configuration is felt from the inside.
The “naive physicalists” don’t maintain anything like that. Evolution isn’t sheer luck.
Indeed it’s not: it’s an optimization process! But why would evolution care about qualia? In fact, many physicalist philosophers think qualia exist as epiphenomena, and an epiphenomenon cannot be naturally selected for.
I’m not trying to explain why qualia occur, just seeking a sensible physical description of them.
I use description and explanation as synonyms most of the time. A black-box description is not much of a description; it’s rather the lack of one. What information is contained in “qualia work like a black box”, or, in slightly fancier language, “qualia work due to a still-unknown physical mechanism”? These are not descriptions of qualia; the only non-vacuous interpretation of such sentences is “contemporary physics is not going to explain qualia”, which may be true, but is still a statement about our current knowledge, not about qualia.
But why would evolution care about qualia?
Well, you are probably right about that, even if we are getting dangerously close to the philosophical zombies’ realm.
What information is contained in “qualia work like a black box”, or, in slightly fancier language, “qualia work due to a still-unknown physical mechanism”?
Very little, but this is not a real description of qualia, just a sketch proposal which demonstrates a promising avenue of research. A complete description would state what physical system in the brain is responsible for maintaining complex, “black box” quantum states, and perhaps how that physical system interacts with known neural correlates of subjective experiences. Unfortunately, we’re nowhere near that level yet.
even if we are getting dangerously close to the philosophical zombies’ realm.
Dangerously close? Do you fear that P-zombies will infect you with an epiphenomenal virus and cause you to lose your subjective experience?
[J]ust a sketch proposal which demonstrates a promising avenue of research. A complete description would state what physical system in the brain is responsible for maintaining complex, “black box” quantum states [...]
What makes this avenue different from investigation of neuron configurations? New physical laws were never discovered by rejecting the old ones and declaring that they couldn’t possibly work. All discoveries of new physics happened after conducting research within the old paradigm and noticing anomalies. I mean, if there is something strangely quantum going on in the brain, we will not miss it even if we use the conventional approach.
Or, said differently, I still have no idea what light quantumness can shed on the question.
Do you fear that P-zombies will infect you with an epiphenomenal virus and cause you to lose your subjective experience?
I fear talking about things that aren’t connected to observable facts. I fear that I might say a lot of grammatically correct sentences with no actual meaning.
What makes this avenue different from investigation of neuron configurations?
Not much. It’s still neuroscience, but it takes reports of subjective experience a bit more seriously, and tries to explain them by using existing physics, rather than treating them as meaningless or as magical and unexplainable.
I fear talking about things that aren’t connected to observable facts. I fear that I might say a lot of grammatically correct sentences with no actual meaning.
Look, it’s not that complicated. I’m not the only person who talks about the Cartesian theater and claims that we can somehow feel brain algorithms from the inside. If subjective experience is not an observable fact to you, then your psychology is radically different from that of many other people.
I should have written objective observable facts or something like that. I can observe that I am not a P-zombie; however, the beauty of the whole P-zombie business is that such an observation is, sort of, insufficient. I would need to observe whether you are a P-zombie, and that I can’t.
It is perhaps more economical and Occam-razorish for me to expect that other people are not P-zombies either, but even if they were zombies, I would have no way to realise it, and this renders the zombie question quite uninteresting.
Small nitpick: I am not talking about what is foundational to the way our world works. I am only making the fairly trite observation that subjective experience/qualia is the only thing we can directly experience; it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.
Do you question the consensus that you see using your eyes? Because the eye is a blatantly complicated mechanism directly in the middle of one of the direct experiences of the world you stake your theory on.
I’m not questioning the fact that complicated mechanisms are involved in creating your subjective experience; I question the physical description of that subjective experience as an incredibly complicated configuration in the brain. If your qualia are at all real in some sense, they should correspond to something far simpler than that on Occam’s Razor grounds. Alternatively, you might just be a P-zombie. But then you’d have serious problems experiencing how your brain feels from the inside, although your brain would definitely be talking about its internal experiences.
I’m not questioning the fact that complicated mechanisms are involved in creating your subjective experience;
Why aren’t you? You just said that “[qualia] should correspond to something far simpler than that”. If a (say) visual quale is simple, then why does the human system need a complicated mechanism to capture large numbers of photons such that they form a coherent image on a surface coated with photosensitive neurons, which are wired so as to cause large-scale effects on other parts of the neural (and glial) system of the brain, starting with the visual cortex and spreading from there … to cause something simple? Light was simple to start with! If you expect things to be simple at the Cartesian theater, the visual system moves the wrong way.
Light is simple, but evolved organisms care very little about the fundamental qualities of light. They care a lot about running efficient computations using various inputs, including the excitation of photosensitive neurons. This is probably why the Cartesian theater feels very much like computation on high-level inputs and outputs, rather than objectively fundamental things such as wavelengths of light. And the computations which transform low-level data like excitation of sensory neurons into high-level inputs are probably unconscious because they are qualitatively different from conscious computation.
I would expect optimization for efficiency to be something evolution does—but I am compelled to note that I mentioned “the Cartesian theater” as a reference to Daniel Dennett’s Consciousness Explained, where he strenuously refuted the idea of the Cartesian theater. By Dennett’s argument—and even when Consciousness Explained came out, he had a lot of research data to work from—the collocation of all sensory data in a single channel to run past some homunculus recording our conscious experience is unlikely. After all, there already is a data-processing entity right there to collect all the sensory data—that’s the entire brain. So within the brain, it should not be surprising that different conscious experiences are saved to memory from different parts. Particularly since the brain is patently a parallel computer anyway.
Daniel Dennett’s “refutation” of the Cartesian theater has been widely criticized. Basically, he relies on perceptual illusions such as discrete motion being perceived as continuous, arguing that there should be a fact of the matter as to whether “the motion in the Cartesian theater” is continuous or not. But phenomenology is far simpler (or more complicated) than that: the fact that we perceive the quale of continuous_motion does not imply that a homunculus somewhere is seeing the object in an intermediate position at each given moment in time. It is a strawman argument.
Before I respond: are we actually getting anywhere in this discussion? I have this sinking feeling that I’m asking the wrong questions.
There is a reasonable argument to be made that “the way it feels from the inside” is just as fundamental as the basic physics of how the world works.
Well, what is it, then?
The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to subjective experience, simply because it is foundational enough to you and anyone else with subjective experience.
Ahhhh, I see now. Subjective experience must be ontologically foundational because it feels foundational, subjectively. This seems oddly… circular.
It does mean that Occam’s razor should apply to “the way it feels from the inside”, which tends to weigh against complex explanations like “configurations of neurons” and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.
Configurations of neurons are not complex. They are complicated, but they can still be explained by the same physics as everything else in the world. You are proposing a more complex universe. Or possibly a god. They are equally implausible without supporting evidence.
Ahhhh, I see now. Subjective experience must be ontologically foundational because it feels foundational, subjectively. This seems oddly… circular.
Feel free to run garbage collection on that circularity. You’ll find out what it feels like to subjectively vanish in a puff of logic.
You are proposing a more complex universe.
Not really, since both subjective experience and quantum mechanics are part of our universe already. Perhaps one could say that I’m proposing more complicated brains, but that adds little or nothing to the overall complexity budget given what we know about quantum biology, biophysics, evolution, etc.
Not really, since both subjective experience and quantum mechanics are part of our universe already.
No, you are proposing a more complicated universe. Quantum mechanical systems can be simulated on a classical computer given a source of randomness. The only caveat is that, if certain complexity-theory conjectures are true, it actually takes more time or more memory for a classical system to simulate such runs than a quantum system would need. If the complexity hierarchy exhibits a partial collapse, with say BQP being equal to P, then even this would in some sense not be true, and quantum computers would just be classical machines with a source of random bits. Now, most computer scientists don’t believe that, but the thrust of this argument only requires the fact that classical machines with randomness can simulate quantum machines given extra time and space. Since that is the case, asserting that quantum mechanics has any chance of causing things like qualia and consciousness would require that there are fundamental gaps in our understanding of quantum mechanics. It would also likely violate many forms of the Church-Turing thesis. So you’d need basic failings in our understanding of both QM and theoretical computer science for this sort of approach to even have a chance of working.
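To make the simulation claim concrete, here is a minimal classical state-vector sketch (using numpy, which is my assumption, not anything from the thread). It reproduces the measurement statistics of a small entangled state with nothing but linear algebra and classical randomness, at the cost of storing 2^n amplitudes for n qubits.

```python
import numpy as np

# Minimal classical (state-vector) simulation of a 2-qubit circuit.
# Nothing quantum is needed to reproduce the measurement statistics;
# the cost is memory: n qubits require 2**n complex amplitudes.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4, dtype=complex)
state[0] = 1.0                                  # start in |00>

state = np.kron(H, I2) @ state                  # Hadamard on the first qubit
state = CNOT @ state                            # entangle: (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2
print(probs)                                    # ~[0.5, 0, 0, 0.5]

# Measurement outcomes need nothing but a classical source of randomness.
rng = np.random.default_rng(0)
print(rng.choice(["00", "01", "10", "11"], size=10, p=probs))
```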
Quantum mechanical systems can be simulated on a classical computer given a source of randomness.
This implies that unconscious classical systems can simulate a conscious being. But such a simulation of consciousness would not involve the systems in our physical world which can actually be “felt from the inside”. In this theory, qualia and consciousness are not caused by quantum mechanics; they are what some extremely complex quantum states feel like.
The only caveat is that, if certain complexity-theory conjectures are true, it actually takes more time or more memory
If quantum algorithms are at all useful, this is enough for evolution to favor quantum computation over classical.
This implies that unconscious classical systems can simulate a conscious being. But such a simulation of consciousness would not involve the systems in our physical world which can actually be “felt from the inside”. Qualia and consciousness are not caused by quantum mechanics, they are what some extremely complex quantum states feel like.
At this point how is this claim any different than claiming that these are classical systems and that qualia and consciousness are what those algorithms feel like?
If quantum algorithms are at all useful, this is enough for evolution to favor quantum computation over classical.
That’s actually the best argument I’ve heard for supposing that there’s a quantum mechanical aspect to our processing. Thank you for bringing it to my attention. It does make a QM aspect more plausible. However, it is still a very weak argument, since a) evolution would only do this if it had an easy way of keeping things in coherence that didn’t take up too many resources, and b) it seems unlikely that there’s a substantive evolutionary advantage to any form of computational speedup for processes which we needed to do in the wild. I don’t think, for example, that humans needed to factor large integers in our hunter-gatherer societies. This does lead to the idea of deliberately evolving beings that actually use quantum mechanics in their thought processes, by selecting for ones that are good at algorithms that do have speedups in a QM system.
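For a sense of what a speedup in a QM system can mean concretely, here is a back-of-envelope comparison for unstructured search, the standard Grover example; whether any evolutionarily relevant computation has this structure is exactly what is in dispute here.

```python
import math

# Rough query counts for unstructured search over N items: a classical search
# needs on the order of N checks, while Grover's algorithm needs roughly
# (pi/4) * sqrt(N) quantum queries. Order-of-magnitude comparison only.
for n in (10**3, 10**6, 10**9):
    grover = math.ceil((math.pi / 4) * math.sqrt(n))
    print(f"N={n:>13,}  classical ~ {n:>13,}  Grover ~ {grover:>8,}")
```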
At this point how is this claim any different than claiming that these are classical systems and that qualia and consciousness are what those algorithms feel like?
Quantum systems have much nicer properties from this point of view. An internally entangled quantum state can be an ontologically basic entity while still possessing a rich internal structure, in a way that has no direct equivalents in classical physics.
evolution would only do this if it had an easy way of keeping things in coherence that didn’t take up too many resources
Models of quantum computation are quite variable in how resistant they are to decoherence. Topological quantum computing is much more resistant to errors than models based on ordinary quantum particles.
It seems unlikely that there’s a substantive evolutionary advantage to any form of computational speedup for processes which we needed to do in the wild.
Why wouldn’t there be? Intelligent processing clearly confers some evolutionary advantage, and there have been many proposals for artificial general intelligence (AGI) using quantum computation.
Quantum systems have much nicer properties from this point of view. An internally entangled quantum state can be an ontologically basic entity while still possessing a rich internal structure, in a way that has no direct equivalents in classical physics.
That makes some sense, although I don’t see why a classical simulation of the same wouldn’t feel identical.
Models of quantum computation are quite variable in how resistant they are to decoherence. Topological quantum computing is much more resistant to errors than models based on ordinary quantum particles.
This may be true in the same sense that sending a probe to Betelgeuse is easier than sending a probe to the Andromeda galaxy. You are still talking about fantastically difficult things to keep in coherence. We’re still talking about systems kept below at most 5 kelvin or so (being generous). It is noteworthy that so far we’ve actually had far more success implementing standard quantum computers than we have with topological quantum computers.
Why wouldn’t there be? Intelligent processing clearly confers some evolutionary advantage, and there have been many proposals for artificial general intelligence (AGI) using quantum computation.
There’s no evidence of any process we associate with “intelligence” being sped up or made more efficient by quantum computation. I’d also be very interested in seeing citations for the claim that there are “many proposals for artificial general intelligence (AGI) using quantum computation.”
What’s the set of configurations of quarks which feels from the inside like thinking or like the color red?
Do you demand the exact wave function?
How can you be so confident that no magic is involved in this “how it feels from the inside” business, while casually talking about configurations of neurons?
I was never much comfortable with the “consciousness is how thinking feels from the inside” explanation, since it hardly explains anything. However, the alternatives are even less explanatory. Unless a hypothesis predicts something testable, it is useless. The position that no non-standard physics is involved is a kind of default, held whenever there are no clear reasons to think otherwise; that’s all.