and regulation of blood flow: all important, but mostly things only a biologist could love.
I’d argue that people who like designing computer architectures should be interested in this as well.
Ignoring glia seems to me to have been a (mis-)application of assuming the simplest explanation consistent with the facts, when people weren’t in a position to fully explain the brain. I.e., people knew that you needed neurons to explain brain function, but because they couldn’t predict how the brain functioned, they didn’t know that a neural explanation was insufficient.
It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven’t been convincing).
It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven’t been convincing).
I’d agree—I think the reasonable position at this point is to say that we shouldn’t privilege the hypothesis. Most of the argumentation along those lines that I have seen cited seems to be permissive, rather than compelling, towards the claim.
I’d agree—I think the reasonable position at this point is to say that we shouldn’t privilege the hypothesis. Most of the argumentation along those lines that I have seen cited seems to be permissive, rather than compelling, towards the claim.
But the fact that we directly experience phenomenal qualia (or at least, you do) is compelling evidence that some fairly exotic physics is happening in the brain. Mesoscopic quantum superposition is actually the least weird hypothesis in this respect. I think that people who dismiss this problem are biased to think that biology should be simple; they don’t understand that evolution can come up with incredibly clever stuff. It’s the same mistake that leads people to dismiss the possible role of glial cells in cognition.
Mesoscopic quantum superposition is actually the least weird hypothesis in this respect.
I don’t know about it being the least weird hypothesis, but it certainly isn’t a useful one. I have yet to hear anything resembling a coherent explanation of what consciousness has to do with superposition. And I have even less of an easy time seeing how the presence of qualia has anything to do with this. (This may be connected to the fact that I don’t see qualia as a big deal needing some deep explanation.)
And I have even less of an easy time seeing how the presence of qualia has anything to do with this.
The real issue is not the “presence of qualia”, it’s what qualia should map to in the underlying physics. Saying that e.g. the color blue is an incredibly complex pattern in the classical physical system corresponding to the human visual cortex—which actually differs physically from human to human—is just not a tenable position.
The real issue is not the “presence of qualia”, it’s what qualia should map to in the underlying physics. Saying that e.g. the color blue is an incredibly complex pattern in the classical physical system corresponding to the human visual cortex—which actually differs physically from human to human—is just not a tenable position.
So how is this at all distinct from the fact that words like “good” or “puppy” have complicated mappings onto our brain structure? These present just as much of a difficulty as qualia. And just because we can’t precisely map those now doesn’t make those positions untenable. Why posit new physical laws for a set of phenomena that we understand better and better with no sign of stopping? If we ran into some apparent wall in understanding how these function, then after a while it might make sense to look at new physics that might explain things. But as it is now, we’ve been making steady progress on these issues for about a hundred years. We can now use electromagnetic stimulation to make people experience specific classes of feelings, and we can use electrodes to trigger responses more directly. We can see emotions and sensations actively in the brain by fMRI and other methods. There’s no need for spooky suppositions.
So how is this at all distinct from the fact that words like “good” or “puppy” have complicated mappings onto our brain structure?
It’s different because we know for certain that the mapping of words such as “good” and “puppy” onto our basic phenomenology is culturally dependent, learned throughout our childhood, etc. We can say no such thing about the mapping between physics and subjective experience. And in the former case, some drastic simplifications can actually be made: see e.g. the work of George Lakoff and other cognitive linguists about the linkages between “abstract” semantics and basic phenomenology.
And just because we can’t precisely map those now doesn’t make those positions untenable.
It’s not because we can’t precisely map them; it’s because the possibility of there even being a mapping is so weird and complicated that spooky, exotic physics looks good by comparison. (Basically you would be forced to argue that the mapping between classical physics and subjective perceptions was picked by an optimizing agent, which is far more spooky.)
We can see emotions and sensations actively in the brain by fMRI and other methods.
Bzzzzzt. We can see macroscopic correlates of emotions and sensations in an fMRI. This does not mean that the emotion and sensation are the same thing as the change in the fMRI signal. (In fact, all fMRI does is measure changes in blood flow.)
It’s different because we know for certain that the mapping of words such as “good” and “puppy” onto our basic phenomenology is culturally dependent, learned throughout our childhood, etc. We can say no such thing about the mapping between physics and subjective experience.
I’m missing something here. How does the fact that this correlation isn’t as culturally dependent imply something spooky is going on?
It’s not because we can’t precisely map them; it’s because the possibility of there even being a mapping is so weird and complicated that spooky, exotic physics looks good by comparison.
Again, I don’t follow your logic. What would be weird and complicated about such a mapping?
Basically you would be forced to argue that the mapping between classical physics and subjective perceptions was picked by an optimizing agent, which is far more spooky.
Why? What need is there for an optimizing agent? What do you think this optimizing agent would have done? I’m not sure what you are trying to say here, but it almost seems to be some sort of argument that if one wants to reject theism one needs spooky physics. I don’t know how to respond to that.
We can see macroscopic correlates of emotions and sensations in an fMRI. This does not mean that the emotion and sensation are the same thing as the change in the fMRI signal. (In fact, all fMRI does is measure changes in blood flow.)
You might notice that I said “fMRI and other methods.” We can for example, use deep brain stimulation to directly stimulate emotions (this is in fact a cutting edge treatment for people with severe depression and is being investigated for use in treating other illnesses). We can see which parts of the brain are being used for what emotions and sensations and we can stimulate those regions to duplicate those emotions and sensations.
More generally, it seems like you may be confusing the map with the territory. A blank or poorly drawn area of a map doesn’t tell us about the territory. It is true that repeated failure to get a good map of an area of territory can tell us that our mapping method has a problem or that another section of our map has issues. That’s essentially what happened with Copernicus and Kepler; the repeated failures to get accurate models of the heavens forced a redrawing of fundamental sections of the map. But in order to justify that, one needs to have repeated problems over a long period of time with trying to get a good map of an area. If your map keeps getting more and more precise, that justification isn’t available. Finally, a question if you don’t mind: what hypothetical evidence would convince you that qualia can be explained by our current laws of physics?
We can for example, use deep brain stimulation to directly stimulate emotions
We can stimulate emotions, yet we are nowhere near a satisfactory explanation of why each emotion has the psychological effects it does. It’s quite clear that we can only play with the brain at a very coarse level.
Finally, a question if you don’t mind: what hypothetical evidence would convince you that qualia can be explained by our current laws of physics?
Reliable brain simulation would be solid evidence here. Others have pointed out that we probably won’t be able to revive cryopreserved patients without a thorough understanding of brain physics.
We can stimulate emotions, yet we are nowhere near a satisfactory explanation of why each emotion has the psychological effects it does. It’s quite clear that we can only play with the brain at a very coarse level.
Sure, but who cares? The point is that our ability to do this has been steadily improving and there’s no indication that any part of our coarse play has turned up any evidence of any special physics at work.
We can say no such thing about the mapping between physics and subjective experience.
The wavelength of light maps pretty straightforwardly onto our perception of color. We can trace the activation of cones in our eyes to patterns of neuron firing in the optic nerve to neurons firing in the visual cortex. “Redness” isn’t magic. “Redness” is a particular configuration (or, more properly, a set of configurations) of neurons. The only reason it seems special to you is because you are experiencing the algorithm from the inside. Consciousness is what thinking feels like, not magic.
Sure… I’m with you until you get to the part where some (all?) configurations of matter have experiences from the inside, which nobody can detect or describe, and the only evidence that these “experiences” exist is that people say they can feel them… isn’t this exactly the kind of thinking we ought to dismiss as crazy? But on the other hand, I think I feel experiences too!
You’re making this more mysterious than it needs to be. No matter what our experiences felt like, we’d still call them qualia. No matter how we used our senses to acquire information about the world, we’d still call that process experience.
I wouldn’t feel comfortable making that claim until I’d tested it on a couple of non-human agents, and in any case I wouldn’t call it mysterious.
Really all I have is the suspicion that consciousness is much more normal than people tend to think. The only thing I’m confident of is that explaining consciousness won’t require magic or special exceptions to the laws of physics.
What sort of answer, do you think, will people accept as an explanation of consciousness? I ask that because I suspect that however deep our understanding of thought becomes, it will not destroy the feeling of mystery. Even after we become able to model human brains on computers, and after we discover which parts of the brain are responsible for each exact feeling, I can’t imagine how this knowledge would stop people from wondering about qualia, zombies and Chinese rooms.
What sort of answer, do you think, will people accept as an explanation of consciousness? I ask that because I suspect that however deep our understanding of thought becomes, it will not destroy the feeling of mystery.
I didn’t mean my question as a Kelvinian declaration that we will never understand. I was only curious whether WrongBot has some more specific idea of what sort of answer could destroy the feeling of confusion when thinking about qualia. I am not even sure whether there is a question to be answered.
Right. I apologize, I didn’t read your comment very clearly. The Kelvin case offers some hope, though—after all, the New Age life-is-energy meme is a lot weaker than elan vital was.
I haven’t yet encountered a sufficiently precise definition of qualia (or consciousness, for that matter) to be able to say what exactly the confusion is, much less where it’s coming from or how it can be destroyed. The hard problem of consciousness is a wrong question, and I suspect that for any given untangling of it, the answer will be trivial.
“Redness” is a particular configuration (or, more properly, a set of configurations) of neurons.
You’re missing the basic problem: ‘neurons’ are part of the map, not the territory. The territory is made up of quarks, spacetime and probability amplitudes. What’s the set of configurations of quarks which feels from the inside like thinking or like the color red? How can you be so confident that no magic is involved in this “how it feels from the inside” business, while casually talking about configurations of neurons?
How can you be so confident that no magic is involved in this “how it feels from the inside” business, while casually talking about configurations of neurons?
I usually find Occam’s Razor to be sufficient. You are misapplying reductionism: if consciousness maps to a set of configurations of neurons, and neurons map to quarks, spacetime, and probability amplitudes, then we have no need of mysteriously specific exceptions to physical laws. Indeed, such hypothetical and entirely unsupported exceptions have no explanatory power at all.
What’s the set of configurations of quarks which feels from the inside like thinking or like the color red?
Why, the set of configurations of quarks which describe any member of the set of neurons which feel from the inside like thinking or like the color red, of course.
You’re missing the basic problem: ‘neurons’ are part of the map, not the territory. The territory is made up of quarks, spacetime and probability amplitudes.
No, he’s not. Neurons are part of the territory. They are composed of other parts of the territory which are composed of quarks, spacetime, etc. But that doesn’t make a neuron not part of the territory. Just because something is ontologically reducible doesn’t mean it isn’t part of the territory. It just means that you need to be very careful not to treat it as ontologically fundamental when it isn’t.
Fine, substitute “not ontologically fundamental” for “not part of the territory” if you must.
It just means that you need to be very careful not to treat it as ontologically fundamental when it isn’t.
The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to it, simply because it is foundational enough to you and anyone else with subjective experience. There is a reasonable argument to be made that “the way it feels from the inside” is just as fundamental as the basic physics of how the world works.
This does not imply that the two are necessarily related (for instance, P-zombies or robots can be unconscious yet physically talk about subjective experience). It does mean that Occam’s razor should apply to “the way it feels from the inside”, which tends to weigh against complex explanations like “configurations of neurons” and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.
The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to subjective experience, simply because it is foundational enough to you and anyone else with subjective experience.
Unfortunately, this is close to nonsense. Just because something strikes me as foundational to me doesn’t give me any decent reason for thinking it has any such actually foundational status. Humans suck at introspection. We really, really suck at intuiting the differences in how we process things unless things are going drastically wrong. For example, it isn’t obvious to most humans that we use different sections of our brains to add and multiply. But there’s a lot of evidence for this. For example, fMRI scans show different areas lighting up, with areas corresponding to memory lighting up for multiplication and areas corresponding to reasoning lighting up for addition. Similarly, there are stroke victims who lose the ability to do only one or the other operation. And this is but one example of how humans fail. Relying on human feelings to get an idea about how anything in the world, especially our own mind, works is not a good idea.
It does mean that Occam’s razor should apply to “the way it feels from the inside”, which tends to weigh against complex explanations like “configurations of neurons” and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.
I don’t follow this logic at all. I’m not completely sure what you are trying to do here, but it sounds suspiciously like the theistic argument that God is a simple hypothesis. Just because I can posit something as a single, irreducible entity does not make that thing simple. (Also, can you expand on what you mean by a spooky superintelligence running debugging sessions, since I can’t parse this in any coherent way.)
Unfortunately, this is close to nonsense. Just because something strikes me as foundational to me doesn’t give me any decent reason for thinking it has any such actually foundational status.
Small nitpick: I am not talking about what is foundational to the way our world works. I am only making the fairly trite observation that subjective experience/qualia is the only thing we can directly experience; it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.
Humans suck at introspection. We really, really suck at intuiting the differences in how we process things unless things are going drastically wrong. For example, it isn’t obvious to most humans that we use different sections of our brains to add and multiply.
What this is actually saying is that phenomenology (the stuff we can access by introspection) cannot map directly onto physical areas of the brain of the kind which might get damaged in a stroke. In itself, this is not evidence that humans “suck” at introspection; especially if our consciousness really is a quantum state with $bignum degrees of freedom, rather than a classical system with spatially separate subparts.
it sounds suspiciously like the theistic argument that God is a simple hypothesis.
God is not a simple hypothesis, but “this was affected by an optimization process which cares about X or something like it” is simpler than “this configuration which happens to be near-optimal for X arose by sheer luck”. Which is pretty much what one would have to posit in order to explain our subjective experience of the extremely complicated physical systems we call “brains”. There are other avenues such as the anthropic principle, but ISTM that at some point one would start to run into circularities.
it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.
What else can it depend on? Your original claim was that it has something to do with quantum superpositions, so can you tell how these superpositions are going to explain qualia any better? It seems like you demand the explanation be a black box without internal structure; this is contrary to what actual explanations are.
this configuration which happens to be near-optimal for X arose by sheer luck
The “naive physicalists” don’t maintain anything like that. Evolution isn’t sheer luck.
so can you tell how these superpositions are going to explain qualia any better? It seems like you demand the explanation be a black box without internal structure
I’m not trying to explain why qualia occur, just seeking a sensible physical description of them. Given the requirement that qualia should be actually experienced in some sense, a “black box” system which clearly matches these mysterious experiences is better than a complicated classical configuration plus a lengthy description of how this configuration is felt from the inside.
The “naive physicalists” don’t maintain anything like that. Evolution isn’t sheer luck.
Indeed it’s not: it’s an optimization process! But why would evolution care about qualia? In fact, many physicalist philosophers think qualia exist as epiphenomena, and an epiphenomenon cannot be naturally selected for.
I’m not trying to explain why qualia occur, just seeking a sensible physical description of them.
I use description and explanation as synonyms most of the time. A black-box description is not much of a description; it’s rather the lack of one. What information is contained in “qualia work like a black box”, or in slightly fancier language, “qualia work due to a still unknown physical mechanism”? These are not descriptions of qualia; the only non-vacuous interpretation of such sentences is “contemporary physics is not going to explain qualia”, which may be true, but is still a statement about our current knowledge, not about qualia.
But why would evolution care about qualia?
Well, you are probably right in that, even if we are getting dangerously close to the philosophical zombies’ realm.
What information is contained in “qualia work like a black box”, or in slightly fancier language, “qualia work due to a still unknown physical mechanism”?
Very little, but this is not a real description of qualia, just a sketch proposal which demonstrates a promising avenue of research. A complete description would state what physical system in the brain is responsible for maintaining complex, “black box” quantum states, and perhaps how that physical system interacts with known neural correlates of subjective experiences. Unfortunately, we’re nowhere near that level yet.
even if we are getting dangerously close to the philosophical zombies’ realm.
Dangerously close? Do you fear that P-zombies will infect you with an epiphenomenal virus and cause you to lose your subjective experience?
[J]ust a sketch proposal which demonstrates a promising avenue of research. A complete description would state what physical system in the brain is responsible for maintaining complex, “black box” quantum states [...]
What makes this avenue different from investigation of neuron configurations? New physical laws were never discovered by rejecting the old ones and saying that they couldn’t possibly work. All discoveries of new physics happened after conducting research using the old paradigm and noticing anomalies. I mean, if there is something strangely quantum going on in the brain, we will not miss it even if we use the conventional approach.
Or said differently, I still have no idea what light quantum effects could shed on the question.
Do you fear that P-zombies will infect you with an epiphenomenal virus and cause you to lose your subjective experience?
I fear talking about things that aren’t connected to observable facts. I fear that I might say a lot of grammatically correct sentences with no actual meaning.
What makes this avenue different from investigation of neuron configurations?
Not much. It’s still neuroscience, but it takes reports of subjective experience a bit more seriously, and tries to explain them by using existing physics, rather than treating them as meaningless or as magical and unexplainable.
I fear talking about things that aren’t connected to observable facts. I fear that I might say a lot of grammatically correct sentences with no actual meaning.
Look, it’s not that complicated. I’m not the only person who talks about the Cartesian theater and claims that we can somehow feel brain algorithms from the inside. If subjective experience is not an observable fact to you, then your psychology is radically different from that of many other people.
I should have written objective observable facts or something like that. I can observe that I am not a P-zombie, however the beauty of the whole P-zombie business is that such observation is, sort of, insufficient. I would need to observe whether you are a P-zombie, and that I can’t.
It is perhaps more economical and Occam-razorish for me to expect that other people are no P-zombies either, but even if they were zombies, I would have no way to realise that, and this renders the zombie question quite uninteresting.
Small nitpick: I am not talking about what is foundational to the way our world works. I am only making the fairly trite observation that subjective experience/qualia is the only thing we can directly experience; it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.
Do you question the consensus that you see using your eyes? Because the eye is a blatantly complicated mechanism directly in the middle of one of the direct experiences of the world you stake your theory on.
I’m not questioning the fact that complicated mechanisms are involved in creating your subjective experience; I question the physical description of that subjective experience as an incredibly complicated configuration in the brain. If your qualia are at all real in some sense, they should correspond to something far simpler than that on Occam’s Razor grounds. Alternately, you might just be a P-zombie. But then you’d have serious problems experiencing how your brain feels from the inside, although your brain would definitely be talking about its internal experiences.
I’m not questioning the fact that complicated mechanisms are involved in creating your subjective experience;
Why aren’t you? You just said that “[qualia] should correspond to something far simpler than that”. If a (say) visual quale is simple, then why does the human system need a complicated mechanism to capture large numbers of photons such that they form a coherent image on a surface coated with photosensitive neurons, which are wired so as to cause large-scale effects on other parts of the neural (and glial) system of the brain, starting with the visual cortex and spreading from there … to cause something simple? Light was simple to start with! If you expect things to be simple at the Cartesian theater, the visual system moves the wrong way.
Light is simple, but evolved organisms care very little about the fundamental qualities of light. They care a lot about running efficient computations using various inputs, including the excitation of photosensitive neurons. This is probably why the Cartesian theater feels very much like computation on high-level inputs and outputs, rather than objectively fundamental things such as wavelengths of light. And the computations which transform low-level data like the excitation of sensory neurons into high-level inputs are probably unconscious because they are qualitatively different from conscious computation.
I would expect optimization for efficiency to be something evolution does—but I am compelled to note that I mentioned “the Cartesian theater” as a reference to Daniel Dennett’s Consciousness Explained, where he strenuously refuted the idea of the Cartesian theater. By Dennett’s argument—and even when Consciousness Explained came out, he had a lot of research data to work from—the collocation of all sensory data in a single channel to run past some homunculus recording our conscious experience is unlikely. After all, there already is a data-processing entity right there to collect all the sensory data—that’s the entire brain. So within the brain, it should not be surprising that different conscious experiences are saved to memory from different parts. Particularly since the brain is patently a parallel computer anyway.
Daniel Dennett’s “refutation” of the Cartesian theater has been widely criticized. Basically, he relies on perceptual illusions such as discrete motion being perceived as continuous, arguing that there should be a fact of the matter as to whether “the motion in the Cartesian theater” is continuous or not. But phenomenology is far simpler (or more complicated) than that: the fact that we perceive the quale of continuous_motion does not imply that a homunculus somewhere is seeing the object in an intermediate position at each given moment in time. It is a strawman argument.
There is a reasonable argument to be made that “the way it feels from the inside” is just as fundamental as the basic physics of how the world works.
Well, what is it, then?
The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to subjective experience, simply because it is foundational enough to you and anyone else with subjective experience.
Ahhhh, I see now. Subjective experience must be ontologically foundational because it feels foundational, subjectively. This seems oddly… circular.
It does mean that Occam’s razor should apply to “the way it feels from the inside”, which tends to weigh against complex explanations like “configurations of neurons” and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.
Configurations of neurons are not complex. They are complicated, but they can still be explained by the same physics as everything else in the world. You are proposing a more complex universe. Or possibly a god. They are equally implausible without supporting evidence.
Ahhhh, I see now. Subjective experience must be ontologically foundational because it feels foundational, subjectively. This seems oddly… circular.
Feel free to run garbage collection on that circularity. You’ll find out what it feels like to subjectively vanish in a puff of logic.
You are proposing a more complex universe.
Not really, since both subjective experience and quantum mechanics are part of our universe already. Perhaps one could say that I’m proposing more complicated brains, but that adds little or nothing to the overall complexity budget given what we know about quantum biology, biophysics, evolution etc.
Not really, since both subjective experience and quantum mechanics are part of our universe already.
No, you are proposing a more complicated universe. Quantum mechanical systems can be simulated on a classical computer given a source of randomness. The only caveat is that if certain compsci conjectures are true, then it actually takes more time or more memory for a classical system to simulate these runs than a quantum system would need. If the complexity hierarchy exhibits partial collapse, with say BQP being equal to P, then even this would in some sense not be true, and we’d have quantum computers as just classical machines with a source of random bits. Now, most comp sci people don’t believe that, but the thrust of this argument only requires the fact that classical machines with randomness can simulate quantum machines given extra time and space. Since that is the case, asserting that quantum mechanics has any chance of causing things like qualia and consciousness would require fundamental gaps in our understanding of quantum mechanics. It would also likely violate many forms of the Church-Turing thesis. So you’d have to posit basic failings in both our understanding of QM and theoretical comp sci for this sort of approach to even have a chance of working.
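The classical-simulation point is easy to illustrate with a toy sketch (the function names here are my own, not from any library): a brute-force state-vector simulation of a single qubit, using nothing but complex arithmetic plus a classical source of random bits. The exponential cost that makes this approach expensive only appears as the number of qubits grows; for one qubit it is trivial.

```python
import math
import random

# State of one qubit: two complex amplitudes [a0, a1] with |a0|^2 + |a1|^2 = 1.

def apply_gate(state, gate):
    """Multiply a 2x2 unitary gate matrix into the state vector."""
    a0, a1 = state
    return [gate[0][0] * a0 + gate[0][1] * a1,
            gate[1][0] * a0 + gate[1][1] * a1]

def measure(state, rng):
    """Sample a measurement outcome using classical randomness."""
    p0 = abs(state[0]) ** 2
    return 0 if rng.random() < p0 else 1

# Hadamard gate: sends |0> into an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

rng = random.Random(0)
state = [1 + 0j, 0 + 0j]        # start in |0>
state = apply_gate(state, H)    # now in superposition
counts = [0, 0]
for _ in range(10_000):
    counts[measure(state, rng)] += 1
print(counts)  # roughly [5000, 5000]
```

The simulation is perfectly classical; it just pays a memory cost of 2^n amplitudes for n qubits, which is exactly the overhead the caveat above refers to.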
Quantum mechanical systems can be simulated on a classical computer given a source of randomness.
This implies that unconscious classical systems can simulate a conscious being. But such a simulation of consciousness would not involve the systems in our physical world which can actually be “felt from the inside”. In this theory, qualia and consciousness are not caused by quantum mechanics; they are what some extremely complex quantum states feel like.
The only caveat is that if certain compsci conjectures are true, then it actually takes more time or more memory
If quantum algorithms are at all useful, this is enough for evolution to favor quantum computation over classical.
This implies that unconscious classical systems can simulate a conscious being. But such a simulation of consciousness would not involve the systems in our physical world which can actually be “felt from the inside”. Qualia and consciousness are not caused by quantum mechanics; they are what some extremely complex quantum states feel like.
At this point how is this claim any different than claiming that these are classical systems and that qualia and consciousness are what those algorithms feel like?
If quantum algorithms are at all useful, this is enough for evolution to favor quantum computation over classical.
That’s actually the best argument I’ve heard for supposing that there’s a quantum mechanical aspect to our processing. Thank you for bringing it to my attention; it does make a QM aspect more plausible. However, it is still a very weak argument, since a) evolution would only do this if it had an easy way of keeping things in coherence that didn’t take up too many resources, and b) it seems unlikely that there’s a substantive evolutionary advantage to a computational speedup for any of the processes we needed to do in the wild. I don’t think, for example, that humans needed to factor large integers in our hunter-gatherer societies. This does suggest the idea of deliberately evolving beings that actually use quantum mechanics in their thought processes, by selecting for ones that are good at algorithms that do have speedups on a QM system.
At this point how is this claim any different than claiming that these are classical systems and that qualia and consciousness are what those algorithms feel like?
Quantum systems have much nicer properties from this point of view. An internally entangled quantum state can be an ontologically basic entity while still possessing a rich internal structure, in a way that has no direct equivalents in classical physics.
evolution would only do this if it had an easy way of keeping things in coherence that didn’t take up too many resources
Models of quantum computation are quite variable in how resistant they are to decoherence. Topological quantum computing is much more resistant to errors than models based on ordinary quantum particles.
It seems unlikely that there’s a substantive evolutionary advantage to a computational speedup for any of the processes we needed to do in the wild.
Why wouldn’t there be? Intelligent processing clearly confers some evolutionary advantage, and there have been many proposals for artificial general intelligence (AGI) using quantum computation.
Quantum systems have much nicer properties from this point of view. An internally entangled quantum state can be an ontologically basic entity while still possessing a rich internal structure, in a way that has no direct equivalents in classical physics.
That makes some sense, although I don’t see why a classical simulation of the same wouldn’t feel identical.
Models of quantum computation are quite variable in how resistant they are to decoherence. Topological quantum computing is much more resistant to errors than models based on ordinary quantum particles.
This may be true in the same sense that sending a probe to Betelgeuse is easier than sending a probe to the Andromeda galaxy. You are still talking about fantastically difficult things to keep in coherence. We’re still talking about systems kept below at most 5 kelvin or so (being generous). It is noteworthy that so far we’ve actually had far more success implementing standard quantum computers than we have with topological quantum computers.
Why wouldn’t there be? Intelligent processing clearly confers some evolutionary advantage, and there have been many proposals for artificial general intelligence (AGI) using quantum computation.
There’s no evidence of any process we associate as part of “intelligence” as being sped-up or made more efficient by quantum computation. I’d also be very interested in seeing citations for the claim that there are “many proposals for artificial general intelligence (AGI) using quantum computation.”
What’s the set of configurations of quarks which feels from the inside like thinking or like the color red?
Do you demand the exact wave function?
How can you be so confident that no magic is involved in this “how it feels from the inside” business, while casually talking about configurations of neurons?
I was never very comfortable with the “consciousness is how thinking feels from inside” explanation, since it hardly explains anything. However, the alternatives are even greater non-explanations. Unless a hypothesis predicts something testable, it is useless. The position that no non-standard physics is involved is a kind of default, held whenever there are no clear reasons to think otherwise; that’s all.
It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven’t been convincing).
Considering that quantum physics is Turing-complete (unless it’s nonlinear, etc.), any quantum effects could be reproduced with classical computation. Therefore, the assumption that cognition must involve quantum effects implicitly assumes that quantum physics is nonlinear, or meets one of the various other requirements.
In this light, the first questions to ask of people claiming quantum effects in the brain are: what computation performed in the brain requires essentially infinite loops completed in finite time, and based on what physics experiment do they believe that quantum effects are more than Turing-complete?
I think the brain is probably ultimately computable by a classical computer, and yet quantum computing in the brain might be significant. Here are a couple of the potential problems we’ll have if the brain relies on quantum effects.
1) Difficulty in replacing bits of the brain functionally. If consciousness is some strange transitory gestalt quantum field, then you would need to make a brain prosthesis that had the same electromagnetic properties as a neuron, which might be quite hard.
2) A harder time simulating brains/doing AI: you might have to push back the date you expect Whole Brain Emulations to become available (depending on when we expect quantum computers to be useful).
I’m having trouble parsing your above comment. Are the points labeled 1 and 2 arguments for the presence of quantum computing in the brain or consequences of that belief?
Quantum computing in the brain might be happening, but if we want to understand consciousness, it is irrelevant (unless consciousness is noncomputable, in which case it becomes a claim about quantum physics yet again). It’s as relevant as details about transistors or vacuum tubes are for understanding sorting algorithms.
Naturally when considering brain prostheses or simulating a brain the actual method with which brain computes is relevant.
I merely wished to clarify the difference between consciousness and how it is implemented in the brain. I had no intention of implying that it was part of the discussion. In retrospect, the clarification was not required.
It’s just way too common for the two issues to get mixed up, as can be seen on the various threads.
Thanks for the interesting article.
I’d argue that people who like designing computer architectures should be interested in this as well.
Ignoring glia seems to me to have been a (mis)application of assuming the simplest explanation consistent with the facts, from a time when people weren’t in a position to fully explain the brain. That is, people knew that you needed neurons to explain brain function, but because they couldn’t predict how the brain functioned, they didn’t know that a neural explanation was insufficient.
It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven’t been convincing).
I’d agree—I think the reasonable position at this point is to say that we shouldn’t privilege the hypothesis. Most of the argumentation along those lines that I have seen cited seems to be permissive, rather than compelling, towards the claim.
But the fact that we directly experience phenomenal qualia (or at least, you do) is compelling evidence that some fairly exotic physics is happening in the brain. Mesoscopic quantum superpositions is actually the least weird hypothesis in this respect. I think that people who dismiss this problem are biased to think that biology should be simple; they don’t understand that evolutions can come up with incredibly clever stuff. It’s the same mistake that leads people to dismiss the possible role of glial cells in cognition.
I don’t know about it being the least weird hypothesis, but it certainly isn’t a useful one. I have yet to hear anything resembling a coherent explanation of what consciousness has to do with superposition. And I have an even harder time seeing how the presence of qualia has anything to do with this. (This may be connected to the fact that I don’t see qualia as a big deal needing some deep explanation.)
The real issue is not the “presence of qualia”, it’s what qualia should map to in the underlying physics. Saying that e.g. the color blue is an incredibly complex pattern in the classical physical system corresponding to the human visual cortex—which actually differs physically from human to human—is just not a tenable position.
So how is this at all distinct than the fact that words like “good” or “puppy” have complicated mappings onto our brain structure? These present just as much of a difficulty as qualia. And just because we can’t precisely map those now doesn’t make those positions untenable. Why posit new physical laws for a set of phenomena that we understand better and better with no sign of stopping? If we ran into some apparent wall in understanding how these function then after a while it might make sense to look at new physics that might explain things. But as it is now, we’ve been making steady progress on these issues for about a hundred years. We now can use electromagnetic stimulation to make people experience specific classes of feelings, and we can use electrodes more directly to trigger direct responses. We can see emotions and sensations actively in the brain by fMRI and other methods. There’s no need for spooky suppositions.
It’s different because we know for certain that the mapping of words such as “good” and “puppy” onto our basic phenomenology is culturally dependent, learned throughout our childhood, etc. We can say no such thing about the mapping between physics and subjective experience. And in the former case, some drastic simplifications can actually be made: see e.g. the work of George Lakoff and other cognitive linguists about the linkages between “abstract” semantics and basic phenomenology.
It’s not because we can’t precisely map them; it’s because the possibility of there even being a mapping is so weird and complicated that spooky, exotic physics looks good by comparison. (Basically, you would be forced to argue that the mapping between classical physics and subjective perceptions was picked by an optimizing agent, which is far more spooky.)
Bzzzzzt. We can see macroscopic correlates of emotions and sensations in an fMRI. This does not mean that the emotion and sensation is the same thing as the change in the fMRI. (In fact, all fMRI does is measure changes in blood flow.)
I’m missing something here. How does the fact that this correlation isn’t as culturally dependent imply something spooky is going on?
Again, I don’t follow your logic. What would be weird and complicated about such a mapping?
Why? What need is there for an optimizing agent? What do you think this optimizing agent would have done? I’m not sure what you are trying to say here, but it almost seems to be some sort of argument that if one wants to reject theism one needs spooky physics. I don’t know how to respond to that.
You might notice that I said “fMRI and other methods.” We can for example, use deep brain stimulation to directly stimulate emotions (this is in fact a cutting edge treatment for people with severe depression and is being investigated for use in treating other illnesses). We can see which parts of the brain are being used for what emotions and sensations and we can stimulate those regions to duplicate those emotions and sensations.
More generally, it seems like you may be confusing the map with the territory. A blank or poorly drawn area of a map doesn’t tell us about the territory. It is true that repeated failure to get a good map of an area of territory can tell us that our mapping method has a problem or that another section of our map has issues. That’s essentially what happened with Copernicus and Kepler; the repeated failures to get accurate models of the heavens forced a redrawing of fundamental sections of the map. But in order to justify that, one needs repeated problems over a long period of time in trying to get a good map of an area. If your map keeps getting more and more precise, there’s no such justification. Finally, a question, if you don’t mind: what hypothetical evidence would convince you that qualia can be explained by our current laws of physics?
We can stimulate emotions, yet we are nowhere near a satisfactory explanation of why each emotion has the psychological effects it does. It’s quite clear that we can only play with the brain at a very coarse level.
Reliable brain simulation would be solid evidence here. Others have pointed out that we probably won’t be able to revive cryopreserved patients without a thorough understanding of brain physics.
Sure, but who cares? The point is that our ability to do this has been steadily improving and there’s no indication that any part of our coarse play has turned up any evidence of any special physics at work.
The wavelength of light maps pretty straightforwardly onto our perception of color. We can trace the activation of cones in our eyes to patterns of neuron firing in the optic nerve to neurons firing in the visual cortex. “Redness” isn’t magic. “Redness” is a particular configuration (or, more properly, a set of configurations) of neurons. The only reason it seems special to you is because you are experiencing the algorithm from the inside. Consciousness is what thinking feels like, not magic.
Sure… I’m with you until you get to the part where some (all?) configurations of matter have experiences from the inside, which nobody can detect or describe, and the only evidence that these “experiences” exist is that people say they can feel them… isn’t this exactly the kind of thinking we ought to dismiss as crazy? But on the other hand, I think I feel experiences too!
You’re making this more mysterious than it needs to be. No matter what our experiences felt like, we’d still call them qualia. No matter how we used our senses to acquire information about the world, we’d still call that process experience.
Are you claiming that any sufficiently complex agent will report a mysterious feeling of consciousness? That can’t be right.
I wouldn’t feel comfortable making that claim until I’d tested it on a couple of non-human agents, and in any case I wouldn’t call it mysterious.
Really all I have is the suspicion that consciousness is much more normal than people tend to think. The only thing I’m confident of is that explaining consciousness won’t require magic or special exceptions to the laws of physics.
What sort of answer do you think people will accept as an explanation of consciousness? I ask because I suspect that however deep an understanding of thought we reach, it will not destroy all the feeling of mystery. Even after we become able to model human brains on computers, and after we discover which parts of the brain are responsible for each exact feeling, I can’t imagine how this knowledge stops people from wondering about qualia, zombies, and Chinese rooms.
I imagine Lord Kelvin felt similarly when he thought of the elan vital. It didn’t work for that, and it didn’t work for a very good reason: your ignorance of the realm of possibilities is not good evidence. An inability to come up with alternatives may be better support for a claim than showing that you have not yet been compelled to admit defeat, but it’s still nearly worthless.
I didn’t mean my question as a Kelvinian declaration that we will never understand. I was only curious whether WrongBot has some more specific idea what sort of answer can destroy the feeling of confusion when thinking about qualia. I am even not sure whether there is a question to be answered.
Right. I apologize, I didn’t read your comment very clearly. The Kelvin case offers some hope, though—after all, the New Age life-is-energy meme is a lot weaker than elan vital was.
I haven’t yet encountered a sufficiently precise definition of qualia (or consciousness, for that matter) to be able to say what exactly the confusion is, much less where it’s coming from or how it can be destroyed. The hard problem of consciousness is a wrong question, and I suspect that for any given untangling of it, the answer will be trivial.
You’re missing the basic problem: ‘neurons’ are part of the map, not the territory. The territory is made up of quarks, spacetime and probability amplitudes. What’s the set of configurations of quarks which feels from the inside like thinking or like the color red? How can you be so confident that no magic is involved in this “how it feels from the inside” business, while casually talking about configurations of neurons?
I usually find Occam’s Razor to be sufficient. You are misapplying reductionism: if consciousness maps to a set of configurations of neurons, and neurons map to quarks, spacetime, and probability amplitudes, then we have no need of mysteriously specific exceptions to physical laws. Indeed, such hypothetical and entirely unsupported exceptions have no explanatory power at all.
Why, the set of configurations of quarks which describe any member of the set of neurons which feel from the inside like thinking or like the color red, of course.
No, he’s not. Neurons are part of the territory. They are composed of other parts of the territory which are composed of quarks, spacetime, etc. But that doesn’t make a neuron not part of the territory. Just because something is ontologically reducible doesn’t mean it isn’t part of the territory. It just means that you need to be very careful not to treat it is as ontologically fundamental when it isn’t.
Fine, substitute “not ontologically fundamental” for “not part of the territory” if you must.
The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to it, simply because it is foundational enough to you and anyone else with subjective experience. There is a reasonable argument to be made that “the way it feels from the inside” is just as fundamental as the basic physics of how the world works.
This does not imply that the two are necessarily related (for instance, P-zombies or robots can be unconscious yet physically talk about subjective experience). It does mean that Occam’s razor should apply to “the way it feels from the inside”, which tends to weigh against complex explanations like “configurations of neurons” and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.
Unfortunately, this is close to nonsense. Just because something strikes me as foundational to me doesn’t give me any decent reason for thinking it has any such actually foundational status. Humans suck at introspection. We really, really suck at intuiting the differences in how we process things unless things are going drastically wrong. For example, it isn’t obvious to most humans that we use different sections of our brains to add and multiply. But there’s a lot of evidence for this. For example, fMRI scans show different areas lighting up, with areas corresponding to memory lighting up for multiplication and areas corresponding to reasoning lighting up for addition. Similarly, there are stroke victims who only lose the ability to do one or the other operation. And this is but one example of how humans fail. Relying on human feelings to get an idea about how anything in the world works, especially our own minds, is not a good idea.
I don’t follow this logic at all. I’m not completely sure what you are trying to do here but it sounds suspiciously like the theistic argument that God is a simple hypothesis. Just because I can posit something as a single, irreducible entity does not make that thing simple. (Also, can you expand on what you mean by a spooky superintelligence running debugging sessions since I can’t parse this is in any coherent way)
Small nitpick: I am not talking about what is foundational to the way our world works. I am only making the fairly trite observation that subjective experience/qualia is the only thing we can directly experience; it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.
What this is actually saying is that phenomenology (the stuff we can access by introspection) cannot directly map physical areas of the brain of the kind which might get damaged in a stroke. In itself, this is not evidence that humans “suck” at introspection; especially if our consciousness really is a quantum state with $bignum degrees of freedom, rather than a classical system with spatially separate subparts.
God is not a simple hypothesis, but “this was affected by an optimization process which cares about X or something like it” is simpler than “this configuration which happens to be near-optimal for X arose by sheer luck”. Which is pretty much what one would have to posit in order to explain our subjective experience of the extremely complicated physical systems we call “brains”. There are other avenues such as the anthropic principle, but ISTM that at some point one would start to run into circularities.
What else can it depend on? Your original claim was that it has something to do with quantum superpositions, so can you tell how these superpositions are going to explain qualia any better? It seems like you demand that the explanation be a black box without internal structure; this is contrary to what actual explanations are.
The “naive physicalists” don’t maintain anything like that. Evolution isn’t sheer luck.
I’m not trying to explain why qualia occur, just seeking a sensible physical description of them. Given the requirement that qualia should be actually experienced in some sense, a “black box” system which clearly matches these mysterious experiences is better than a complicated classical configuration plus a lengthy description of how this configuration is felt from the inside.
Indeed it’s not: it’s an optimization process! But why would evolution care about qualia? In fact, many physicalist philosophers think qualia exist as epiphenomena, and an epiphenomenon cannot be naturally selected for.
I use description and explanation as synonyms most of the time. A black-box description is not much of a description; it’s rather the lack of one. What information is contained in “qualia work like a black box”, or, in slightly fancier language, “qualia work due to a still unknown physical mechanism”? These are not descriptions of qualia; the only non-vacuous interpretation of such sentences is “contemporary physics is not going to explain qualia”, which may be true, but is still a statement about our current knowledge, not about qualia.
Well, you are probably right in that, even if we are getting dangerously close to the philosophical zombies’ realm.
Very little, but this is not a real description of qualia, just a sketch proposal which demonstrates a promising avenue of research. A complete description would state what physical system in the brain is responsible for maintaining complex, “black box” quantum states, and perhaps how that physical system interacts with known neural correlates of subjective experiences. Unfortunately, we’re nowhere near that level yet.
Dangerously close? Do you fear that P-zombies will infect you with an epiphenomenal virus and cause you to lose your subjective experience?
What makes this avenue different from investigation of neuron configurations? New physical laws were never discovered by first rejecting the old ones on the grounds that they couldn’t possibly work. All discoveries of new physics happened after conducting research using the old paradigm and noticing anomalies. I mean, if there is something strangely quantum going on in brains, we will not miss it even if we use the conventional approach.
Or, said differently, I still have no idea what light quantumness could shed on the question.
I fear talking about things that aren’t connected to observable facts. I fear that I might say a lot of grammatically correct sentences with no actual meaning.
Not much. It’s still neuroscience, but it takes reports of subjective experience a bit more seriously, and tries to explain them by using existing physics, rather than treating them as meaningless or as magical and unexplainable.
Look, it’s not that complicated. I’m not the only person who talks about the Cartesian theater and claims that we can somehow feel brain algorithms from the inside. If subjective experience is not an observable fact to you, then your psychology is radically different from that of many other people.
I should have written objective observable facts or something like that. I can observe that I am not a P-zombie, however the beauty of the whole P-zombie business is that such observation is, sort of, insufficient. I would need to observe whether you are a P-zombie, and that I can’t.
It is perhaps more economical and Occam-razorish for me to expect that other people are no P-zombies either, but even if they were zombies, I would have no way to realise that, and this renders the zombie question quite uninteresting.
Do you question the consensus that you see using your eyes? Because the eye is a blatantly complicated mechanism directly in the middle of one of the direct experiences of the world you stake your theory on.
I’m not questioning the fact that complicated mechanisms are involved in creating your subjective experience; I question the physical description of that subjective experience as an incredibly complicated configuration in the brain. If your qualia are at all real in some sense, they should correspond to something far simpler than that on Occam’s Razor grounds. Alternately, you might just be a P-zombie. But then you’d have serious problems experiencing how your brain feels from the inside, although your brain would definitely be talking about its internal experiences.
Why aren’t you? You just said that “[qualia] should correspond to something far simpler than that”. If a (say) visual quale is simple, then why does the human system need a complicated mechanism to capture large numbers of photons such that they form a coherent image on a surface coated with photosensitive neurons, which are wired so as to cause large-scale effects on other parts of the neural (and glial) system of the brain, starting with the visual cortex and spreading from there … to cause something simple? Light was simple to start with! If you expect things to be simple at the Cartesian theater, the visual system moves the wrong way.
Light is simple, but evolved organisms care very little about the fundamental qualities of light. They care a lot about running efficient computations using various inputs, including the excitation of photosensitive neurons. This is probably why the Cartesian theater feels very much like computation on high-level inputs and outputs, rather than objectively fundamental things such as wavelengths of light. And the computations which transform low-level data like the excitation of sensory neurons into high-level inputs are probably unconscious, because they are qualitatively different from conscious computation.
I would expect optimization for efficiency to be something evolution does—but I am compelled to note that I mentioned “the Cartesian theater” as a reference to Daniel Dennett’s Consciousness Explained, where he strenuously refuted the idea of the Cartesian theater. By Dennett’s argument—and even when Consciousness Explained came out, he had a lot of research data to work from—the collocation of all sensory data in a single channel to run past some homunculus recording our conscious experience is unlikely. After all, there already is a data-processing entity right there to collect all the sensory data—that’s the entire brain. So within the brain, it should not be surprising that different conscious experiences are saved to memory from different parts. Particularly since the brain is patently a parallel computer anyway.
Daniel Dennett’s “refutation” of the Cartesian theater has been widely criticized. Basically, he relies on perceptual illusions such as discrete motion being perceived as continuous, arguing that there should be a fact of the matter as to whether “the motion in the Cartesian theater” is continuous or not. But phenomenology is far simpler (or more complicated) than that: the fact that we perceive the quale of continuous_motion does not imply that a homunculus somewhere is seeing the object in an intermediate position at each given moment in time. It is a strawman argument.
Before I respond: are we actually getting anywhere in this discussion? I have this sinking feeling that I’m asking the wrong questions.
Well, what is it, then?
Ahhhh, I see now. Subjective experience must be ontologically foundational because it feels foundational, subjectively. This seems oddly… circular.
Configurations of neurons are not complex. They are complicated, but they can still be explained by the same physics as everything else in the world. You are proposing a more complex universe. Or possibly a god. They are equally implausible without supporting evidence.
Feel free to run garbage collection on that circularity. You’ll find out what it feels like to subjectively vanish in a puff of logic.
Not really, since both subjective experience and quantum mechanics are part of our universe already. Perhaps one could say that I’m proposing more complicated brains, but that adds little or nothing to the overall complexity budget, given what we know about quantum biology, biophysics, evolution, etc.
No, you are proposing a more complicated universe. Quantum mechanical systems can be simulated on a classical computer given a source of randomness. The only caveat is that if certain complexity-theoretic conjectures are true, it takes more time or more memory for a classical system to run these simulations than it would take a quantum system. If the complexity hierarchy exhibits a partial collapse, with, say, BQP equal to P, then even this would in some sense not be true, and quantum computers would be nothing more than classical machines with a source of random bits. Most computer scientists don’t believe that, but the thrust of this argument only requires that classical machines with randomness can simulate quantum machines given extra time and space. Since that is the case, asserting that quantum mechanics has any chance of causing things like qualia and consciousness requires that there be fundamental gaps in our understanding of quantum mechanics. It would also likely violate many forms of the Church-Turing thesis. So our understanding of both QM and theoretical computer science would have to be fundamentally wrong for this sort of approach to even have a chance of working.
This implies that unconscious classical systems can simulate a conscious being. But such a simulation of consciousness would not involve the systems in our physical world which can actually be “felt from the inside”. In this theory, qualia and consciousness are not caused by quantum mechanics; they are what some extremely complex quantum states feel like.
If quantum algorithms are at all useful, this is enough for evolution to favor quantum computation over classical.
At this point how is this claim any different than claiming that these are classical systems and that qualia and consciousness are what those algorithms feel like?
That’s actually the best argument I’ve heard for supposing that there’s a quantum mechanical aspect to our processing. Thank you for bringing it to my attention. It does make a QM aspect more plausible. However, it is still a very weak argument, since a) evolution would only do this if it had an easy way of keeping things in coherence that didn’t take up too many resources, and b) it seems unlikely that there’s a substantive evolutionary advantage in any computational speedup for the processes we needed to do in the wild. I don’t think, for example, that humans needed to factor large integers in our hunter-gatherer societies. This does suggest the idea of deliberately evolving beings that actually use quantum mechanics in their thought processes, by selecting for ones that are good at algorithms that do have speedups on a QM system.
Quantum systems have much nicer properties from this point of view. An internally entangled quantum state can be an ontologically basic entity while still possessing a rich internal structure, in a way that has no direct equivalents in classical physics.
Models of quantum computation are quite variable in how resistant they are to decoherence. Topological quantum computing is much more resistant to errors than models based on ordinary quantum particles.
Why wouldn’t there be? Intelligent processing clearly confers some evolutionary advantage, and there have been many proposals for artificial general intelligence (AGI) using quantum computation.
That makes some sense, although I don’t see why a classical simulation of the same wouldn’t feel identical.
This may be true in the same sense that sending a probe to Betelgeuse is easier than sending a probe to the Andromeda galaxy. You are still talking about fantastically difficult things to keep in coherence. We’re still talking about systems kept at no more than about 5 kelvin (being generous). It is noteworthy that so far we’ve had far more success implementing standard quantum computers than topological quantum computers.
There’s no evidence of any process we associate as part of “intelligence” as being sped-up or made more efficient by quantum computation. I’d also be very interested in seeing citations for the claim that there are “many proposals for artificial general intelligence (AGI) using quantum computation.”
Do you demand the exact wave function?
I was never very comfortable with the “consciousness is how thinking feels from the inside” explanation, since it hardly explains anything. However, the alternatives explain even less. Unless a hypothesis predicts something testable, it is useless. The position that no non-standard physics is involved is a kind of default, held whenever there are no clear reasons to think otherwise; that’s all.
Considering that quantum physics is Turing-computable (unless it’s nonlinear, etc.), any quantum effects could be reproduced with classical computation. Therefore the assumption that cognition must involve quantum effects implicitly assumes that quantum physics is nonlinear, or meets one of the various other requirements for hypercomputation.
In this light, the first question that ought to be asked of people claiming quantum effects in the brain is: what computation (performed in the brain) requires essentially infinite loops completed in finite time, and on the basis of what physics experiment do they believe that quantum effects are more than Turing-computable?
I think the brain is probably ultimately computable by a classical computer, and yet quantum computing in the brain might still be significant. Here are a couple of the potential problems we’ll have if the brain relies on quantum effects.
1) Difficulty in replacing bits of the brain functionally. If consciousness is some strange transitory gestalt quantum field, then you would need to make a brain prosthesis with the same electromagnetic properties as a neuron, which might be quite hard.
2) A harder time simulating brains/doing AI: you might have to push back the date you expect Whole Brain Emulations to become available (depending on when we expect quantum computers to be useful).
I’m having trouble parsing your above comment. Are the points labeled 1 and 2 arguments for the presence of quantum computing in the brain or consequences of that belief?
Sorry, consequences. I’ll edit for clarity.
Quantum computing in the brain might be happening, but if we want to understand consciousness it is irrelevant (unless consciousness is noncomputable, in which case it becomes a claim about quantum physics yet again). It’s as relevant as details about transistors or vacuum tubes are for understanding sorting algorithms.
Naturally, when considering brain prostheses or simulating a brain, the actual method by which the brain computes is relevant.
Whoever said that this conversation was about understanding consciousness?
Personally I think that that topic is a tarpit, which I prefer to ignore until we know how the brain works.
I merely wished to clarify the difference between consciousness and how it is implemented in the brain. I had no intention of implying that it was part of the discussion. In retrospect, the clarification was not required.
It’s just way too common for the two issues to get mixed up, as can be seen on the various threads.