This is… really not how scientific practice works, though.
This is how some older philosophers of science thought science ought to work, namely Karl Popper. He had some points, for sure, but notably, he was not a working scientist himself, so he was speculating about a practice he was not a part of. And having described to scientists the laws that ought to govern how they act, he had to discover that they in fact did not act that way, nor did they agree that they would get better results if they did. Philosophy of science really took off as an entire discipline here, and a lot of it pointed out huge aspects Popper had overlooked, or outright contradictions between his ideas and actual scientific practice—in part because his clean idea of falsification does not translate well to the testing of complex theories.
Instead of speculating about how science might work, and then saying it is bad, let’s look at how it actually does work, to see if your criticism applies. Say you applied for a grant to develop this theory of yours. Or submitted a talk on it at a scientific conference. Or drafted it as a project proposal for an academic position. This is usually the point at which the scientific community determines whether something should go further.
They’d ask you all sorts of things. Where did your idea come from? Is your theory consistent with existing theories? If not: does it plausibly promise to resolve their issues, and do a comparably good job, and do you have an explanation for why evidence so far pointed so much to existing theories? What observations do you have that would suggest your theory is likely? Is it internally consistent? Does it explain a lot, while being relatively simple, with relatively few extra assumptions? What mechanism are you proposing, how does it look in detail, and is it plausible? Can you show us mathematical equations, or code? If your theory were correct, what would follow—are there useful applications, new things we could do or understand that we previously could not, new areas? If we gave you money to pursue this for 3 years, what tangible results do you think you are likely to produce, and what is your step-by-step plan to get there? What do you want to calculate, what do you want to define, what experiments do you intend to run, what will all this produce?

If the answers to this seem plausible and promising, the next step would be getting some experts in quantum physics, neuroscience, and philosophy of mind, having them read your work, and asking you some very critical questions—can you answer these questions? Do they think the resulting theory is promising? There is no one simple set of rules or criteria, but the process is not random, either. And it gives a relatively decent assessment of whether a theory is plausible enough to invest in.
I’ve mentioned a survey among researchers of consciousness on LessWrong before: https://academic.oup.com/nc/article/2022/1/niac011/6663928 Note that, interestingly, when it comes to theories of consciousness, the researchers are asked to evaluate whether the various theories the survey then goes through are in their opinion promising. None of them can be falsified yet, but that does not mean they are all given the same amount of resources. They all clearly understand the question, and give clear answers. And quantum theories come in at the very end of the list. Regular science was absolutely equipped to answer this very question, prior to any falsification.
Popper was never a working scientist, but “In 1928, Popper earned a doctorate in psychology, under the supervision of Karl Bühler—with Moritz Schlick being the second chair of the thesis committee” (Schlick was a famous Vienna Circle figure).
I am not saying Popper was scientifically illiterate at all. I find falsification a beautiful ideal, and have admiration for him.
But I am saying that you get a very different philosophy of science if you base your writings not on abstract reflections on how a perfect science ought to work, but on doing experiments yourself (Popper’s thesis was “On the Problem of Method in the Psychology of Thinking”), and, more importantly, on observing researchers doing actual, effective research, and how it is determined which theories make it and which don’t.
And I am saying that the messiness of real science makes pure falsification naive and even counterproductive—it rules out some things too late (which should have been given up as non-promising), and others too early (when their core idea was brilliant, but the initial way of phrasing it was still faulty, or needed additional constraints; theories, when first developed, aren’t yet finished). Looking at paradigmatic revolutions in science, where they actually came from, and what impact the experiments falsifying them actually had: many theories we now recognise as clearly superior to the ones they supplanted were falsified—in their initial imperfect formulation, or due to false external implicit assumptions, or due to faulty measuring instruments—and yet the researchers did not give up on them, and turned out to be right not to. But they did the very things Popper was so worried about: make a theory, make a prediction, do an experiment, see the prediction did not work out—and keep the theory anyway, adapting it to the prediction. The question of at which point this becomes perfecting a promising theory into a coherent beauty that explains all prior observations and now also makes precise novel predictions that come true, and at which point it becomes patching up utter nonsense with endless random additions that make no sense except to account for the bonkers results, is not a trivial one to answer, but an important one.

Take the classic switch to placing the sun in the center of the solar system, rather than the earth. Absolutely the correct move. It also initially led to absolute nonsense in the predictions, because the old false theory had been patched up so many times to match observations that it could predict quite a bit, while the new theory, being wrong about a huge number of other factors about how planets move, was totally off.
If you put the sun in the center, but assume planets run in a perfect circle around it, and have not got the faintest idea how gravity works, the planet’s actual location will be very different from the one you predicted—but the thing that is wrong here is not your idea that the sun ought to be in the center, it is the idea that a planet circles the sun in a perfect circle. In practice, though, figuring out which of your assumptions led to the mess is not that easy, yet it really has to be done in the long run.
Imre Lakatos made a decent attempt at tracing this, also integrating Thomas Kuhn’s excellent ideas on paradigm shifts in science.
Regular science was absolutely equipped to answer this very question, prior to any falsification.
Almost half of respondents to the poll (46%) are neutral or positive towards quantum theories of consciousness. That’s not a decisive verdict in either direction.
De facto, it is—and honestly, the way you are presenting this through how you are grouping it misrepresents the result. Of the ten theories or theory clusters evaluated, the entire group of quantum theories fares worst by a significant margin, to a degree that makes it clear that there won’t be significant funding or attention going here. You are making it appear less bad by grouping together the minuscule number of people who actually said this theory definitely held promise (which looks to be about 1 %) and the people who thought it probably held promise (about 15 %) with the much larger number of people who selected “neutral on whether this theory is promising”, while ignoring that this theory got by far the highest number of people saying “definitely no promise”. Like, look at the visual representation, in the context of the other theories.
And why do a significant number of people say “neutral”? I took this to mean “I’m not familiar enough with it to give a qualified opinion”—which inherently implies that it did not make it to their journals, conferences, university curricula, paper reading lists, etc. enough for them to seriously engage with it, despite it having been around for decades, which is itself an indication of the take the general scientific community had on this—it just isn’t getting picked up, because over and over, people judge it not worth investing in.
Compare how the theories higher up in the ranking have significantly lower numbers of neutral—even those researchers who in the end conclude that this is not the right direction after all saw these theories (global workspace, predictive processing, IIT) as worth properly engaging in based on how the rest of the community framed them. E.g. I think global workspace misses a phenomenon I am most interested in (sentience/p-consciousness) but I do recognise that it had useful things to say about access consciousness which are promising to spell out further. I do think IIT is wrong—but honestly, making a theory mathematically precise enough to be judged as wrong rather than just vague/unclear already constituted promising progress we can use to learn from. But I share the assessment of my fellow researchers here—no quantum theory ever struck me as promising enough for me to even sit down for a couple workdays to work my way through it. (I wondered whether this was because I subconsciously judged quantum phenomena to be too hard, so I once showed one to my girlfriend, a postdoc who works in quantum physics in academia for a living… and whose assessment was, you guessed it, “This is meaningless, where are the equations?… Oh dear God, what is up with this notation? What, this does not follow! What is that supposed to even mean? … I am sorry, do you really need me to look at this? This nonsense does not seem worth my time”.) If a conference offered a quantum theories talk and another on something else, I’d almost certainly go to the other one—and if the other one was also lame/unpromising, I’d be more likely to retreat to my hotel room to meditate or work out to be energised for a later talk, take my reMarkable and dig into my endless reading list of promising papers, or to grab a coffee and hit up a colleague in the mingle areas about an earlier awesome talk and potential collaboration. 
There is so much promising stuff to keep up with, so much to learn, to practice, to write out, to teach, to support, to fund, and so little money and time, that people just cannot afford to engage with things that do not seem promising.
If over half the scientific community judges something to not be worth pursuing (so they have decided, at minimum, not to engage with it or actively support it), of which half are so strongly opposed to pursuing it that they will typically actively block funding allocations or speaking slots or publications in this direction as a waste of resources and diversion, and the majority of the remainder are not interested enough to even have an opinion, while the number of genuine supporters is also minuscule… this is not the sign of a theory that is going anywhere. A paradigm-shifting theory might have significant opposition, but it also has significant proud and vibrant supporters, and most people have an opinion on it. This is really clearly not the case here. Instead, it holds the horrible middle ground between ridiculed and ignored, which amounts to death in science. Frankly, I was surprised it even made it onto the survey, and I wondered if they just put it on there to make clear what the research community thought on the issue—I doubt they will still have it on the next.
So what do you make of there being a major consciousness conference just a few days from now, with Anil Seth and David Chalmers as keynote speakers, in which at least 2 out of 9 plenary sessions have a quantum component?
Of the nine plenary sessions, I see one explicitly on quantum theories. Held by the anesthesiologist Stuart Hameroff himself, who I assume was invited by… the organiser and center director, Stuart Hameroff.
Let me quote literal Wikipedia on this conference here: “The conference and its main organizers were the subject of a long feature in June 2018, first in the Chronicle of Higher Education, and re-published in The Guardian. Tom Bartlett concluded that the conference was “more or less the Stuart [Hameroff] Show. He decides who will and who will not present. [...] Some consciousness researchers believe that the whole shindig has gone off the rails, that it’s seriously damaging the field of consciousness studies, and that it should be shut down.”
For context, the Stuart Hameroff mentioned here is well-known for being a quantum proponent, has been pushing for this since the ’80s, and has been very, very broadly criticised for this for a long time, without that going much of anywhere.
I assume Chalmers agreed to go because when this conference first started, Chalmers was a founding part of it, and it was really good back then—but you’d have to ask him.
I’d be pleased to be wrong—maybe they have come up with totally novel evidence and we will understand a whole lot more about consciousness via quantum, and we will feel bad for having dismissed him. But I am not planning on being there to check personally; I have too much other stuff to do that I am overwhelmed with, and really try to avoid flying when I can help it. Unsure how many others that is true of—the Wikipedia article has the interesting “Each conference attracts hundreds[citation needed] of attendees.” note. If the stuff said there is genuinely new and plausible enough to warrant re-evaluation, I expect it will make it round the grapevine. Which was the point I was making.
I have actually worked with Stuart Hameroff! So I should stop being coy: I pay close attention to quantum mind theories, I have specific reasons to take them seriously, and I know enough to independently evaluate the physics component of a new theory when it shows up. This is one of those situations where it would take something much more concrete than an opinion poll to affect my views.
But if I were a complete outsider, trying to judge the plausibility of such a hypothesis, solely on the basis of the sociological evidence you’ve provided… I hope I’d still only be mildly negative about it? In the poll, only 50% of the researchers expressly disapprove. A little investigation reveals that there are two conferences, TSC and ASSC; that ASSC allows a broader range of topics than TSC; and that quantum mind theories are absent from ASSC, but have a haven at TSC because the main organizer favors them. ASSC can say quantum mind is being artificially kept alive by an influential figure, TSC can say he’s saving it from the prejudice of professional groupthink.
(By the way, the other TSC plenary that I counted as partly quantum is “EM & Resonance Theories”, because it’s proposing to ground consciousness in a fundamental physical field.)
The main reason is the fuzzy physical ontology of standard computational states, and how that makes them unsuitable as the mereological base for consciousness. When we ascribe a computational state to something like a transistor, we’re not talking about a crisply objective property. The physical criterion for standard computational ontology is functional: if the device performs a certain role reliably enough, then we say it’s in a 0 state, or a 1 state, or whatever. But physically, there are always possible edge states, in which the performance of the computational role is less and less reliable. It’s a kind of sorites problem.
For engineering, the vagueness of edge states doesn’t matter, so long as you prevent them from occurring. Ontology is different. If something has an observer-independent existence, then for all possible states, either it’s there or it’s not. Consciousness must satisfy this criterion, standard computational states cannot, therefore consciousness cannot be founded on standard computational states.
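To make the sorites point concrete, here is a minimal sketch (my own illustration, with made-up threshold values, not anything from the original argument): the functional criterion for a bit is a threshold convention, and it leaves a physically real region with no objective 0-or-1 answer.

```python
def logical_state(voltage_v, low_max=0.8, high_min=2.0):
    """Classify a transistor output voltage as a computational state.

    The thresholds are engineering conventions, not facts of nature:
    below low_max the device reliably reads as 0, above high_min as 1,
    and in between it drives the next stage less and less reliably.
    """
    if voltage_v <= low_max:
        return 0
    if voltage_v >= high_min:
        return 1
    return None  # edge state: physically definite, computationally vague

logical_state(0.2)  # -> 0
logical_state(3.1)  # -> 1
logical_state(1.4)  # -> None: no objective fact about which bit this is
```

Engineering just keeps voltages out of the middle band; the ontological point is that the middle band exists for every possible choice of thresholds.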
For me, this provides a huge incentive to look for quantum effects in the brain being functionally relevant to cognition and consciousness—because the quantum world introduces different kinds of ontological possibilities. Basically, one might look for reservoirs of entanglement that are coupled to the classical computational processes which form the whole of present-day cognitive neuroscience. Candidates would include various collective modes of photons, electrons, and phonons, in cytoplasmic water or polymeric structures like microtubules. I feel like the biggest challenge is to get entanglement on a scale larger than the individual cell; I should look at Michael Levin’s stuff from that perspective some time.
Just showing that entanglement matters at some stage of cognition doesn’t solve my vagueness problem, but it does lead to new mereological possibilities that appear to be badly needed.
Many worlds is an ontological possibility. I don’t regard it as favored ahead of one-world ontologies. I’m not aware of a fully satisfactory, rigorous, realist ontology, even just for relativistic QFT.
Is there a clash between many worlds and what you quoted?
I was thinking that “either it’s there or it’s not” as applied to a conscious state would imply you don’t think consciousness can be in an entangled state, or something along those lines.
But reading it again, it seems like you are saying consciousness is discontinuous? As in, there are no partially-conscious states? Is that right?
I’m also unaware of a fully satisfactory ontology for relativistic QFT, sadly.
Gradations of consciousness, and the possibility of a continuum between consciousness and non-consciousness, are subtle topics; especially when considered in conjunction with concepts whose physical grounding is vague.
Some of the kinds of vagueness that show up:
Many-worlders who are vague about how many worlds there are. This can lead to vagueness about how many minds there are too.
Sorites-style vagueness about the boundary in physical state space between different computational states, and about exactly which microphysical entities count as part of the relevant physical state.
(An example of a microphysically vague state which is being used to define boundaries, is the adaptation of “Markov blanket” by fans of Friston and the free energy principle.)
I think a properly critical discussion of vagueness and continuity, in the context of the mind-brain relationship, would need to figure out which kinds of vagueness can be tolerated and which cannot; and would also caution against hiding bad vagueness behind good vagueness.
Here I mean that sometimes, if one objects to basing mental ontology on microphysically vague concepts of Everett branch or computational state, one is told that this is OK because there’s vagueness in the mental realm too—e.g. vagueness of a color concept, or vagueness of the boundary between being conscious and being unconscious.
Alternatively, one also hears mystical ideas like “all minds are One” being justified on the grounds that the physical world is supposedly a continuum without objective boundaries.
Sometimes, one ends up having to appeal to very basic facts about the experienced world, like, my experience always has a particular form. I am always having a specific experience, in a way that is unaffected by the referential vagueness of the words or concepts I might use to describe it. Or: I am not having your experience, and you are not having mine, the implication being that there is some kind of objective difference or boundary between us.
To me, those are the considerations that can ultimately decide whether a particular proposed psychophysical vagueness is true, possible, or impossible.
“I pay close attention to quantum mind theories, I have specific reasons to take them seriously”
Now I am curious. What specific reasons?
Say I had an hour of focus to look into this one of these days. Can you recommend a paper or something similar I could read in that hour that should leave me convinced enough to warrant digging into this more deeply? Like, an overview of central pieces of evidence and arguments for quantum effects being crucial to consciousness, with links so one can review the logic and data in detail if sceptical, a hint of what profound implications this would have for ethics, theory and empirical methods, and brief rebuttals to common critiques, with links to more comprehensive ones if not immediately convincing? Something with math to make it precise? Doesn’t have to (and can’t) cover everything of course, but enough that after an hour, I’d have reason to suspect that they are onto something that cannot easily be otherwise explained, that their interpretation is plausible, and that if they are right, this really matters, so I will be intrigued enough that I would then decide to invest more time, and know where to continue looking?
If there is genuine evidence (or at least a really good, plausible argument to be made for) quantum effects playing a crucial role for consciousness, I would really want and need to know. It would matter for issues I am interested in, like the resolution necessary in scanning and the functionality necessary in the resulting process for uploading to be successful, and independently for evaluating sentience in non-human agents. It intuitively sounds like crucial quantum effects would massively complicate progress in these issues, so I would want good reason to assume that this is actually necessary. But if we cannot make proper progress without it, no matter how annoying it will be to compute, and how unpopular it is, I would want to know.
Originally I was going to answer your question with another question—what kind of relation do you think exists between fundamental physical properties of the brain and (let’s say) phenomenal properties? I’m not asking for biological details, but rather for a philosophical position, about reduction or emergence or whatever. Since you apparently work in consciousness studies, you possibly have quite precise opinions on philosophy of mind; and then I could explain myself in response to those.
But I already discussed my views with @Adele Lopez in the other thread, so I may as well state them here. My main motivation is ontological—I think there is a problem in principle with any attempt to identify (let’s say) phenomenal properties, with physical properties of the brain that are not microphysically exact.
If a physical property is vague, that means there are microphysically exact states where there is no objective fact about whether or not the vague physical property holds—they’re on the fuzzy edge of belonging or not belonging to that classification.
But if the properties constitutive of consciousness are identified with vague physical properties of the brain, that means that there are specific physical states of the brain, where there is no objective fact about e.g. whether or not there is a consciousness present. And I regard that as a reductio ad absurdum, of whatever premise brought you to that conclusion.
Possibly this argument exists in the literature, but I don’t have a reference.
If you do think it’s untenable to reduce consciousness to computational states which are themselves vague coarse-grainings of exact physical states, then you have an incentive to consider quantum mind theories. But certainly the empirical evidence isn’t there yet. The most advanced quantum phenomenon conventionally believed to be at work in biology, is quantum coherence in chlorophyll, and even there, there isn’t quite consensus about its nature or role.
Empirically, I think the verdict on quantum biology is still “not proven”—not proved, and not disproved. The debate is mostly theoretical, e.g. about whether decoherence can be avoided. The problem is that quantum effects are potentially very subtle (the literature on quantum coherence in chlorophyll again illustrates this). It’s not like the statistics of observable behaviors of neurons tells us all the biophysical mechanisms that contribute to those behaviors. For that we need intimate biophysical knowledge of the cell that doesn’t quite exist.
Mh. I am not sure I follow. Can I give an analogy, and you tell me whether it holds or not?
I work on consciousness. As such, I am aware that individual human minds are very, very complicated and confusing things.
But in the past, I have also worked on human crowd dynamics. Every single human in a human crowd is one of these very complicated human things with their complicated conscious minds. Every one is an individual. Every single one has a distinct experience affecting their behaviour. They turn up at the crowd that day with different amounts of knowledge, and intentions, and strength, and all sorts of complicating factors. Like, hey, maybe they have themselves studied crowd dynamics, and wish to use this knowledge to keep safe.
But if I zoom out, and look at the crowd as a whole, and want to figure out e.g. if there will be a stampede… I do not actually need to know any of that. A dense human crowd starts acting very much like a liquid. Tell me how dense it is, tell me how narrow the corridors are through which it will be channeled… and we can say whether people will likely get trampled, or even certainly get trampled. Not which one will be trampled, but whether there will be a trampling. I can say, if we implement a barrier here, the people will spill around there; if we close a door here, people will pile up there; if we let more people enter here, the force will get intolerable over there. Basically, I can easily model the macro effects of the whole system, while entirely ignoring the micro effects. Because they even out. Because the individual randomness of the humans does not change the movement of the crowd as a whole. And if a grad student said, but shouldn’t we be interviewing all the individual people about their intentions for how they want to move today, I would say absolutely hard no, that is neither necessary nor helpful, but a huge time sink.
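A toy sketch of that macro-level modelling (my own illustration; the roughly 5 people per square metre danger threshold is a commonly cited rule of thumb for standing crowds, not a precise constant):

```python
def crowd_density(people, area_m2):
    """People per square metre: the one macro variable that matters here."""
    return people / area_m2

def crush_risk(density_per_m2, threshold=5.0):
    """Above roughly 5 people/m^2, a standing crowd starts to behave like
    a liquid under pressure and trampling becomes likely, regardless of
    any individual's knowledge or intentions."""
    return density_per_m2 >= threshold

crowd_density(1200, 200)  # -> 6.0 people/m^2
crush_risk(6.0)           # -> True, whoever those 1200 individuals are
```

The point of the sketch is what is *absent*: no variable for any individual’s intentions appears anywhere in the model.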
Similarly, I know that atoms are not, at all, simply little billiard balls that just vibrate more and push further away from each other if you make them warmer, like we are shown in primary school. There are a lot of chemical and physical effects where that is very important to know. But if I just want to model whether heating the content of my pressure pot to a certain temperature will make it explode? Doesn’t matter at all. I can assume, for simplicity’s sake, that atoms are little billiard balls, and be perfectly fine. If I added more information, my prediction would not get better. I might actually end up with so much confusion I can’t predict anything at all, because I never finish the math. I also know that Newton’s ideas were tragically limited compared to Einstein’s, and if I were to build a space rocket, I would certainly want proper physics accounting for relativity. But if I am just playing billiards, with everyone involved on earth, and the balls moving insanely slowly compared to the speed of light? I’ll be calculating trajectories with Newton, and not feeling the slightest bit guilty. You get the idea.
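The billiard-ball model is literally enough for the pressure-pot estimate. A minimal sketch, assuming a sealed pot whose contents behave like an ideal gas at fixed volume (Gay-Lussac’s law, a special case of PV = nRT; real pots with boiling water are messier, since steam pressure rises much faster):

```python
def pressure_after_heating(p_initial_kpa, t_initial_k, t_final_k):
    """At constant volume, pressure scales with absolute temperature:
    p2 = p1 * (T2 / T1). No quantum detail about the atoms is needed."""
    return p_initial_kpa * (t_final_k / t_initial_k)

# A sealed pot at 100 kPa and 293 K (20 degrees C), heated to 586 K:
pressure_after_heating(100.0, 293.0, 586.0)  # -> 200.0 kPa
```

Doubling the absolute temperature doubles the pressure; compare that against the pot’s rated limit and you have your explosion answer, billiard balls and all.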
I see consciousness as an emergent phenomenon, but in a very straightforward sense of the word, the way that, say, crowd violence is an emergent phenomenon. Not magical or beyond physics. And I suspect there comes a degree of resolution in the underlying substrate where it ceases to matter for the macroscopic effect, where figuring it out is just detail that will cause extra work and confuse everyone, and we already have a horrible issue in biology with people getting so buried beneath details that we get completely stuck. I don’t think it matters exactly how many neurotransmitter molecules are poured into the gap, for example, just whether the neuron fires or not as a result. So I suspect that that degree of resolution is reached far before we get down to the quantum level, with whole groups of technically very different things being grouped as effectively the same for our purposes. So every macroscopic state would have a defined designation as conscious or not, but beneath that, a lot of very different stuff would be grouped together. But there would be no undefined states, per se. The conscious system would be the one where, to take a common example, information has looped around and back to the same neuron, regardless of how exactly it did.
But I say all this while not having a good understanding of quantum physics at all, so I am really sorry if I got you wrong.
every macroscopic state would have a defined designation as conscious or not [...] there would be no undefined states, per se
But the actual states of things are microscopic. And from a microscopic perspective, macroscopic states are vague. They have edge cases, they have sorites problems.
For crowds, or clouds, this doesn’t matter. That these are vague concepts does not create a philosophical crisis, because we have no reason to believe that there is an “essence of crowd” or “essence of cloud”, that is either present or not present, in every possible state of affairs.
Consciousness is different—it is definitely, actually there. As such, its relationship to the microphysical reality cannot be vague or conventional in nature. The relationship has to be exact.
The conscious system would be the one where, to take a common example, information has looped around and back to the same neuron, regardless of how exactly it did.
So by my criteria, the question is whether you can define informational states, and circulation of information, in such a way that from a microphysical perspective, there is never any ambiguity about whether they occurred. For all possible microphysical states, you should be able to say whether or not a given “informational state” is present. I’m not saying that every microphysical detail must contribute to consciousness; but if consciousness is to be identified with informational states, informational states have to have a fully objective existence.
This is… really not how scientific practice works, though.
This is how some, older, philosophers of science thought science ought to work. Namely Karl Popper. Who had some points, for sure, but notably, was not a scientist himself, so he was speculating about a practice he was not a part of—and had to discover that, having described to scientists the laws that ought to govern how they ought to act, found that they in fact did not, nor agreed that they would get better results that way. Philosophy of science really took off as an entire discipline here, and a lot of it pointed out huge aspects that Popper had overlooked, or outright contradictions between his ideas and actual scientific practice—in part, because his clean idea of falsification does not translate well to the testing of complex theories.
Instead of speculating about how science might work, and then saying it is bad, let’s look at how it actually does, to see if your criticism applies. Say you applied for a grant to develop this theory of yours. Or submitted a talk on it at a scientific conference. Or drafted it as a project todo for an academic position. This is usually when the scientific community determines if something should go further.
They’d ask you all sorts of things. Where did your idea come from? Is your theory consistent with existing theories? If not; does it plausibly promise to resolve their issues, and do a comparably good job, and do you have an explanation for why evidence so far pointed so much to existing theories? What observations do you have that would suggest your theory is likely? Is it internally consistent? Does it explain a lot, while being relatively simple with relatively few extra assumptions? What mechanism are you proposing, and how does it look in detail, it is plausible? Can you show us mathematical equations, or code? If your theory were correct, what would follow—are there useful applications, new things we could do or understand that we previously could not, new areas? If we gave you money to pursue this for 3 years, what tangible results would you think you are likely to produce, and what is your step by step plan to get there? What do you want to calculate, what do you want to define, what experiments do you intend to run, what will all this produce? - If the answers to this seem plausible and promising, the next step would be getting some experts in quantum physics, neuroscience, and philosophy of mind, having them read your work, and ask you some very critical questions—can you answer these questions? Do they think the resulting theory is promising? There is no one simple set of rules of criteria, but the process is not random, either. And it gives a relatively decent assessment of whether a theory is plausible enough to invest in.
I’ve mentioned a survey among researchers of consciousness on LessWrong before: https://academic.oup.com/nc/article/2022/1/niac011/6663928 Note that, interestingly, when it comes to theories of consciousness, the researchers are asked to evaluate whether the various theories the survey goes through are, in their opinion, promising. None of them can be falsified yet, but that does not mean they are all given the same amount of resources. They all clearly understand the question, and give clear answers. And quantum theories come in at the very end of the list. Regular science was absolutely equipped to answer this very question, prior to any falsification.
Popper was never a working scientist, but “In 1928, Popper earned a doctorate in psychology, under the supervision of Karl Bühler—with Moritz Schlick being the second chair of the thesis committee” (Schlick was a famous Vienna Circle figure).
I am not saying Popper was scientifically illiterate at all. I find falsification a beautiful ideal, and have admiration for him.
But I am saying that you get a very different philosophy of science if you base your writings not on your abstract reflections on how a perfect science ought to work, but on doing experiments yourself (Popper’s thesis was “On the Problem of Method in the Psychology of Thinking”) and, more importantly, on observing researchers doing actual, effective research, and how it is determined which theories make it and which don’t.
And I am saying that the messiness of real science makes pure falsification naive and even counterproductive: it rules out some things too late (which should have been given up as non-promising), and others too early (when their core idea was brilliant, but the initial way of phrasing it was still faulty, or needed additional constraints; theories, when first developed, aren’t yet finished). Look at paradigmatic revolutions in science, where they actually came from, and what impact falsifying experiments actually had. Many theories we now recognise as clearly superior to the ones they supplanted were falsified in their initial imperfect formulation, or due to external implicit assumptions that were false, or due to faulty measuring instruments; and yet the researchers did not give up on them, and turned out to be right not to. But they did the very things Popper was so worried about: make a theory, make a prediction, do an experiment, see the prediction did not work out, and keep the theory anyway, adapting it to the result. The question of at which point this becomes perfecting a promising theory into a coherent beauty that explains all prior observations and now also makes precise novel predictions that come true, and at which point it becomes patching up utter nonsense with endless random additions that make no sense except to account for the bonkers results, is not a trivial one to answer, but it is an important one. Take the classic switch to placing the sun at the center of the solar system, rather than the earth. Absolutely correct move. It also initially led to absolute nonsense in the predictions, because the old false theory had been patched up so many times to match observations that it could predict quite a bit, while the new theory, being wrong about a huge number of other factors in how planets move, was totally off.
If you put the sun in the center, but assume planets run in a perfect circle around it, and have not got the faintest idea how gravity works, the planet’s actual location will be very different from the one you predicted. But the thing that is wrong here is not your idea that the sun ought to be in the center; it is the idea that a planet circles the sun in a perfect circle. In practice, figuring out which of your assumptions led to the mess is not that easy, but it really has to be done in the long run.
Imre Lakatos made a decent attempt at tracing this, also integrating Thomas Kuhn’s excellent ideas on paradigm shifts in science.
Almost half of respondents to the poll (46%) are neutral or positive towards quantum theories of consciousness. That’s not a decisive verdict in either direction.
De facto, it is. And honestly, the way you are presenting this, through how you are grouping it, misrepresents the result. Of the ten theories or theory clusters evaluated, the entire group of quantum theories fares worst by a significant margin, to a degree that makes it clear that there won’t be significant funding or attention going here. You are making it appear less bad by grouping together the minuscule number of people who actually said this theory definitely held promise (which looks to be about 1%) and the people who thought it probably held promise (about 15%) with the much larger number of people who selected “neutral on whether this theory is promising”, while ignoring that this theory got by far the highest number of people saying “definitely no promise”. Like, look at the visual representation, in the context of the other theories.
And why do a significant number of people say “neutral”? I took this to mean “I’m not familiar enough with it to give a qualified opinion”—which inherently implies that it did not make it to their journals, conferences, university curricula, paper reading lists, etc. enough for them to seriously engage with it, despite it having been around for decades, which is itself an indication of the take the general scientific community had on this—it just isn’t getting picked up, because over and over, people judge it not worth investing in.
Compare how the theories higher up in the ranking have significantly lower numbers of neutral responses; even those researchers who in the end concluded that this was not the right direction after all saw these theories (global workspace, predictive processing, IIT) as worth properly engaging with, based on how the rest of the community framed them. E.g. I think global workspace misses a phenomenon I am most interested in (sentience/p-consciousness), but I do recognise that it had useful things to say about access consciousness which are promising to spell out further. I do think IIT is wrong, but honestly, making a theory mathematically precise enough to be judged as wrong rather than just vague/unclear already constituted promising progress we can learn from. But I share the assessment of my fellow researchers here: no quantum theory ever struck me as promising enough for me to even sit down for a couple of workdays to work my way through it. (I wondered whether this was because I subconsciously judged quantum phenomena to be too hard, so I once showed one to my girlfriend, a postdoc who works in quantum physics in academia for a living… and whose assessment was, you guessed it, “This is meaningless, where are the equations? … Oh dear God, what is up with this notation? What, this does not follow! What is that supposed to even mean? … I am sorry, do you really need me to look at this? This nonsense does not seem worth my time”.) If a conference offered a talk on quantum theories and another on something else, I’d almost certainly go to the other one; and if the other one was also lame/unpromising, I’d be more likely to retreat to my hotel room to meditate or work out to be energised for a later talk, take my reMarkable and dig into my endless reading list of promising papers, or grab a coffee and hit up a colleague in the mingle area about an earlier awesome talk and potential collaboration.
There is so much promising stuff to keep up with, so much to learn, to practice, to write out, to teach, to support, to fund, and so little money and time, that people just cannot afford to engage with things that do not seem promising.
If over half the scientific community judges something to not be worth pursuing (so they have decided, at minimum, not to engage with it or actively support it), of which half are so strongly opposed to pursuing it that they will typically actively block funding allocations or speaking slots or publications in this direction as a waste of resources and a diversion, and the majority of the remainder are not interested enough to even have an opinion, while the number of genuine supporters is also minuscule… this is not the sign of a theory that is going anywhere. A paradigm-shifting theory might have significant opposition, but it also has significant, proud, and vibrant supporters, and most people have an opinion on it. This is really clearly not the case here. Instead, it occupies the horrible middle ground between ridiculed and ignored, which amounts to death in science. Frankly, I was surprised it even made it onto the survey, and I wondered if they just put it on there to make clear what the research community thought on the issue; I doubt they will still have it on the next one.
So what do you make of there being a major consciousness conference just a few days from now, with Anil Seth and David Chalmers as keynote speakers, in which at least 2 out of 9 plenary sessions have a quantum component?
Of the nine plenary sessions, I see one explicitly on quantum theories. Held by the anesthesiologist Stuart Hameroff himself, who I assume was invited by… the organiser and center director, Stuart Hameroff.
Let me quote literal Wikipedia on this conference here: “The conference and its main organizers were the subject of a long feature in June 2018, first in the Chronicle of Higher Education, and re-published in The Guardian. Tom Bartlett concluded that the conference was “more or less the Stuart [Hameroff] Show. He decides who will and who will not present. [...] Some consciousness researchers believe that the whole shindig has gone off the rails, that it’s seriously damaging the field of consciousness studies, and that it should be shut down.”
For context, the Stuart Hameroff mentioned here is well known for being a quantum proponent, has been pushing for this since the ’80s, and has been very, very broadly criticised for this for a long time, without that going much of anywhere.
I assume Chalmers agreed to go because when this conference first started, Chalmers was a founding part of it, and it was really good back then; but you’d have to ask him.
I’d be pleased to be wrong; maybe they have come up with totally novel evidence, we will understand a whole lot more about consciousness via quantum effects, and we will feel bad for having dismissed him. But I am not planning on being there to check personally; I have too much other stuff to do that I am overwhelmed with, and I really try to avoid flying when I can help it. Unsure how many others that is true of; the Wikipedia article has the interesting “Each conference attracts hundreds[citation needed] of attendees.” note. If the stuff said there is genuinely new and plausible enough to warrant re-evaluation, I expect it will make it round the grapevine. Which was the point I was making.
I have actually worked with Stuart Hameroff! So I should stop being coy: I pay close attention to quantum mind theories, I have specific reasons to take them seriously, and I know enough to independently evaluate the physics component of a new theory when it shows up. This is one of those situations where it would take something much more concrete than an opinion poll to affect my views.
But if I were a complete outsider, trying to judge the plausibility of such a hypothesis, solely on the basis of the sociological evidence you’ve provided… I hope I’d still only be mildly negative about it? In the poll, only 50% of the researchers expressly disapprove. A little investigation reveals that there are two conferences, TSC and ASSC; that ASSC allows a broader range of topics than TSC; and that quantum mind theories are absent from ASSC, but have a haven at TSC because the main organizer favors them. ASSC can say quantum mind is being artificially kept alive by an influential figure, TSC can say he’s saving it from the prejudice of professional groupthink.
(By the way, the other TSC plenary that I counted as partly quantum is “EM & Resonance Theories”, because it’s proposing to ground consciousness in a fundamental physical field.)
What specific reasons do you have to take them seriously?
The main reason is the fuzzy physical ontology of standard computational states, and how that makes them unsuitable as the mereological base for consciousness. When we ascribe a computational state to something like a transistor, we’re not talking about a crisply objective property. The physical criterion for standard computational ontology is functional: if the device performs a certain role reliably enough, then we say it’s in a 0 state, or a 1 state, or whatever. But physically, there are always possible edge states, in which the performance of the computational role is less and less reliable. It’s a kind of sorites problem.
For engineering, the vagueness of edge states doesn’t matter, so long as you prevent them from occurring. Ontology is different. If something has an observer-independent existence, then for all possible states, either it’s there or it’s not. Consciousness must satisfy this criterion, standard computational states cannot, therefore consciousness cannot be founded on standard computational states.
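To make the sorites point concrete, here is a toy sketch of how a functional criterion assigns computational states to a transistor output voltage. The thresholds are loosely inspired by classic 5 V TTL input levels, but the specific numbers are illustrative assumptions; the point is the undefined band, not the values:

```python
def logical_state(voltage: float) -> str:
    """Map an output voltage to a computational state.

    Thresholds are illustrative (roughly TTL-style); the point is
    the band in between, where the functional role is unreliable.
    """
    if voltage <= 0.8:        # reliably read as logical 0
        return "0"
    if voltage >= 2.0:        # reliably read as logical 1
        return "1"
    return "undefined"        # edge states: the classification gives out

# Every physical voltage is perfectly exact, but the *computational*
# classification has a fuzzy region with no objective fact of the matter.
print(logical_state(0.3))   # "0"
print(logical_state(1.4))   # "undefined"
print(logical_state(3.1))   # "1"
```

Engineering simply keeps the device out of the middle band; the ontological worry is that the band exists at all.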
For me, this provides a huge incentive to look for quantum effects in the brain being functionally relevant to cognition and consciousness—because the quantum world introduces different kinds of ontological possibilities. Basically one might look for reservoirs of entanglement, that are coupled to the classical computational processes which form the whole of present-day cognitive neuroscience. Candidates would include various collective modes of photons, electrons, phonons, in cytoplasmic water or polymeric structures like microtubules. I feel like the biggest challenge is to get entanglement on a scale larger than the individual cell; I should look at Michael Levin’s stuff from that perspective some time.
Just showing that entanglement matters at some stage of cognition doesn’t solve my vagueness problem, but it does lead to new mereological possibilities, that appear to be badly needed.
Should I infer that you don’t believe in many worlds?
Many worlds is an ontological possibility. I don’t regard it as favored ahead of one-world ontologies. I’m not aware of a fully satisfactory, rigorous, realist ontology, even just for relativistic QFT.
Is there a clash between many worlds and what you quoted?
I was thinking that “either it’s there or it’s not” as applied to a conscious state would imply you don’t think consciousness can be in an entangled state, or something along those lines.
But reading it again, it seems like you are saying consciousness is discontinuous? As in, there are no partially-conscious states? Is that right?
I’m also unaware of a fully satisfactory ontology for relativistic QFT, sadly.
Gradations of consciousness, and the possibility of a continuum between consciousness and non-consciousness, are subtle topics; especially when considered in conjunction with concepts whose physical grounding is vague.
Some of the kinds of vagueness that show up:
Many-worlders who are vague about how many worlds there are. This can lead to vagueness about how many minds there are too.
Sorites-style vagueness about the boundary in physical state space between different computational states, and about exactly which microphysical entities count as part of the relevant physical state.
(An example of a microphysically vague state which is being used to define boundaries, is the adaptation of “Markov blanket” by fans of Friston and the free energy principle.)
I think a properly critical discussion of vagueness and continuity, in the context of the mind-brain relationship, would need to figure out which kinds of vagueness can be tolerated and which cannot; and would also caution against hiding bad vagueness behind good vagueness.
Here I mean that sometimes, if one objects to basing mental ontology on microphysically vague concepts of Everett branch or computational state, one is told that this is OK because there’s vagueness in the mental realm too—e.g. vagueness of a color concept, or vagueness of the boundary between being conscious and being unconscious.
Alternatively, one also hears mystical ideas like “all minds are One” being justified on the grounds that the physical world is supposedly a continuum without objective boundaries.
Sometimes, one ends up having to appeal to very basic facts about the experienced world, like, my experience always has a particular form. I am always having a specific experience, in a way that is unaffected by the referential vagueness of the words or concepts I might use to describe it. Or: I am not having your experience, and you are not having mine, the implication being that there is some kind of objective difference or boundary between us.
To me, those are the considerations that can ultimately decide whether a particular proposed psychophysical vagueness is true, possible, or impossible.
“I pay close attention to quantum mind theories, I have specific reasons to take them seriously”
Now I am curious. What specific reasons?
Say I had an hour of focus to look into this one of these days. Can you recommend a paper or something similar I could read in that hour that should leave me convinced enough to warrant digging into this more deeply? Like, an overview of the central pieces of evidence and arguments for quantum effects being crucial to consciousness, with links so one can review the logic and data in detail if sceptical; a hint at what profound implications this would have for ethics, theory, and empirical methods; and brief rebuttals to common critiques, with links to more comprehensive ones if not immediately convincing? Something with math to make it precise? It doesn’t have to (and can’t) cover everything, of course, but enough that after an hour, I’d have reason to suspect that they are onto something that cannot easily be otherwise explained, that their interpretation is plausible, and that if they are right, this really matters; so that I would be intrigued enough to then decide to invest more time, and know where to continue looking?
If there is genuine evidence (or at least a really good, plausible argument to be made for) quantum effects playing a crucial role for consciousness, I would really want and need to know. It would matter for issues I am interested in, like the resolution necessary in scanning and the functionality necessary in the resulting process for uploading to be successful, and independently for evaluating sentience in non-human agents. It intuitively sounds like crucial quantum effects would massively complicate progress in these issues, so I would want good reason to assume that this is actually necessary. But if we cannot make proper progress without it, no matter how annoying it will be to compute, and how unpopular it is, I would want to know.
Originally I was going to answer your question with another question—what kind of relation do you think exists between fundamental physical properties of the brain and (let’s say) phenomenal properties? I’m not asking for biological details, but rather for a philosophical position, about reduction or emergence or whatever. Since you apparently work in consciousness studies, you possibly have quite precise opinions on philosophy of mind; and then I could explain myself in response to those.
But I already discussed my views with @Adele Lopez in the other thread, so I may as well state them here. My main motivation is ontological—I think there is a problem in principle with any attempt to identify (let’s say) phenomenal properties, with physical properties of the brain that are not microphysically exact.
If a physical property is vague, that means there are microphysically exact states where there is no objective fact about whether or not the vague physical property holds—they’re on the fuzzy edge of belonging or not belonging to that classification.
But if the properties constitutive of consciousness are identified with vague physical properties of the brain, that means that there are specific physical states of the brain, where there is no objective fact about e.g. whether or not there is a consciousness present. And I regard that as a reductio ad absurdum, of whatever premise brought you to that conclusion.
Possibly this argument exists in the literature, but I don’t have a reference.
If you do think it’s untenable to reduce consciousness to computational states which are themselves vague coarse-grainings of exact physical states, then you have an incentive to consider quantum mind theories. But certainly the empirical evidence isn’t there yet. The most advanced quantum phenomenon conventionally believed to be at work in biology, is quantum coherence in chlorophyll, and even there, there isn’t quite consensus about its nature or role.
Empirically, I think the verdict on quantum biology is still “not proven”—not proved, and not disproved. The debate is mostly theoretical, e.g. about whether decoherence can be avoided. The problem is that quantum effects are potentially very subtle (the literature on quantum coherence in chlorophyll again illustrates this). It’s not like the statistics of observable behaviors of neurons tells us all the biophysical mechanisms that contribute to those behaviors. For that we need intimate biophysical knowledge of the cell that doesn’t quite exist.
Mh. I am not sure I follow. Can I give an analogy, and you tell me whether it holds or not?
I work on consciousness. As such, I am aware that individual human minds are very, very complicated and confusing things.
But in the past, I have also worked on human crowd dynamics. Every single human in a human crowd is one of these very complicated human things with their complicated conscious minds. Every one is an individual. Every single one has a distinct experience affecting their behaviour. They turn up at the crowd that day with different amounts of knowledge, and intentions, and strength, and all sorts of complicating factors. Like, hey, maybe they have themselves studied crowd dynamics, and wish to use this knowledge to keep safe.
But if I zoom out, and look at the crowd as a whole, and want to figure out e.g. whether there will be a stampede… I do not actually need to know any of that. A dense human crowd starts acting very much like a liquid. Tell me how dense it is, tell me how narrow the corridors are through which it will be channeled… and we can say whether people will likely get trampled, or even certainly get trampled. Not which one will be trampled, but whether there will be a trampling. I can say: if we implement a barrier here, the people will spill around there; if we close a door here, people will pile up there; if we let more people enter here, the force will get intolerable over there. Basically, I can easily model the macro effects of the whole system while entirely ignoring the micro effects. Because they even out. Because the individual randomness of the humans does not change the movement of the crowd as a whole. And if a grad student said, “but shouldn’t we be interviewing all the individual people about their intentions for how they want to move today”, I would say absolutely hard no, that is neither necessary nor helpful, but a huge time sink.
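As a toy illustration of that macro-level modelling: densities around 5–6 people/m² are a commonly cited danger threshold in the crowd-safety literature, though the exact cutoffs and the corridor numbers below are illustrative assumptions, not a real safety model:

```python
def crush_risk(people: int, corridor_width_m: float, corridor_length_m: float) -> str:
    """Classify crush risk purely from crowd density.

    No individual intentions, knowledge, or psychology enter the model:
    above a critical density the crowd behaves like a fluid, and risk
    becomes predictable from geometry and headcount alone.
    """
    area = corridor_width_m * corridor_length_m
    density = people / area              # people per square metre
    if density < 4.0:
        return "free flow"
    if density < 6.0:                    # ~5-6 /m^2: often cited as dangerous
        return "dangerous crowding"
    return "crush likely"

# Halving the corridor width doubles the density and flips the verdict,
# regardless of who the individual people in the crowd are.
print(crush_risk(600, 4.0, 30.0))   # density 5.0  -> "dangerous crowding"
print(crush_risk(600, 2.0, 30.0))   # density 10.0 -> "crush likely"
```

The micro-level detail (each person's mind) is deliberately absent from the function signature; that is the whole point of the analogy.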
Similarly, I know that atoms are not, at all, simply little billiard balls that just vibrate more and push further away from each other if you make them warmer, like we are shown in primary school. There are a lot of chemical and physical effects where that is very important to know. But if I just want to model whether heating the contents of my pressure pot to a certain temperature will make it explode? Doesn’t matter at all. I can assume, for simplicity’s sake, that atoms are little billiard balls, and be perfectly fine. If I added more information, my prediction would not get better. I might actually end up with so much confusion that I can’t predict anything at all, because I never finish the math. I also know that Newton’s ideas were tragically limited compared to Einstein’s, and if I were to build a space rocket, I would certainly want proper physics accounting for relativity. But if I am just playing billiards, with everyone involved on earth, and the balls moving insanely slowly compared to the speed of light? I’ll be calculating trajectories with Newton, and not feeling the slightest bit guilty. You get the idea.
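The billiards point can be checked with one line of arithmetic: the relativistic correction factor γ = 1/√(1 − v²/c²) for any speed a billiard ball can reach is indistinguishable from 1 (the ball speed below is my own illustrative number):

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact, by definition of the metre)

def lorentz_gamma(v: float) -> float:
    """Relativistic time-dilation / correction factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A hard break shot moves a billiard ball at roughly 10 m/s.
gamma = lorentz_gamma(10.0)

# gamma - 1 is on the order of (v/c)^2 / 2, i.e. about 5e-16 here:
# far below anything felt, chalk, spin, or a shaky hand contributes.
print(gamma - 1.0)
```

So Newton's equations are not "falsified" for the billiards table in any practically meaningful sense; the correction is drowned out by every other source of error.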
I see consciousness as an emergent phenomenon, but in a very straightforward sense of the word, the way that, say, crowd violence is an emergent phenomenon. Not magical or beyond physics. And I suspect there comes a degree of resolution in the underlying substrate where it ceases to matter for the macroscopic effect, where figuring it out is just detail that will cause extra work and confuse everyone; we already have a horrible issue in biology with people getting so buried beneath details that we get completely stuck. I don’t think it matters exactly how many neurotransmitter molecules are poured into the gap, but just whether the neuron fires or not as a result, for example. So I suspect that that degree of resolution is crossed far before we reach the quantum level, with whole groups of technically very different things being grouped as effectively the same for our intents and purposes. So every macroscopic state would have a defined designation as conscious or not, but beneath that, a lot of very different stuff would be grouped together. But there would be no undefined states, per se. The conscious system would be the one where, to take a common example, information has looped around and back to the same neuron, regardless of how exactly it did so.
But I say all this while not having a good understanding of quantum physics at all, so I am really sorry if I got you wrong.
But the actual states of things are microscopic. And from a microscopic perspective, macroscopic states are vague. They have edge cases, they have sorites problems.
For crowds, or clouds, this doesn’t matter. That these are vague concepts does not create a philosophical crisis, because we have no reason to believe that there is an “essence of crowd” or “essence of cloud”, that is either present or not present, in every possible state of affairs.
Consciousness is different—it is definitely, actually there. As such, its relationship to the microphysical reality cannot be vague or conventional in nature. The relationship has to be exact.
So by my criteria, the question is whether you can define informational states, and circulation of information, in such a way that from a microphysical perspective, there is never any ambiguity about whether they occurred. For all possible microphysical states, you should be able to say whether or not a given “informational state” is present. I’m not saying that every microphysical detail must contribute to consciousness; but if consciousness is to be identified with informational states, informational states have to have a fully objective existence.