> I take it that the papers you mention are this and this?
>
> In the Francken survey, several of the questions seem to be about the definition of the word “consciousness” rather than about the phenomenon. A positive answer to the evolution question as stated is practically a tautology, and the consensus over “Mary” and “Explanatory gap” suggests that they think there is something there but that they still don’t know what.
>
> I can only find the word “qualia” once in Northoff & Lamme, but not in a substantial way, so unless they’re using other language to talk about qualia, it seems like if anything, they are going around it rather than through. All the theories of consciousness I have seen, including those in Northoff & Lamme, have been like that: qualia end up being left out, when qualia were the very thing that was supposed to be explained.
>
> For the ancient Greeks, “the star-like thing we see in the morning” (and in the evening—they knew back then that they were the same object) would be a perfectly good characterisation of Venus. We now know more about Venus, but there is no point in debating which of the many things we know about it is “the meaning” of the word “Venus”.
Yes, those are the papers.
On the survey: the claim that consciousness itself fulfils a function that evolution has selected for, while highly plausible, is not obvious, and has been disputed. The common counterargument goes like this: polar bear coats are heavy, so one could ask whether evolution has selected for heaviness. Of course it has not—the weight is detrimental—but it has selected for a coat that keeps a polar bear warm in an incredibly cold environment, and the random process of evolution failed to find a coat that was sufficiently warm but significantly lighter, while also scoring high on other desirable aspects. In this case, the heaviness of the coat is a negative side consequence of a trait that was actually selected for. And we can conceive of coats that are warm, but lighter.
The distinction may seem persnickety, but it isn’t; it has profound implications. In one scenario, consciousness could be a side product, itself valueless, of a development that was actually useful (some neat brain process, perhaps), while consciousness itself plays no functional role. One important implication would be that it would not be possible to identify consciousness based on behaviour, because it would not affect behaviour. This is the idea of epiphenomenalism—basically, that there is a process running in your brain that is what actually matters for your behaviour, but that its running also, on the side, leads to a subjective experience, which is itself irrelevant—just generated, the way a locomotive produces steam. While epiphenomenalism leads you into absolutely absurd territory (zombies), a surprising number of scientists have historically, in essence, subscribed to it, because it allows you to circumvent a bunch of hard questions. You can continue to imagine consciousness as a mysterious, unphysical thing that does not have to be translated into math, because it does not really exist on a physical level—you describe a physical process, and then at some point, you handwave.
However, epiphenomenalism is false. It falls prey to the self-stultification argument: the very fact that we are talking about it implies that it is false. If consciousness has no function, if it is just a side effect that does not itself affect the brain, it cannot affect behaviour. But talking is behaviour, and we are talking intensely about a phenomenon that our brain, which controls the speaking, should have zero awareness of.
Taking this seriously means concluding that consciousness is not produced by a brain process, not the result or side effect of a brain process, but identical with particular kinds of neural/information processing. Which is one of those statements that are easy to agree with (it seems an obvious choice for a physicalist), but that give you a headache when you try to actually understand them (or at least, they give me one). Because it means you can never handwave. You can never have a process on one side and then go “anyhow, and this leads to consciousness arising” as something separate; it means that as you are studying the process, you are looking at consciousness itself, from the outside.
***
Northoff & Lamme, like many neuroscientists, avoid philosophical terminology like the plague, so as a philosopher wanting to use their work, you need to piece together yourself which phenomena they were working towards. Their essential position is that philosophers are people who muck around while avoiding the actual empirical work, and that associating with them is icky. This has the unfortunate consequence that their terminology is horribly vague—Lamme alone uses “consciousness” for all sorts of stuff. As someone who works on visual processing, Lamme also dislikes the word “qualia” for what I think is a more justified reason—the idea that the building blocks of consciousness are individual subjective experiences like “red” is nonsense. Our conscious perception of a lily pond looks nothing like a Monet painting. We don’t consciously see the coloured light hitting our retina as a separate kaleidoscope—we see whole objects, in what we assume are colours corresponding to their surface properties, along with additional information on factors that might make the colour perception unreliable—itself the result of a long sequence of unconscious processing.
That said, he does place qualia in the correct context. A point he makes there is that neural theories that seem to disagree a lot are targeting different aspects of consciousness, but increasingly look like they can be slotted together into a coherent theory. E.g. Lamme’s ideas and global workspace theory have little in common, but they focus on different phases—a distinction that I think corresponds most closely to the distinction between phenomenal and access consciousness. I agree with you that the latter is better understood at this point than the former, though there are good reasons for that—it is incredibly hard to empirically distinguish between precursors of consciousness and the formation of consciousness prior to it being committed to short-term memory, and introspective reports for verification start fucking everything up (because introspecting about the stimulus completely changes what is going on phenomenally and neurally), while no-report paradigms have other severe difficulties.
But we are beginning to narrow down how it works—ineptly, sure. A lot of it amounts to going “ah, this person no longer experiences x, and their brain is damaged in this particular fashion, so something about this damage must have interrupted the relevant process”, while other approaches essentially amount to putting people into controlled environments, showing them specifically varied stimuli, and scanning them to see what changes (with the difficulty that the resolution is terrible, and the further difficulty that people start thinking about other stuff during boring neuroscience experiments). But it is no longer a complete black box.
And I would say that Lamme does focus on the phenomenal aspect of things—like I said, not individual colours, but the subjective experience of vision, yes.
And we have also made progress on qualia (e.g. likely ruling out inverted qualia scenarios): see the work Kawakita et al. are doing, which is being discussed here on Less Wrong: https://www.lesswrong.com/posts/LYgJrBf6awsqFRCt3/is-red-for-gpt-4-the-same-as-red-for-you It’s part of a larger line of research that aims to accurately capture psychophysical descriptions of colour qualia, build phenomenal maps from them, and then look for something correlated in the brain. That still leaves us unsure how and why you see anything at all consciously, but it is progress on why the particular thing you are seeing is green and not blue.
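To make that structural approach a little more concrete: as I understand it, the core move in that line of work is to compare observers’ colour-similarity structures without assuming in advance which colour corresponds to which, and then check whether the best unsupervised alignment recovers the natural colour-to-colour mapping. Here is a minimal toy sketch of the idea in Python. It is my own illustration, not Kawakita et al.’s actual pipeline (they use Gromov-Wasserstein optimal transport; I substitute scipy’s quadratic assignment solver, and the similarity data is invented):

```python
# Toy sketch of structure-based qualia comparison. All data is made up;
# the point is only to show how an inversion is detectable from
# relational structure alone, with no assumed colour correspondence.
import numpy as np
from scipy.optimize import quadratic_assignment

rng = np.random.default_rng(0)
n_colours = 8

# Hypothetical colour-similarity matrix for observer A
# (symmetric, each colour maximally similar to itself).
sim_a = rng.random((n_colours, n_colours))
sim_a = (sim_a + sim_a.T) / 2
np.fill_diagonal(sim_a, 1.0)

# Scenario 1: observer B reports the same structure, up to noise.
sim_b_same = sim_a + rng.normal(0.0, 0.01, sim_a.shape)

# Scenario 2: observer B has two colours "inverted": their similarity
# structure is A's with colours 0 and 1 swapped.
perm = np.arange(n_colours)
perm[[0, 1]] = perm[[1, 0]]
sim_b_inverted = sim_a[np.ix_(perm, perm)]

for label, sim_b in [("same", sim_b_same), ("inverted", sim_b_inverted)]:
    # Find the colour-to-colour mapping that best aligns the two
    # similarity structures, without assuming any correspondence upfront.
    res = quadratic_assignment(sim_a, sim_b, options={"maximize": True})
    print(f"{label}: recovered mapping {res.col_ind}")

# Expected: the "same" case recovers the identity mapping, while the
# "inverted" case recovers the swap of colours 0 and 1.
```

In the real work, the similarity structures come from human similarity judgments over many more colours, which is what makes the recovered mapping (or the failure to recover one) evidence about whether my “red” occupies the same structural position as yours.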
Honestly, my TL;DR is that saying we know nothing about the structure of reality that constitutes consciousness is increasingly unfair in light of how much we now understand. We aren’t done, but we have made tangible progress on the question; we have fragments that are beginning to slot into place. Most importantly, we are moving away from “how this experience arises will be forever a mystery” towards increasingly concrete, solvable questions. I think we started the way the ancient Greeks did—just pointing at what we saw, the “star” in the evening and the “star” in the morning, not knowing what was causing that visual impression, the way we go “I subjectively experience x, but no idea why”—but then progressed to realising that the two had the same origin, then that the origin was not in fact a star, etc. Starting with a perception, and then looking at its origin—but in our case, the origin we were interested in was not the object being perceived, but the process of perception.