Well, post #2 is about conscious awareness so it gets the closest, but you only really talk about how there is a serial processing stream in the brain whose contents roughly correspond to what we claim is in awareness—which I’d argue is just the coarse functional behavior, i.e., the macro problem. This doesn’t seem very related to the hard meta problem because I can imagine either one of the problems existing without the other. I.e., I can imagine that (a) people do claim to be conscious but in a very different way, and (b) people don’t claim to be conscious, but their high-level functional recollection does match the model you describe in the post. And if that’s the case, then by definition they’re independent. … if you actually ask camp #2 people, I think they’ll tell you that the problem isn’t really about the macro functional behavior of awareness
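Thanks for the detailed comment!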
The way intuitive models work (I claim) is that there are concepts, and associations / implications / connotations of those concepts. There’s a core intuitive concept “carrot”, and it has implications about shape, color, taste, botanical origin, etc. And if you specify the shape, color, etc. of a thing, and they’re somewhat different from most normal carrots, then people will feel like there’s a question “but now is it really a carrot?” that goes beyond the complete list of its actual properties. But there isn’t, really. Once you list all the properties, there’s no additional unanswered question. It just feels like there is. This is an aspect of how intuitive models work, but it doesn’t veridically correspond to anything of substance.
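The old Yudkowsky post “How An Algorithm Feels From Inside” is a great discussion of this point.

(If a toy sketch helps, here’s the claim in Python form. Every name and property value below is made up for illustration; the point is just that once the property list is written down, there’s no leftover slot for “really a carrot”.)

```python
from dataclasses import dataclass

@dataclass
class Thing:
    """A thing, exhaustively described by its properties (toy model)."""
    shape: str
    color: str
    taste: str
    botanical_origin: str

# The intuitive concept "carrot" is just a bundle of typical property values.
PROTOTYPICAL_CARROT = Thing("tapered", "orange", "sweet and earthy", "Daucus carota taproot")

def carrot_similarity(x: Thing) -> float:
    """Fraction of properties matching the prototype."""
    pairs = zip(vars(x).values(), vars(PROTOTYPICAL_CARROT).values())
    return sum(a == b for a, b in pairs) / len(vars(x))

weird_thing = Thing("tapered", "purple", "sweet and earthy", "Daucus carota taproot")
print(carrot_similarity(weird_thing))  # 0.75

# Note what is *not* in the model: there is no hidden field `is_really_a_carrot`.
# Once the properties are listed, there is nothing further to answer; the feeling
# that a further question remains is a fact about the intuitive concept, not about
# the thing.
```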
So anyway, if “consciousness” has connotations / implications A,B,C,D,E, etc. (it’s “subjective”, it goes away under general anesthesia, it’s connected to memory, etc.), then people will feel like there’s an additional question “but is it really consciousness”, that still needs to be answered, above and beyond the specific properties A,B,C,D,E.
And likewise, if you ask a person “Can you imagine something that lacks A,B,C,D,E, but still constitutes ‘consciousness’”, then they may well say “yeah I can imagine that”. But we shouldn’t take that report to be particularly meaningful.
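(See also Frankish’s “Quining Diet Qualia” (2012).)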
Copying the above terminology, we could phrase the hard problem of seeing as explaining why people see images, and the hard meta problem of seeing as explaining why people claim to see images.
As in Post 2, there’s an intuitive concept that I’m calling “conscious awareness” that captures the fact that the cortex has different generative models active at different times. Different parts of the cortex wind up building different kinds of models—S1 builds generative models of somatosensory data, M1 builds generative models of motor programs, and so on. But here I want to talk about the areas in the overlap between the “ventral visual stream” and the “global workspace”, which is mainly in and around the inferior temporal gyrus, “IT”.
When we’re paying attention to what we’re looking at, IT would have some generative model active that optimally balances between (1) priors about the visual world, and (2) the visual input right now. Alternatively, if we’re zoning out from what we’re looking at, and instead using visual imagination or visual memory, then (2) is off (i.e., the active IT model can be wildly incompatible with immediate visual input), but (1) is still relevant, and instead there needs to be consistency between IT and episodic memory areas, or various other possibilities.
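(Here’s a cartoon of that balancing act, written as a simple precision-weighted average. This is purely an illustrative toy; the function and its numbers are invented for the sketch, not a claim about the actual algorithm running in IT.)

```python
import numpy as np

def it_model_update(prior_mean, prior_precision, visual_input, input_precision):
    """Toy precision-weighted blend of (1) priors and (2) current visual input."""
    total = prior_precision + input_precision
    return (prior_precision * prior_mean + input_precision * visual_input) / total

prior = np.array([1.0, 0.0])    # roughly, "what IT expects to be out there"
glimpse = np.array([0.2, 0.9])  # roughly, "what bottom-up vision says right now"

# Paying attention to the world: the input gets lots of weight, and the active
# model is a compromise between prior and input.
print(it_model_update(prior, 1.0, glimpse, 4.0))   # [0.36 0.72]

# Zoning out / visual imagination: the input weight is ~0, so the active model
# is driven by priors (and, not modeled here, by consistency with episodic
# memory), and can be wildly incompatible with what is hitting the retina.
print(it_model_update(prior, 1.0, glimpse, 0.0))   # [1. 0.]
```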
So anyway,

- In the territory, “Model A is currently active in IT” is a very different situation from “Model B is currently active in the superior temporal gyrus” or whatever.
- Correspondingly, in the map, we wind up with the intuition that “X is in awareness as a vision” is very different from “Y is in awareness as a sound”, and both are very different from “Z is in awareness as a plan”, etc.
You brought up blindsight. That would be where the model “X is in awareness as a vision” seems wrong. That model would entail a specific set of predictions about the state of IT, and it turns out that those predictions are false. However, some other part of awareness is still getting visual information via some other pathway. (Visual information gets into various parts of the cortex via more than one pathway.) So the blindsight patient might describe their experience as “I don’t see anything, but for some reason I feel like there’s motion on the left side”, or whatever. And we can map that utterance into a correct description of what was happening in their cortex.
Separately, as for the hard problem of consciousness, you might be surprised to learn that I actually haven’t thought about it much and still find it kinda confusing. I had written something into an early draft of post 1 but wound up deleting it before publication. Here’s what it said:
Start with an analogy to physics. There’s a Stephen Hawking quote I like:
> “Even if there is only one possible unified theory, it is just a set of rules and equations. What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to all the bother of existing?”
I could be wrong, but Hawking’s question seems to be pointing at a real mystery. But as Hawking says, there seems to be no possible observation or scientific experiment that would shed light on that mystery. Whatever the true laws of physics are in our universe, every possible experiment would just confirm, yup, those are the true laws of physics. It wouldn’t help us figure out what if anything “breathes fire” into those laws. What would progress on the “breathes fire” question even look like?? (See Tegmark’s Mathematical Universe book for the only serious attempt I know of, which I still find unsatisfying. He basically says that all possible laws of the universe have fire breathed into them. But even if that’s true, I still want to ask … why?)
By analogy, I’m tempted to say that an illusionist account can explain every possible experiment about consciousness, including our belief that consciousness exists at all, and all its properties, and all the philosophy books on it, and so on … yet I’m tempted to still say that there’s some “breathes fire” / “why is there something rather than nothing” type question left unanswered by the illusionist account. This unanswered question should not be called “the hard problem”, but rather “the impossible problem”, in the sense that, just like Hawking’s question above, there seems to be no possible scientific measurement or introspective experiment that could shed light on it—all possible such data, including the very fact that I’m writing this paragraph, are already screened off by the illusionist framework.
> The way intuitive models work (I claim) is that there are concepts, and associations / implications / connotations of those concepts. There’s a core intuitive concept “carrot”, and it has implications about shape, color, taste, botanical origin, etc. And if you specify the shape, color, etc. of a thing, and they’re somewhat different from most normal carrots, then people will feel like there’s a question “but now is it really a carrot?” that goes beyond the complete list of its actual properties. But there isn’t, really. Once you list all the properties, there’s no additional unanswered question. It just feels like there is. This is an aspect of how intuitive models work, but it doesn’t veridically correspond to anything of substance.
Mhhhmhh. Let me see if I can work with the carrot example to where it fits my view of the debate.
A botanist is charged with filling a small field with plants, any plants. A chemist hands him a perfect plastic replica of a carrot, perfect in shape, color, texture, and (miraculously) taste. The botanist says that it’s not a plant. The chemist, who has never seen plants other than carrots, points out the matching qualities to the plants he knows. The botanist says okay, but those are just properties that a particular kind of plant happens to have; they’re not the integral property of what makes something a plant. “The core intuitive concept ‘plant’ has implications about shape, color, texture, taste, et cetera”, says the chemist. “If all those properties are met, people may think there’s an additional question about the true plant-ness of the object, but [...].” The botanist points out that he is not talking about an intangible, immeasurable, or non-physical property but rather about the fact that this carrot won’t grow and spread seeds when planted into the earth. The chemist, having conversed extensively with people who define plants primarily by their shape, color, texture, and taste (which are all those of carrots, because they’ve also not seen other plants), just sighs, rolling his eyes at the attempt to redefine plant-ness to be entirely about this one obscure feature that also just happens to be the most difficult one to test.
Which is to say that I get—or at least I think I get—the sense that we’re successfully explaining important features of consciousness, and that the case for linking it to anything special is clearly diminishing—but I don’t think it’s correct. When I say that the hard meta problem of seeing probably contains ~90% of the difficulty of the hard meta problem of consciousness, whereas the meta problem of free will contains 0% and the problem of awareness ~2%, I’m not changing my model in response to new evidence. I’ve always thought Free Will was nonsense!
(The botanist separately points out that there are in fact other plants with different shape, texture, and taste, although they all do have green leaves, to which the chemist replies that ?????. This is just to come back to the point that people report advanced meditative states that lose many of the common properties of consciousness, including Free Will, the feeling of having a self (I’ve experienced that one!) and even the presence of any information content whatsoever, and afaik they tend to be more “impressed”, roughly speaking, with consciousness as a result of those experiences, not less.)
Seeing stuff
Attempt to rephrase: the brain has several different intuitive models in different places. These models have different causal profiles, which explains how they can correspond to different introspective reports. One model corresponds to the person talking about smelling stuff. Another corresponds to the person talking about seeing stuff. Yet another corresponds to the person talking about obtaining vague intuitions about the presence and location of objects. The latter two are triggered by visual inputs. Blindsight turns off the second but not the third.
If this is roughly correct, my response to it is that proposing different categories isn’t enough because the distinction between visually vivid experience and vague intuitions isn’t just that we happen to call them by different labels. (And the analogous thing is true for every other sensory modality, although the case is the least confusing with vision.) Claiming to see a visual image is different from claiming to have a vague intuition in all the ways that it’s different; people claim to see something made out of pixels, which can look beautiful or ugly, seems to have form, depth, spatial location, etc. They also claim to perceive a full visual image constantly, which presumably isn’t possible(?) since it would contain more information than can actually be there, so a solution has to explain how this illusion of having access to so much information is possible. (Is awareness really a serial processor in any meaningful way if it can contain as much information at once as a visual image seems to contain?)
(I didn’t actually intend to get into a discussion about any of this, though; I was just using it as a demonstration of why I think the hard meta problem of consciousness has at least one real subset and hence isn’t empty.)
Hard Problem
Yeah, I mean, since I’m on board with reducing everything to the meta problem, the hard problem itself can just be sidestepped entirely.
But since you brought it up, I’ll just shamelessly use this opportunity to make a philosophical point that I’ve never seen anyone else make, which is that imo the common belief that no empirical data can help distinguish an illusionist from a realist universe… is actually false! The reason is that consciousness is a high-level phenomenon in the illusionist universe and a low-level phenomenon in at least some versions of the realist universe, and we have different priors for how high-level vs. low-level phenomena behave.
The analogy I like is, imagine there’s a drug that makes people see ghosts, and some think these ghosts tap into the fundamental equations of physics, whereas others think the brain is just making stuff up. One way you can go about this is to have a thousand people describe their ghosts in detail. If you find that the brightness of hallucinated ghosts is consistently proportional to their height, then you’ve pretty much disproved the “the brain is just making stuff up” hypothesis. (Whereas if you find no such relationships, you’ve strengthened the hypothesis.) This is difficult to operationalize for consciousness, but I think determining the presence or absence of elegant mathematical structure within human consciousness is, at least in principle, an answer to the question of “[w]hat would progress on the ‘breathes fire’ question even look like”.
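(A minimal sketch of the kind of test I have in mind, in Python. The data-generating assumptions, like brightness being exactly ten times height, are invented purely for illustration.)

```python
import random

def ghost_report(world_has_structure: bool):
    """One person's drug-induced ghost report (toy data generator)."""
    height = random.uniform(1.0, 3.0)
    if world_has_structure:
        # the ghosts tap into something lawful: brightness tracks height
        brightness = 10.0 * height + random.gauss(0.0, 0.5)
    else:
        # the brain is just making stuff up: brightness is unrelated to height
        brightness = random.uniform(0.0, 40.0)
    return height, brightness

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for structured in (True, False):
    heights, brightnesses = zip(*[ghost_report(structured) for _ in range(1000)])
    print(structured, round(correlation(heights, brightnesses), 2))
# A strong, consistent brightness/height relationship across a thousand reports
# is evidence against "the brain is just making stuff up"; finding no such
# relationship strengthens that hypothesis.
```

Thanks for the comment!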
> people report advanced meditative states that lose many of the common properties of consciousness, including Free Will, the feeling of having a self (I’ve experienced that one!) and even the presence of any information content whatsoever, and afaik they tend to be more “impressed”, roughly speaking, with consciousness as a result of those experiences, not less.
I think that’s compatible with my models, because those meditators still have a cortex, in which patterns of neurons can be firing or not firing at any particular time. And that’s the core aspect of the “territory” which corresponds to “conscious awareness” in the “map”. No amount of meditation, drugs, etc., can change that.
> Attempt to rephrase: the brain has several different intuitive models in different places. These models have different causal profiles, which explains how they can correspond to different introspective reports.
Hmm, I think that’s not really what I would say. I would say that there’s a concept “conscious awareness” (in the map) that corresponds to the fact (in the territory) that different patterns of neurons can be active or inactive in the cortex at different times. And then there are more specific aspects of “conscious awareness”, like “visual awareness”, which corresponds to the fact that the cortex has different parts (motor cortex etc.), and different patterns of neurons can be active or inactive in any given part of the cortex at different times.
…Maybe this next part will help ↓
> the distinction between visually vivid experience and vague intuitions isn’t just that we happen to call them by different labels … Claiming to see a visual image is different from claiming to have a vague intuition in all the ways that it’s different
The contents of IT are really truly different from the contents of LIP [I didn’t check where the visual information gets to the cortex in blindsight, I’m just guessing LIP for concreteness]. Querying IT is a different operation than querying LIP. IT holds different types of information than LIP does, and does different things with that information, including leading to different visceral reactions, motivations, semantic knowledge, etc., all of which correspond to neuroscientific differences in how IT versus LIP is wired up.
All these differences between IT vs LIP are in the territory, not the map. So I definitely agree that “the distinction [between seeing and vague-sense-of-presence] isn’t just that we happen to call them by different labels”. They’re different like how the concept “hand” is different from the concept “foot”—a distinction on the map downstream of a distinction in the territory.
> Is awareness really a serial processor in any meaningful way if it can contain as much information at once as a visual image seems to contain?
I’m sure you’re aware that people feel like they have a broader continuous awareness of their visual field than they actually do. There are lots of demonstrations of this—e.g. change blindness, the selective attention test, the fact that peripheral vision has terrible resolution and terrible color perception and makes faces look creepy. There’s a refrigerator light illusion thing—if X is in my peripheral vision, then maybe it’s currently active as just a little pointer in a tiny sub-area of my cortex, but as soon as I turn my attention to X it immediately unfolds in full detail across the global workspace.
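(If it helps, here’s that “refrigerator light” dynamic as a lazy-evaluation toy in Python. The class and its fields are invented for the sketch; it’s a metaphor, not a model of actual cortex.)

```python
class PeripheralItem:
    """A cheap 'pointer' that only expands into full detail when attended to."""

    def __init__(self, label, expand_fn):
        self.label = label           # all that's active before attention lands on it
        self._expand_fn = expand_fn
        self._details = None

    def attend(self):
        # The "refrigerator light": querying the item is what populates the detail,
        # yet from the inside it feels like the detail was there all along.
        if self._details is None:
            self._details = self._expand_fn()
        return self._details

mug = PeripheralItem("something mug-shaped off to the left",
                     lambda: {"color": "blue", "handle": True, "chipped": False})

print(mug.label)     # what's represented while it sits in the periphery
print(mug.attend())  # full detail shows up exactly when you look
```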
The cortex has 10 billion neurons, which is more than enough to do some things in parallel—e.g. I can have a song stuck in my head in auditory cortex, while tapping my foot with motor cortex, while doing math homework with other parts of the cortex. But there’s also a serial aspect to it—you can’t parse a legal document and try to remember your friend’s name at the exact same moment.
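(And a similarly cartoonish sketch of the parallel-modules-plus-serial-bottleneck picture; again, the task names and timings are made up.)

```python
import threading, queue, time

def hum_song(stop):          # auditory cortex looping a tune in the background
    while not stop.is_set():
        time.sleep(0.01)

def tap_foot(stop):          # motor cortex keeping the beat at the same time
    while not stop.is_set():
        time.sleep(0.01)

stop = threading.Event()
for background_module in (hum_song, tap_foot):
    threading.Thread(target=background_module, args=(stop,), daemon=True).start()

# Meanwhile, workspace-level tasks have to take turns through one serial channel:
workspace = queue.Queue()
workspace.put("parse the legal document")
workspace.put("remember your friend's name")

while not workspace.empty():
    task = workspace.get()   # only one of these is front and center at a time
    print("working on:", task)

stop.set()
```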
Does that help? Sorry if I’m not responding to what you see as most important, happy to keep going. :)
(You did respond to all the important parts, rest of my comment is very much optional.)
> I’m sure you’re aware that people feel like they have a broader continuous awareness of their visual field than they actually do. There are lots of demonstrations of this—e.g. change blindness, the selective attention test, the fact that peripheral vision has terrible resolution and terrible color perception and makes faces look creepy. There’s a refrigerator light illusion thing—if X is in my peripheral vision, then maybe it’s currently active as just a little pointer in a tiny sub-area of my cortex, but as soon as I turn my attention to X it immediately unfolds in full detail across the global workspace.
Yes—and my point was that appealing to these phenomena is the kind of thing you will probably have to do to explain the meta problem of seeing. Which raises all kinds of issues—for example, change blindness by itself doesn’t logically prove anything, since it’s possible not to notice that something changed even if it was represented. Only the reverse conclusion is valid—if a subject can tell that X changed, then X was in awareness, but if they can’t tell, X may or may not have been in awareness. So teasing out exactly how much information is really present in awareness, given the positive and negative evidence, is a pretty big rabbit hole. (Poor resolution in peripheral vision does prove absence of information, but as with the memory example I’ve complained about in post #2, this is an example of something people don’t endorse under reflection anyway, so it doesn’t get you very far. Like, there is a very, very big difference between arguing that peripheral resolution is poor, which people will agree with as soon as they actually pay attention to their peripheral vision for the first time, and arguing that the continuous visual image they think they see is not really there, which most people will stubbornly disagree with regardless of how much attention they pay to it.)
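(To spell out that asymmetry, here’s a toy simulation. The probabilities are arbitrary; the only structural assumption is that detecting a change requires the changed item to have been represented.)

```python
import random

def trial():
    represented = random.random() < 0.5               # was X actually in awareness?
    noticed = represented and random.random() < 0.3   # detecting requires representation
    return represented, noticed

trials = [trial() for _ in range(100_000)]
detected = [rep for rep, det in trials if det]
missed = [rep for rep, det in trials if not det]

print(all(detected))              # True: "they noticed the change" implies it was represented
print(sum(missed) / len(missed))  # ~0.41: "they missed it" leaves the question wide open
```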
Anyway, that’s the only claim I was making—I was only trying to go as far as “this is why I think the problem is nontrivial and you haven’t solved it yet”, not “and that’s why you can’t solve it”.
> The contents of IT are really truly different from the contents of LIP [I didn’t check where the visual information gets to the cortex in blindsight, I’m just guessing LIP for concreteness]. Querying IT is a different operation than querying LIP. IT holds different types of information than LIP does, and does different things with that information, including leading to different visceral reactions, motivations, semantic knowledge, etc., all of which correspond to neuroscientific differences in how IT versus LIP is wired up.
>
> All these differences between IT vs LIP are in the territory, not the map. So I definitely agree that “the distinction [between seeing and vague-sense-of-presence] isn’t just that we happen to call them by different labels”. They’re different like how the concept “hand” is different from the concept “foot”—a distinction on the map downstream of a distinction in the territory.
Right, and I agree that this makes it a priori plausible that they could account for the differences in how people talk about, e.g., vivid seeing vs. intangible intuitions. But it doesn’t prove that they do; it only shows that this is the kind of explanation that, at first glance, looks like it could work. To actually solve the meta problem, you still have to do the work of explaining all the properties of introspective reports, which requires going into a lot of detail.
As above, this is the only claim I was making—I’m not saying any of these issues are provably impossible with your approach; I’m only saying that your approach hasn’t provided a full solution yet. (And that I genuinely think most of the difficulty happens to be in these still unaddressed details; this was the point of the carrot/plant analogy.)
> I think that’s compatible with my models, because those meditators still have a cortex, in which patterns of neurons can be firing or not firing at any particular time. And that’s the core aspect of the “territory” which corresponds to “conscious awareness” in the “map”. No amount of meditation, drugs, etc., can change that.
Fair enough, but I think it does show that free will isn’t that central of a piece.
> Start with an analogy to physics. There’s a Stephen Hawking quote I like:
>
> > “Even if there is only one possible unified theory, it is just a set of rules and equations. What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to all the bother of existing?”
>
> I could be wrong, but Hawking’s question seems to be pointing at a real mystery. But as Hawking says, there seems to be no possible observation or scientific experiment that would shed light on that mystery. Whatever the true laws of physics are in our universe, every possible experiment would just confirm, yup, those are the true laws of physics. It wouldn’t help us figure out what if anything “breathes fire” into those laws. What would progress on the “breathes fire” question even look like?? (See Tegmark’s Mathematical Universe book for the only serious attempt I know of, which I still find unsatisfying. He basically says that all possible laws of the universe have fire breathed into them. But even if that’s true, I still want to ask … why?)
>
> By analogy, I’m tempted to say that an illusionist account can explain every possible experiment about consciousness, including our belief that consciousness exists at all, and all its properties, and all the philosophy books on it, and so on … yet I’m tempted to still say that there’s some “breathes fire” / “why is there something rather than nothing” type question left unanswered by the illusionist account. This unanswered question should not be called “the hard problem”, but rather “the impossible problem”, in the sense that, just like Hawking’s question above, there seems to be no possible scientific measurement or introspective experiment that could shed light on it—all possible such data, including the very fact that I’m writing this paragraph, are already screened off by the illusionist framework.
Well, hmm, maybe that’s stupid. I dunno.
My provisional answer is “An infinity of FLOPs/compute backs up the equations to make sure it works.”