Thanks for those thoughts. And also for linking to Kaj’s post again; I finally decided to read it and it’s quite good. I don’t think it helps at all with the hard problem (i.e., you could replace ‘consciousness’ with some other process in the brain that has these properties but doesn’t have the subjective component, and I don’t think that would pose any problems), but it helps quite a bit with the ‘what is consciousness doing’ question, which I also care about.
(Now I’m trying to look at the wall of my room and to decide whether I actually do see pixels or ‘line segments’, which is an exercise that really puts a knot into my head.)
One of the things that makes this difficult is that, whenever you focus on a particular part, it’s probably consistent with the framework that that part gets reported in a lot more detail. If that’s true, then testing the theory requires you to look at the parts you’re not paying attention to, which is… um, tricky.
Maybe evidence here would be something like: do you recognize concepts in your peripheral vision more readily than hard-to-classify things? Actually, I think you do. (E.g., if I move my gaze to the left, I can still kind of see the vertical cable of a light on the wall even though the wall itself seems not visible.)
(Now I’m trying to look at the wall of my room and to decide whether I actually do see pixels or ‘line segments’, which is an exercise that really puts a knot into my head.)
Sorry if I’m misunderstanding what you’re getting at but...
I don’t think there’s any point at which there are signals in your brain that correspond directly to something like pixels in a camera. Even in the retina, there’s supposedly predictive-coding-style data compression going on (I haven’t looked into that in detail). By the time the signals reach the neocortex, they’ve been split into three data streams carrying different types of distilled data: magnocellular, parvocellular, and koniocellular (actually several types of konio, I think), if memory serves. There’s a theory I like about the information-processing roles of magno and parvo; nobody seems to have any idea what the konio information is doing, and neither do I. :-P
But does it matter whether the signals are superficially the same or not? If you do a lossless transformation from pixels into edges (for example), who cares, the information is still there, right?
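To make that ‘lossless transformation’ point concrete, here’s a toy sketch (my own illustration, not a model of anything in the actual visual system): a 1-D ‘edge code’ built from first differences looks nothing like the raw pixels, but it’s perfectly invertible, so all the information is still there.

```python
import numpy as np

# A 1-D "edge code" built from first differences: it looks nothing like
# the raw pixels, but it is lossless, because the original signal can be
# reconstructed exactly by a cumulative sum.

def to_edges(pixels):
    # Keep the first pixel, then store only the changes (the "edges").
    return np.concatenate(([pixels[0]], np.diff(pixels)))

def from_edges(edges):
    # Cumulative sum undoes the differencing, recovering every pixel.
    return np.cumsum(edges)

pixels = np.array([5, 5, 5, 9, 9, 2, 2, 2])
edges = to_edges(pixels)                          # [5, 0, 0, 4, 0, -7, 0, 0]
assert np.array_equal(from_edges(edges), pixels)  # nothing was lost
```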
So then the question is, what information is in (say) V1 but is not represented in V2 or higher layers, and do we have conscious access to that information? V1 has so many cortical columns processing so much data that, intuitively, there has to be compression going on.
I haven’t really thought much about how information compression in the neocortex works per se. Dileep George & Jeff Hawkins say here that there’s something like compressed sensing happening, and Randall O’Reilly says here that there’s error-driven learning (something like gradient descent) making sure that the top-down predictions are close enough to the input. Close on what metric though? Probably not pixel-to-pixel differences … probably more like “close in whatever compressed-sensing representation space is created by the V1 columns”...?
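For what it’s worth, here’s a minimal sketch of the error-driven-learning idea (my own toy illustration under made-up assumptions, not O’Reilly’s actual model): top-down weights predict the input from a compressed code, and each update follows the gradient of the prediction error. Note that using plain squared error in input space is exactly the kind of ‘close on what metric’ choice in question.

```python
import numpy as np

# Toy error-driven learning: top-down weights W predict the "input" x from
# a low-dimensional code z, and each update nudges W down the gradient of
# the squared prediction error. The error metric is an assumption; here it
# is plain squared error in input space.

rng = np.random.default_rng(0)
x = rng.normal(size=20)             # input activity (stand-in for V1 columns)
z = rng.normal(size=5)              # compressed top-down code
W = 0.1 * rng.normal(size=(20, 5))  # top-down prediction weights

lr = 0.02
for _ in range(500):
    pred = W @ z                    # top-down prediction of the input
    err = pred - x                  # prediction-error signal
    W -= lr * np.outer(err, z)      # gradient step on 0.5 * ||pred - x||^2

print(np.mean((W @ z - x) ** 2))    # squared error shrinks toward ~0
```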
Maybe a big part of the data compression is: we only attend to one object at a time, and everything else is lumped together into “background”. Like, you might think you’re paying close attention to both your hand and your pen, but actually you’re flipping back and forth, or else lumping the two together into a composite object! (I’m speculating.) Then the product space of every possible object in every possible arrangement in your field of view is broken into a dramatically smaller disjunctive space of possibilities, consisting of any one possible object in any one possible position. Now that you’ve thrown out 99.999999% of the information by only attending to one object at a time, there’s plenty of room for the GNW to have lots of detail about the object’s position, color, texture, motion, etc.
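Just to put rough numbers on how dramatic that reduction is, here’s some back-of-envelope arithmetic (all the counts are invented purely for illustration): the product space of full arrangements dwarfs the disjunctive ‘one object in one position’ space.

```python
import math

# Back-of-envelope arithmetic for the "one object at a time" compression.
# All counts below are invented purely for illustration.

categories = 10_000      # hypothetical number of recognizable object types
positions = 1_000        # hypothetical number of distinguishable positions
objects_in_view = 10     # hypothetical number of objects in the scene

product_space = (categories * positions) ** objects_in_view  # every full arrangement
one_at_a_time = categories * positions                       # any ONE object, ONE position

print(f"full arrangements:    ~10^{math.log10(product_space):.0f}")
print(f"one object at a time: ~10^{math.log10(one_at_a_time):.0f}")
# ~10^70 vs ~10^7: attending to one object at a time discards essentially
# all of the combinatorial structure, leaving room for rich detail about
# the single attended object.
```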
Not sure how helpful any of this is :-P

For the hard problem of consciousness, the steps in my mind are:
1. GNW -->
2. Solution to the meta-problem of consciousness -->
3. Feeling forced to accept illusionism -->
4. Enthusiastically believing in illusionism.
I wrote the post Book Review: Rethinking Consciousness about my journey from step 1 --> step 2 --> step 3. And that’s where I’m still at. I haven’t gotten to step 4; I would need to think about it more. :-P