my vision appears to me as a continuous field of color and light, not as a highly-compressed and invariant representation of objects.
One thing is: I have an artist friend who said that when he teaches drawing classes, he sometimes has people try to focus on and draw the “negative space” instead of the objects—like, “draw the blob of wall that is not blocked by the chair”. The reason is: most people find it hard to visualize the 3D world as “contours and surfaces as they appear from our viewpoint”; we remember the chair as a 3D chair, not as a 2D projection of a chair, except with conscious effort. The “blob of wall not blocked by the chair” is not associated with a preconception of a 3D object, so we have an easier time remembering what it actually looks like from our perspective.
Another thing is: When I look at a scene, I have a piece of knowledge “that’s a chair” or “this is my room” which is not associated in any simple way with the contours and surfaces I’m looking at—I can’t give it (x,y) coordinates—it’s just sorta a thing in my mind, in a separate, parallel idea-space. Likewise “this thing is moving” or “this just changed” feels to me like a separate piece of information, and I just know it; it doesn’t have an (x,y) coordinate in my field of view. Like those motion illusions that were going around Twitter recently.
Our conscious awareness consists of the patterns in the Global Neuronal Workspace. I would assume that these patterns involve not only predictions about the object-recognition stuff going on in IT but also predictions about a sampling of lower-level visual patterns in V2 or maybe even V1. So then we would get conscious access to something closer to the original pattern of incoming light. Maybe.
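If I had to caricature that in code (pure speculation on my part, nothing anatomically grounded; all the names here are made up), the workspace broadcast would carry both the high-level label from IT and a coarse subsample of a lower-level feature map:

```python
import numpy as np

def workspace_broadcast(v1_features, it_label):
    """Caricature of a GNW snapshot: the high-level percept from IT, plus a
    coarse subsample of a low-level feature map, so that something closer to
    the original pattern of incoming light stays consciously accessible."""
    low_level_sample = v1_features[::8, ::8]  # sparse sampling of the V1-ish map
    return {"percept": it_label, "low_level_sample": low_level_sample}

v1 = np.random.rand(64, 64)  # stand-in for a low-level feature map
print(workspace_broadcast(v1, "chair")["low_level_sample"].shape)  # (8, 8)
```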
I dunno, just thoughts off the top of my head.
Thanks for those thoughts. And also for linking to Kaj’s post again; I finally decided to read it and it’s quite good. I don’t think it helps at all with the hard problem (i.e., you could replace ‘consciousness’ with some other process in the brain that has these properties but doesn’t have the subjective component, and I don’t think that would pose any problems), but it helps quite a bit with the ‘what is consciousness doing’ question, which I also care about.
(Now I’m trying to look at the wall of my room and to decide whether I actually do see pixels or ‘line segments’, which is an exercise that really puts a knot into my head.)
One of the things that makes this difficult is that, whenever you focus on a particular part, it’s probably consistent with the framework that this part gets reported in a lot more detail. If that’s true, then testing the theory requires you to look at the parts you’re not paying attention to, which is… um.
Maybe evidence here would be something like: do you recognize concepts in your peripheral vision more easily than hard-to-classify things? Actually, I think you do. (E.g., if I move my gaze to the left, I can still kind of see the vertical cable of a light on the wall even though the wall itself is barely visible.)
Sorry if I’m misunderstanding what you’re getting at but...
I don’t think there’s any point at which there are signals in your brain that correspond directly to something like pixels in a camera. Even in the retina, there’s supposedly predictive-coding data compression going on (I haven’t looked into that in detail). By the time the signals reach the neocortex, they’ve been split into three data streams carrying different types of distilled data: magnocellular, parvocellular, and koniocellular (actually several types of konio, I think), if memory serves. There’s a theory I like about the information-processing roles of magno and parvo; nobody seems to have any idea what the konio information is doing, and neither do I. :-P
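I haven’t looked into the retinal version, but the generic predictive-coding trick is easy to sketch (a toy illustration of the general idea, not a model of the retina): predict each frame from the previous one and transmit only the residuals, which are mostly zero when the scene isn’t changing:

```python
import numpy as np

def encode(frames):
    """Toy predictive coding: transmit only the residual between each frame
    and a (naive) prediction of it, here 'next frame equals last frame'."""
    prediction = np.zeros_like(frames[0])
    residuals = []
    for frame in frames:
        residuals.append(frame - prediction)  # mostly zeros for a static scene
        prediction = frame
    return residuals

def decode(residuals):
    """Reconstruct the frames exactly by accumulating residuals onto the
    same running prediction the encoder used."""
    prediction = np.zeros_like(residuals[0])
    frames = []
    for r in residuals:
        prediction = prediction + r
        frames.append(prediction)
    return frames

frames = [np.full((4, 4), 7.0), np.full((4, 4), 7.0), np.full((4, 4), 9.0)]
assert all(np.array_equal(a, b) for a, b in zip(decode(encode(frames)), frames))
```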
But does it matter whether the signals are superficially the same or not? If you do a lossless transformation from pixels into edges (for example), who cares, the information is still there, right?
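To make that concrete, here’s a toy example (my own illustration): encode a row of pixels as its first value plus successive differences (a crude “edge” code), then invert it exactly:

```python
import numpy as np

def pixels_to_edges(row):
    """Encode a 1-D row of pixels as (first pixel, successive differences)."""
    return row[0], np.diff(row)

def edges_to_pixels(first, edges):
    """Invert the encoding: cumulative-sum the differences back into pixels."""
    return np.concatenate([[first], first + np.cumsum(edges)])

row = np.array([10, 10, 10, 50, 50, 12, 12, 12])
first, edges = pixels_to_edges(row)
assert np.array_equal(edges_to_pixels(first, edges), row)  # lossless round trip
```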
So then the question is, what information is in (say) V1 but is not represented in V2 or higher layers, and do we have conscious access to that information? V1 has so many cortical columns processing so much data; intuitively, there has to be compression going on.
I haven’t really thought much about how information compression in the neocortex works per se. Dileep George & Jeff Hawkins say here that there’s something like compressed sensing happening, and Randall O’Reilly says here that there’s error-driven learning (something like gradient descent) making sure that the top-down predictions are close enough to the input. Close on what metric though? Probably not pixel-to-pixel differences … probably more like “close in whatever compressed-sensing representation space is created by the V1 columns”...?
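Here’s a cartoon of the error-driven-learning idea (my toy version, and note that it assumes plain pixel-space squared error as the metric, which per the above is probably the wrong metric): adjust a top-down code by gradient descent until its prediction of the input stops improving:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))  # fixed top-down weights: 4-dim code -> 16-dim predicted input
x = rng.normal(size=16)       # bottom-up input to be explained
code = np.zeros(4)            # top-down representation, adjusted to fit the input

for _ in range(200):
    error = W @ code - x        # prediction error, the "teaching signal"
    code -= 0.01 * W.T @ error  # gradient step on 0.5 * ||W @ code - x||^2

# The residual shrinks to whatever a 4-dim code can't capture of a
# 16-dim input -- i.e., the compression loss.
print(np.mean((W @ code - x) ** 2))
```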
Maybe a big part of the data compression is: we only attend to one object at a time, and everything else is lumped together into “background”. Like, you might think you’re paying close attention to both your hand and your pen, but actually you’re flipping back and forth, or else lumping the two together into a composite object! (I’m speculating.) Then the product space of every possible object in every possible arrangement in your field of view is broken into a dramatically smaller disjunctive space of possibilities, consisting of any one possible object in any one possible position. Now that you’ve thrown out 99.999999% of the information by only attending to one object at a time, there’s plenty of room for the GNW to have lots of detail about the object’s position, color, texture, motion, etc.
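The arithmetic behind a number like that 99.999999% is easy to gesture at (with made-up numbers for how many object types and positions there are):

```python
from math import log2

n_objects, n_positions = 1000, 10_000  # made-up counts of object types and positions

# Product space: every position independently contains any object (or nothing).
bits_product = n_positions * log2(n_objects + 1)

# Disjunctive space: exactly one attended object, at exactly one position.
bits_disjunctive = log2(n_objects * n_positions)

print(f"{bits_product:,.0f} bits vs {bits_disjunctive:.1f} bits")
# ~99,658 bits vs ~23.3 bits: almost everything is thrown away
```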
Not sure how helpful any of this is :-P
For the hard problem of consciousness, the steps in my mind are:
1. GNW -->
2. Solution to the meta-problem of consciousness -->
3. Feeling forced to accept illusionism -->
4. Enthusiastically believing in illusionism.
I wrote the post Book Review: Rethinking Consciousness about my journey from step 1 --> step 2 --> step 3. And that’s where I’m still at. I haven’t gotten to step 4, I would need to think about it more. :-P