I think I’d vote for: “Network 2 for this particular example with those particular labels, but with the subtext that the central node is NOT a fundamentally different kind of thing from the other five nodes; and also, if you zoom way out to include everything in the whole giant world-model, you also find lots of things that look more like Network 1. As an example of the latter: in the world of cars, their colors, dents, and makes have nonzero probabilistic relations that people can get a sense for (“huh, a beat-up hot-pink Mercedes, don’t normally see that...”) but those relations don’t fit into any categorization scheme.”
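To make that Network 1 case concrete, here’s a minimal sketch (all co-occurrence counts invented purely for illustration) of inference over raw pairwise feature relations, with no category node anywhere:

```python
# Toy observations of (color, condition, make). The data are made up;
# the point is that there is no "car type" variable here, just features
# with pairwise statistical relationships.
observations = [
    ("hot-pink", "beat-up",  "Mercedes"),
    ("silver",   "pristine", "Mercedes"),
    ("silver",   "pristine", "Mercedes"),
    ("hot-pink", "beat-up",  "Civic"),
    ("silver",   "beat-up",  "Civic"),
]

def p_query_given_evidence(ev_idx, ev_val, q_idx, q_val):
    """Estimate P(query | evidence) straight from co-occurrence counts,
    without routing through any intermediate category node."""
    matching = [o for o in observations if o[ev_idx] == ev_val]
    if not matching:
        return 0.0
    return sum(o[q_idx] == q_val for o in matching) / len(matching)

# "Huh, a hot-pink car" -- how unsurprising is it that it's beat-up?
print(p_query_given_evidence(0, "hot-pink", 1, "beat-up"))  # 1.0 on this toy data
```

Every feature can condition on every other feature directly; that’s the sense in which Network 1 edges exist without any categorization scheme sitting on top.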
Hm, now I’m not sure if I’ve gotten things wrong :)
So a few things I think might clarify what I’m thinking, and I guess loosely argue for it:
There are various specialized areas of the brain where killing off some neurons will cause loss of capabilities (e.g. the fusiform face area for recognizing faces). But my impression was that there isn’t a region where “the blegg neurons” (or the tiger neurons, or the chocolate chip cookie neurons) are, such that if they get killed you (selectively) lose the ability to associate the features of a blegg with other features of a blegg.
Top-down and lateral connections are more common than many used to think. Network 2 can still have plenty of top-down feedback; it just has to originate from a localized Blegg HQ[1]. Lateral connections are a harder problem for Network 2: I found Numenta’s YouTube channel a few weeks ago and half-understood a talk about lateral connections, but somewhere along the line I got sold on the idea that lateral connections, while sparse, are dense enough to allow information to percolate every which way.
Although, given sparsity, a specific patch at a specific time might have strictly hierarchical information flow with some high (?) probability.
I suspect you’re thinking about object recognition in the prefrontal cortex (maybe even activation of a specific column). Which… is a good point. I guess my two questions are something like: First, how much distributed processing bypasses the prefrontal cortex? E.g. suppose I cut off someone’s frontal lobe[2] and then put an egg in their hand—they’re more likely to say “egg” or do egg-related things, surely—how does that fit into a coarse-grained graph like in this post? And second, how distributed is object recognition in the PFC? If we zoom in on object recognition, does the information actually converge hierarchically to a single point, or does it get used in a lot of ways in parallel, with the results then sent back out?
I guess in that latter case, drawing Network 2 can still be appropriate if, from “far away in the brain,” it’s hard to see the internal structure of object recognition.
Although that assumes the other nodes are far away—e.g. identifying the “furred” node with a representation in the somatosensory cortex, rather than as a more abstract concept of furriness.
Unless Blegg HQ isn’t localized, in which case one would be interpreting the diagram more figuratively—maybe even as a transition diagram over which thoughts predominate?
Okay, I just googled this and got the absolutely flooring quote “Removal of approximately the anterior half of the right frontal lobe in a third case was not associated with any noticeable alteration, neurological or psychological.”
I think we’re mostly talking past each other, or emphasizing different things, or something. Oh actually, I think you’re saying “the edges of Network 1 exist”, and I’m saying “the edges & central node of Network 2 can exist”? If so, that’s not a disagreement—both can and do exist. :)
Maybe we should switch away from bleggs/rubes to a real example: Coke cans / Pepsi cans. There is a central node—I can have a (gestalt) belief that this is a Coke can and that is a Pepsi can. And the central node is in fact important in practice. For example, if you see some sliver of the label of an unknown can, and then you’re trying to guess what it looks like in another, distant part of the can (where the image is obstructed by your hand), then I claim the main pathway used by that query is probably (part of image) → “this is a Coke can” (with such-and-such angle, lighting, etc.) → (guess about a distant part of image). I think that’s spiritually closer to a Network 2 type inference.
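To spell out that pathway, here’s a minimal naive-Bayes-style sketch (every probability below is invented for illustration): the sliver of label updates the central Coke-vs-Pepsi node, and the guess about the occluded patch routes through that node rather than through any direct patch-to-patch edge.

```python
# Central-node ("Network 2") inference, naive-Bayes style.
# All numbers are made up for illustration.
priors = {"coke": 0.5, "pepsi": 0.5}

# P(visible sliver looks red | can identity)
p_red_sliver = {"coke": 0.9, "pepsi": 0.1}

# P(occluded patch shows a white wave stripe | can identity)
p_wave_stripe = {"coke": 0.8, "pepsi": 0.05}

def posterior(likelihood, prior):
    """Bayes update of the central node from one piece of evidence."""
    unnorm = {h: likelihood[h] * prior[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Step 1: sliver of red label -> belief about the central node.
belief = posterior(p_red_sliver, priors)           # {'coke': 0.9, 'pepsi': 0.1}

# Step 2: the guess about the distant patch goes *through* that belief;
# the two image patches never talk to each other directly.
p_stripe = sum(belief[h] * p_wave_stripe[h] for h in belief)
print(p_stripe)                                    # 0.725
```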
Granted, there are other cases where we can make inferences without needing to resolve that central node. The Network 1 edges exist too! Maybe that’s all you’re saying, in which case I agree. There are also situations where there is no central node, like my example of car dents / colors / makes.
Separately, I think your neuroanatomy is off—visual object recognition is conventionally associated with the occipital and temporal lobes (cf. “ventral stream”), and has IMO almost nothing to do with the prefrontal cortex. As for a “region where ‘the blegg neurons’…are, such that if they get killed you (selectively) lose the ability to associate the features of a blegg with other features of a blegg”: if you’re just talking about visual features, then I think the term is “agnosia”, and if it’s more general types of “features”, I think the term is “semantic dementia”. They’re both associated mainly with temporal lobe damage, if I recall correctly, although not the same parts of the temporal lobe.
> Separately, I think your neuroanatomy is off—visual object recognition is conventionally associated with the occipital and temporal lobes (cf. “ventral stream”)
Well, object recognition is happening all over :P My neuroanatomy is certainly off, but I was thinking more about integrating multiple senses (parietal lobe getting added to the bingo card) with abstract/linguistic knowledge.
> Maybe we should switch away from bleggs/rubes to a real example: Coke cans / Pepsi cans. There is a central node—I can have a (gestalt) belief that this is a Coke can and that is a Pepsi can. And the central node is in fact important in practice. For example, if you see some sliver of the label of an unknown can, and then you’re trying to guess what it looks like in another, distant part of the can (where the image is obstructed by your hand), then I claim the main pathway used by that query is probably (part of image) → “this is a Coke can” (with such-and-such angle, lighting, etc.) → (guess about a distant part of image). I think that’s spiritually closer to a Network 2 type inference.
Yeah, filling in one part of the Coke can image based on distant parts definitely seems like something we should abstract as Network 2. I think part of why this is such a good example is that the leaf nodes are concrete pieces of sensory information that we wouldn’t expect to be able to interact without lots of processing.
If we imagine the leaf nodes as more processed/abstract features that are already “closer together,” I think the Network 1 case gets stronger.
Gonna go read about semantic dementia.