I think we’re mostly talking past each other, or emphasizing different things, or something. Oh actually, I think you’re saying “the edges of Network 1 exist”, and I’m saying “the edges & central node of Network 2 can exist”? If so, that’s not a disagreement—both can and do exist. :)
Maybe we should switch away from bleggs/rubes to a real example of Coke cans / Pepsi cans. There is a central node—I can have a (gestalt) belief that this is a Coke can and that is a Pepsi can. And the central node is in fact important in practice. For example, if you see some sliver of the label of an unknown can, and then you’re trying to guess what it looks like in another distant part of the can (where the image is obstructed by my hand), then I claim the main pathway used by that query is probably (part of image) → “this is a Coke can” (with such-and-such angle, lighting, etc.) → (guess about a distant part of image). I think that’s spiritually closer to a Network 2-type inference.
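That central-node pathway is essentially naive-Bayes-style inference: condition on the observed sliver, update a latent category node, then predict the hidden feature by summing over categories. A minimal sketch, with made-up feature names and probabilities purely for illustration:

```python
# "Network 2"-style inference: a central latent category node (Coke vs.
# Pepsi) mediates between observed and unobserved features. All numbers
# here are invented for illustration, not real statistics.

# P(category): prior over the central node.
prior = {"coke": 0.5, "pepsi": 0.5}

# P(feature value | category), with features conditionally independent
# given the category (the naive-Bayes / central-node structure).
likelihood = {
    "sliver_color": {"coke": {"red": 0.9, "blue": 0.1},
                     "pepsi": {"red": 0.2, "blue": 0.8}},
    "distant_patch": {"coke": {"red": 0.85, "blue": 0.15},
                      "pepsi": {"red": 0.25, "blue": 0.75}},
}

def posterior(feature, value):
    """P(category | feature=value) by Bayes' rule."""
    unnorm = {c: prior[c] * likelihood[feature][c][value] for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

def predict(target, feature, value):
    """P(target | feature=value): route the query through the central
    node, summing over possible categories."""
    post = posterior(feature, value)
    vals = likelihood[target]["coke"].keys()
    return {v: sum(post[c] * likelihood[target][c][v] for c in post)
            for v in vals}

# Seeing a red sliver of label raises the probability that the hidden
# patch is also red, via the "this is probably a Coke can" node.
print(predict("distant_patch", "sliver_color", "red"))
```

The point of the structure: the observed sliver and the hidden patch never talk to each other directly; the only pathway between them runs through the category node.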
Granted, there are other cases where we can make inferences without needing to resolve that central node. The Network 1 edges exist too! Maybe that’s all you’re saying, in which case I agree. There are also situations where there is no central node, like my example of car dents / colors / makes.
Separately, I think your neuroanatomy is off—visual object recognition is conventionally associated with the occipital and temporal lobes (cf. “ventral stream”), and has IMO almost nothing to do with the prefrontal cortex. As for a “region where ‘the blegg neurons’…are, such that if they get killed you (selectively) lose the ability to associate the features of a blegg with other features of a blegg”: if you’re just talking about visual features, then I think the term is “agnosia”, and if it’s more general types of “features”, I think the term is “semantic dementia”. They’re both associated mainly with temporal lobe damage, if I recall correctly, although not the same parts of the temporal lobe.
> Separately, I think your neuroanatomy is off—visual object recognition is conventionally associated with the occipital and temporal lobes (cf. “ventral stream”)
Well, object recognition is happening all over :P My neuroanatomy is certainly off, but I was more thinking about integrating multiple senses (parietal lobe getting added to the bingo card) with abstract/linguistic knowledge.
> Maybe we should switch away from bleggs/rubes to a real example of Coke cans / Pepsi cans. There is a central node—I can have a (gestalt) belief that this is a Coke can and that is a Pepsi can. And the central node is in fact important in practice. For example, if you see some sliver of the label of an unknown can, and then you’re trying to guess what it looks like in another distant part of the can (where the image is obstructed by my hand), then I claim the main pathway used by that query is probably (part of image) → “this is a Coke can” (with such-and-such angle, lighting, etc.) → (guess about a distant part of image). I think that’s spiritually closer to a Network 2-type inference.
Yeah, filling in one part of the Coke can image based on distant parts definitely seems like something we should abstract as Network 2. I think part of why this is such a good example is that the leaf nodes are concrete pieces of sensory information that we wouldn’t expect to be able to interact without lots of processing.
If we imagine the leaf nodes as more processed/abstract features that are already “closer together,” I think the Network 1 case gets stronger.
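For contrast, the Network 1 version of the same kind of query, under the same made-up setup, would be a single learned pairwise association between two already-abstract features, with no category node to resolve along the way. The feature names and probabilities below are invented for illustration:

```python
# "Network 1"-style inference: once features are abstract enough to sit
# "close together," one direct edge between them can answer the query,
# with no central "which brand is this?" node in the loop. Numbers are
# made up for illustration.

# P(script_style | logo_color): a single direct feature-to-feature edge.
edge = {
    "red_logo": {"cursive_script": 0.9, "block_script": 0.1},
    "blue_logo": {"cursive_script": 0.2, "block_script": 0.8},
}

def infer_direct(logo_color):
    """One table lookup: the observed feature predicts the hidden one
    directly, without passing through a latent category."""
    return edge[logo_color]

print(infer_direct("red_logo"))
```

The design difference is just where the statistics live: Network 1 stores a table per feature pair, Network 2 stores one table per feature conditioned on the central node, which is why resolving that node can be the cheaper pathway when many features are involved.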
Gonna go read about semantic dementia.