Imagining two apples is a different thought from imagining one apple, right?
I mean, is it? Different states of the whole cortex are different. And the cortex can’t be in a state of imagining only one apple and, simultaneously, be in a state of imagining two apples, obviously. But it’s tautological. What are we gaining from thinking about it in such terms? You can say the same thing about the whole brain itself, that it can only have one brain-state in a moment.
I guess there is a sense in which other parts of the brain hold more varied thoughts than what the cortex can handle, but, like you said, you can use half of the cortex's capacity, so why not define the song and the legal document as two different thoughts?
As abstract elements of a provisional framework, cortex-level thoughts are fine; I just wonder what you are claiming about real constraints, aside from “there are limits on thoughts”. Because, for example, you need other limits anyway: you can’t think an arbitrarily complex thought even if it is intuitively cohesive. But yeah, enough gory details.
On the other hand, I can’t have two songs playing in my head simultaneously, nor can I be thinking about two unrelated legal documents simultaneously.
I can’t either, but I don’t see just from the architecture why it would be impossible in principle.
Again, I think autoassociative memory / attractor dynamics is a helpful analogy here. If I have a physical instantiation of a Hopfield network, I can’t query 100 of its stored patterns in parallel, right? I have to do it serially.
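Here’s a minimal toy sketch of what I mean (plain numpy, made-up sizes; not meant as a brain model, just the one-attractor-at-a-time retrieval dynamics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns in a classic binary Hopfield network.
n_units, n_patterns = 200, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian weights: W = (1/N) * sum_p x_p x_p^T, with zero diagonal.
W = patterns.T @ patterns / n_units
np.fill_diagonal(W, 0)

def recall(cue, sweeps=20):
    """Asynchronous updates until the state settles into an attractor."""
    state = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n_units):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

def overlap(state, pattern):
    return float(state @ pattern) / n_units  # 1.0 = perfect match

# Query with a noisy version of pattern 0: it is retrieved cleanly.
noisy = patterns[0] * np.where(rng.random(n_units) < 0.15, -1, 1)
print([round(overlap(recall(noisy), p), 2) for p in patterns])

# Try to query two patterns "in parallel" by superposing their cues:
# the dynamics settle into one attractor (or occasionally a spurious
# mixture), rather than cleanly retrieving both patterns at once.
mixed = np.sign(patterns[0] + patterns[1] + rng.choice([-1, 1], n_units) * 0.1)
print([round(overlap(recall(mixed), p), 2) for p in patterns])
```

So to pull out many stored patterns, you have to run queries one after another.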
Yes, but you can theoretically encode many things in each pattern? Although if your parallel processes need different data, one of them will have to skip some responses… It would be better to have different networks, but I don’t see the brain providing much isolation. Well, it seems to illustrate complications of parallel processing that may have played a role in humans usually staying serial.
You say “tautological”, I say “obvious”. You can’t parse a legal document and try to remember your friend’s name at the exact same moment. That’s all I’m saying! This is supposed to be very obvious common sense, not profound.
What are we gaining from thinking about it in such terms?
Consider the following fact:
FACT: Sometimes, I’m thinking about pencils. Other times, I’m not thinking about pencils.
Now imagine that there’s a predictive (a.k.a. self-supervised) learning algorithm which is tasked with predicting upcoming sensory inputs, by building generative models. The above fact is very important! If the predictive learning algorithm does not somehow incorporate that fact into its generative models, then those generative models will be worse at making predictions. For example, if I’m thinking about pencils, then I’m likelier to talk about pencils, and look at pencils, and grab a pencil, etc., compared to if I’m not thinking about pencils. So the predictive learning algorithm is incentivized (by its predictive loss function) to build a generative model that can represent the fact that any given concept might be active in the cortex at a certain time, or might not be.
Again, this is all supposed to sound very obvious, not profound.
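If it helps, here’s a toy numerical sketch of that incentive (all numbers made up): a predictor that is allowed to condition on whether the “pencils” concept is currently active assigns much higher likelihood to the same sensory stream than one that has to use a single fixed prediction for every moment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy world: at each moment I'm either "thinking about pencils" (z=1) or not (z=0).
# Pencil-related observations (talking about / looking at / grabbing pencils)
# are much more likely when z=1. All numbers are made up for illustration.
T = 10_000
z = np.zeros(T, dtype=int)
for t in range(1, T):                       # sticky latent state
    z[t] = z[t - 1] if rng.random() < 0.95 else 1 - z[t - 1]
p_obs = np.where(z == 1, 0.6, 0.05)         # P(pencil-related observation | z)
x = (rng.random(T) < p_obs).astype(int)     # the sensory stream to be predicted

def bernoulli_ll(x, p):
    """Log-likelihood of binary observations x under a single probability p."""
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Model A ignores the latent state: one fixed probability for every moment.
ll_ignore = bernoulli_ll(x, x.mean())

# Model B conditions on whether the concept is currently active
# (here we cheat and give it the true z; a real learner would infer it).
ll_concept = (bernoulli_ll(x[z == 1], x[z == 1].mean())
              + bernoulli_ll(x[z == 0], x[z == 0].mean()))

print(f"log-likelihood without a 'thinking about pencils' variable: {ll_ignore:.0f}")
print(f"log-likelihood with one:                                    {ll_concept:.0f}")
```

Higher log-likelihood means lower predictive loss, so a learner trained on that loss is pushed toward representing “this concept is active right now / it isn’t”.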
You can say the same thing about the whole brain itself, that it can only have one brain-state in a moment.
Yes, it’s also useful for the predictive learning algorithm to build generative models that capture other aspects of the brain state, outside the cortex. Thus we wind up with intuitive concepts that represent the possibility that we can be in one mood or another, that we can be experiencing a certain physiological reaction, etc.