I don’t think I agree with the “but only one thought can be there at a time” part.
I’m probably just defining “thought” more broadly than you. The cortex has many areas. The auditory parts can be doing some auditory thing, and simultaneously the motor parts can be doing some motor thing, and all that together constitutes (what I call) a “thought”.
I don’t think anything in the series really hinges on the details here. “Conscious awareness” is not conceptualized as some atomic concept where there’s nothing else to say about it. If you ask people to describe their conscious awareness, they can go on and on for hours. My claim is that, when they go on and on for hours describing “conscious awareness”, all the rich details that they’re describing can be mapped onto accurate claims about properties of the cortex and its activation states. (E.g. the next part of this comment.)
I think each of the workspaces has its own short memory which spans maybe like 2 seconds.
I agree that, if some part of the cortex is in activation state A at time T, those particular neurons (the ones that constitute A) will get less and less active over the course of a second or two rather than disappearing instantly, such that at time T+1 second it’s still possible for other parts of the cortex to reactivate activation state A via an appropriate query. I don’t think all traces of A immediately disappear entirely every time a new activation state appears.
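To make that fading-trace picture concrete, here’s a minimal toy sketch in Python. To be clear, this is my illustration, not a model of actual cortical dynamics: the ~1-second decay constant, the dot-product “query”, and the reactivation threshold are all made-up assumptions.

```python
import numpy as np

TAU = 1.0   # assumed decay time constant of ~1 second (toy value)
DT = 0.1    # simulation time step in seconds

# Toy "activation state A": a random pattern over some cortical units.
rng = np.random.default_rng(0)
pattern_A = rng.standard_normal(64)
pattern_A /= np.linalg.norm(pattern_A)

activity = pattern_A.copy()  # state A is fully active at time T

def query(trace, probe, threshold=0.2):
    """Reactivate the stored pattern if the fading trace still matches the probe."""
    match = float(trace @ probe)  # similarity of residual trace to the probe
    return probe.copy() if match > threshold else trace

# Let A fade for one second: each step scales activity by exp(-DT / TAU).
for _ in range(int(1.0 / DT)):
    activity *= np.exp(-DT / TAU)

print(f"residual overlap with A after 1 s: {activity @ pattern_A:.2f}")  # ~0.37

# At T + 1 s the trace has faded but not vanished, so an appropriate
# query from elsewhere can restore state A to full strength.
activity = query(activity, pattern_A)
print(f"overlap after query: {activity @ pattern_A:.2f}")  # 1.00
```

The point of the toy is just that gradual decay leaves a usable residue: the trace at T+1 second is weak but still correlated enough with A for a query to pick it back up.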
Again, I think the capability of the cortex to have more than one incompatible generative model active simultaneously is extremely limited, and that for most purposes we should think of it as only having a single (MAP estimate) generative model active. But the capability to track multiple incompatible models simultaneously does exist, to some extent. I think this slow-fading thing is one application of that capability. Another is the time-extended probabilistic inference thing that I talked about in §2.3.
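As a toy illustration of the MAP-vs-multiple-models distinction (again my sketch, with made-up numbers, not anything from the series): a Bayesian tracker keeps a posterior over several incompatible models, but for most purposes downstream processing only uses the argmax.

```python
import numpy as np

# Three incompatible candidate generative models of a noisy scalar observation:
# each predicts a different mean. (Purely illustrative.)
model_means = np.array([0.0, 1.0, 2.0])
log_posterior = np.log(np.ones(3) / 3)  # uniform prior over models

def update(log_post, obs, sigma=0.5):
    """One step of Bayesian tracking: add each model's log-likelihood, renormalize."""
    log_lik = -0.5 * ((obs - model_means) / sigma) ** 2
    log_post = log_post + log_lik
    return log_post - np.logaddexp.reduce(log_post)

observations = [0.9, 1.1, 1.0, 0.8]  # data that favors the middle model
for obs in observations:
    log_posterior = update(log_posterior, obs)

posterior = np.exp(log_posterior)
map_model = int(np.argmax(posterior))

# "Single MAP model active" view: only the winner is used downstream.
print(f"MAP model: mean={model_means[map_model]}")
# "Limited multi-model tracking" view: runners-up keep some probability mass.
print("full posterior:", np.round(posterior, 3))
```

On this picture, “extremely limited” capacity corresponds to the runner-up models carrying only a small, short-lived share of the posterior, even though they haven’t been discarded outright.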
OK yep, sounds like we basically agree, I think.