Your GNW has an active generative model built out of lots of component models. I would say that the “tennis-match-flow” case entails little sub-sub-components asynchronously updating themselves as new information comes in—the tennis ball was over there, and now it’s over here. By contrast the more typically “choppy” way of thinking involves frequently throwing out the whole manifold of generative models all at once, and activating a wholly new set of interlocking generative models. The latter (unlike the former) involves an attentional blink, because it takes some time for all the new neural codes to become active and synchronized, and in between you’re in an incoherent, unstable state with mutually-contradictory generative models fighting it out.
Ahhhh this seems like an idea I was missing. I was thinking of the generative models as all being in a ready-and-waiting state, only ever swapping in and out of broadcasting on the GNW. But a model might take time to become active and/or do its work. I’ve been very fuzzy on how generative models are arranged and organized. You pointing this out makes me think that attentional blink (or “frame rate” stuff in general) is probably rarely limited by the actual “time it takes a signal to be propagated on the GNW” and much more related to the “loading” and “activation” of the models that are doing the work.
I do think signal propagation time is probably a big contributor. I think activating a generative model in the GNW entails activating a particular set of interconnected neurons scattered around the GNW parts of the neocortex, which in turn requires those neurons to talk with each other. You can think of it like a probabilistic graphical model: you change the value of some node, then run the message-passing algorithm for a bit, and the network settles into a new configuration. Something like that, I think...
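To make the “settling” intuition concrete, here is a toy sketch of that dynamic, not a model of neurons or of GNW theory. It is a tiny network of continuous-valued nodes with illustrative couplings, updated with a simple mean-field-style rule (each node moves toward a squashed weighted sum of its neighbors). All node names, weights, and the update rule are assumptions chosen for illustration. The point is just that after one node is clamped to a new value, the rest of the network takes multiple iterations of message-passing before it stabilizes in a new configuration:

```python
import math

# Illustrative symmetric couplings between four made-up nodes
# (positive weight = the two nodes "want" to agree).
edges = {
    ("a", "b"): 1.5,
    ("b", "c"): 1.5,
    ("c", "d"): 1.5,
}

def neighbors(node):
    """Yield (neighbor, weight) pairs for a node."""
    for (u, v), w in edges.items():
        if u == node:
            yield v, w
        elif v == node:
            yield u, w

def settle(state, clamped, max_steps=50, tol=1e-3):
    """Synchronously update all nodes until no node moves more than tol.

    Returns (number of steps taken, final state). Each unclamped node
    is set to tanh of the weighted sum of its neighbors' current values.
    """
    for step in range(max_steps):
        new_state = {}
        for node in state:
            if node in clamped:
                new_state[node] = clamped[node]
            else:
                field = sum(w * state[other] for other, w in neighbors(node))
                new_state[node] = math.tanh(field)
        if all(abs(new_state[n] - state[n]) < tol for n in state):
            return step, new_state
        state = new_state
    return max_steps, state

# Start with the network settled around +1, then clamp node "a" to -1
# (the "you change the value of some node" step).
start = {"a": 1.0, "b": 0.9, "c": 0.9, "d": 0.9}
steps, final = settle(start, clamped={"a": -1.0})
print(steps, {k: round(v, 2) for k, v in final.items()})
```

Running this, the clamped change at one node drags the whole chain into an all-negative configuration, but only after a dozen or so update rounds; during the intermediate steps the nodes disagree with each other, which is at least suggestive of the incoherent transition period described above.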