Given the serial, discrete nature of the GNW, it follows that consciousness is fundamentally a discrete and choppy thing, not a smooth continuous stream.
Here’s my take. Think of the neocortex as having a zoo of generative models with methods for building them and sorting through them. The models are compositional—compatible models can snap together like legos. Thus I can imagine a rubber wine glass, because the rubber generative models bottom out in a bunch of predictions of boolean variables, the wine glass generative models bottom out in a bunch of predictions of different boolean variables (and/or consistent predictions of the same boolean variables), and therefore I can union the predictions of the two sets of models.
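To make the lego-snapping concrete, here is a minimal toy sketch in Python (entirely my own illustration, with made-up feature names; nothing here is from the GNW literature): each model is a set of predictions over boolean variables, and composing two models is a consistency-checked union of their predictions.

```python
# Toy sketch: a generative model as a dict from low-level boolean feature
# variables to predicted values. All feature names are invented for illustration.

def compose(*models):
    """Snap compatible models together: union their boolean predictions,
    failing if two models predict contradictory values for the same variable."""
    combined = {}
    for model in models:
        for feature, prediction in model.items():
            if feature in combined and combined[feature] != prediction:
                raise ValueError(f"incompatible models: conflict on {feature!r}")
            combined[feature] = prediction
    return combined

# "Rubber" bottoms out in predictions about material properties...
rubber = {"deforms_when_squeezed": True, "bounces_when_dropped": True}
# ...while "wine glass" bottoms out in predictions about shape and function,
# i.e. mostly different boolean variables.
wine_glass = {"has_stem": True, "has_tapered_bowl": True, "holds_liquid": True}

print(compose(rubber, wine_glass))  # a rubber wine glass: the union of both
```

The consistency check is what “compatible” is doing in the lego metaphor: two models that predict contradictory values for the same variable simply can’t snap together.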
Your GNW has an active generative model built out of lots of component models. I would say that the “tennis-match-flow” case entails little sub-sub-components asynchronously updating themselves as new information comes in—the tennis ball was over there, and now it’s over here. By contrast, the more typical “choppy” way of thinking involves frequently throwing out the whole manifold of generative models all at once, and activating a wholly new set of interlocking generative models. The latter (unlike the former) involves an attentional blink, because it takes some time for all the new neural codes to become active and synchronized, and in between you’re in an incoherent, unstable state with mutually-contradictory generative models fighting it out.
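As a cartoon of that contrast (my illustration only; the class, names, and the 300 ms figure are placeholder assumptions, the latter loosely inspired by the few-hundred-millisecond attentional-blink window), the flow case is a cheap in-place patch to one sub-component, while the choppy case tears down the whole assembly and pays a settling cost during which the workspace is incoherent:

```python
SETTLE_TIME_MS = 300  # assumed placeholder for the attentional-blink window

class Workspace:
    """Cartoon of the GNW's currently-active assembly of component models."""

    def __init__(self, components):
        self.components = dict(components)  # name -> active sub-model state
        self.coherent = True

    def flow_update(self, name, new_state):
        """Tennis-match flow: one sub-sub-component updates asynchronously."""
        self.components[name] = new_state   # ball was over there, now over here
        return 0                            # no blink: everything else stays put

    def choppy_update(self, new_components):
        """Choppy thinking: throw out the whole manifold of models at once."""
        self.coherent = False               # contradictory models fight it out
        self.components = dict(new_components)
        self.coherent = True                # new codes now active and synchronized
        return SETTLE_TIME_MS               # time spent in the blink

ws = Workspace({"scene": "tennis court", "ball": "over there"})
print(ws.flow_update("ball", "over here"))                            # -> 0
print(ws.choppy_update({"scene": "kitchen", "kettle": "whistling"}))  # -> 300
```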
Perhaps the attentional blink literature is a bit complicated because, with practice or intention, you can build a single GNW generative model that predicts both of two sequential inputs.
Ahhhh this seems like an idea I was missing. I was thinking of the generative models as all being in a ready and waiting state, only ever swapping in and out of broadcasting on the GNW. But a model might take time to become active and/or do its work. I’ve been very fuzzy on how generative models are arranged and organized. Your pointing this out makes me think that attentional blink (or “frame rate” stuff in general) is probably rarely limited by the actual “time it takes a signal to be propagated on the GNW” and much more related to the “loading” and “activation” of the models that are doing the work.
I do think signal propagation time is probably a big contributor. I think activating a generative model in the GNW entails activating a particular set of interconnected neurons scattered around the GNW parts of the neocortex, which in turn requires those neurons to talk with each other. You can think of a probabilistic graphical model … you change the value of some node and then run the message-passing algorithm a bit, and the network settles into a new configuration. Something like that, I think...
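Here is a little runnable cartoon of that settling process. I’m using Hopfield-style asynchronous updates as a stand-in for “run the message-passing algorithm a bit” (an assumption for illustration, not a claim about actual cortical dynamics): clamp one node to a new value, then count how many update sweeps it takes the rest of the network to stop changing.

```python
import numpy as np

# Hopfield-style network standing in for message passing on a graphical model:
# symmetric random couplings, +/-1 node states, asynchronous updates (which
# are guaranteed to settle when weights are symmetric with zero diagonal).
rng = np.random.default_rng(0)
n = 30
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def settle(state, clamp_idx, clamp_val, max_sweeps=100):
    """Clamp one node to a new value, then update the others one at a time
    until nothing changes; return the configuration and the sweeps it took."""
    state = state.copy()
    state[clamp_idx] = clamp_val
    for sweep in range(1, max_sweeps + 1):
        changed = False
        for i in range(n):
            if i == clamp_idx:
                continue
            new_val = 1.0 if W[i] @ state >= 0 else -1.0
            if new_val != state[i]:
                state[i] = new_val
                changed = True
        if not changed:                  # fixed point: a coherent configuration
            return state, sweep
    return state, max_sweeps

state = np.sign(rng.normal(size=n))
state[state == 0] = 1.0                     # avoid zeros in the initial state
state, _ = settle(state, 0, state[0])       # settle into some fixed point
_, sweeps = settle(state, 0, -state[0])     # flip the "evidence" node...
print(f"re-settled after {sweeps} sweeps")  # ...and count the settling time
```

The number of sweeps before the network goes quiet is the analogue of the propagation/settling time: the perturbation has to ripple through the couplings before the scattered neurons agree on a new coherent configuration.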