My current model of consciousness is that it’s the process of encoding cognitive programs (for action) or belief maps (for perception). These programs/maps can then be stored in long-term memory to be called upon later, or they can be transcoded onto the language centers of the brain so that they can be replicated in the minds of others via language.
Both of these functions would carry a high selective advantage on their own. Those who can better replicate a complex sequence of actions that proved successful in the past (by loading a cognitive program from memory or from language input), and those who can model the world in a way that has proven useful (by loading a belief map from memory or from language input), can adapt to environmental changes more quickly than those who rely on reinforcement learning alone. RL, like evolution, is basically a brute-force approach to learning, whereas the encodings created by conscious attention would let the brain load and run executable programs, more like a computer does. Of course, this process is imperfect in humans, since most of our evolutionary history involved brains that depended more on unsupervised learning of world models and reinforcement learning of behavioral policies. Even the hippocampus probably acts more like a replay buffer for training a reinforcement learning algorithm in most species than like a generalized memory system.
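To make the replay-buffer analogy concrete, here’s a minimal sketch of experience replay as it’s commonly used in RL (the class and method names here are my own illustrative choices, not anything claimed about the hippocampus): the buffer is just a pool of past transitions that gets randomly resampled to keep training a policy, with no way to recall a specific episode on demand.

```python
import random
from collections import deque

class ReplayBuffer:
    """A fixed-capacity pool of past (state, action, reward, next_state) transitions.

    Unlike a generalized memory system, it can't be queried for a particular
    episode; its only job is to feed random past experience back into a
    learner, which is roughly the role attributed to the hippocampus above.
    """

    def __init__(self, capacity=10_000):
        # Oldest transitions silently fall off once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random resampling, not targeted recall: the learner replays
        # whatever comes up rather than "remembering" a chosen moment.
        return random.sample(self.buffer, batch_size)

# Tiny demo: store a few dummy transitions, then draw a random minibatch.
buffer = ReplayBuffer(capacity=100)
for step in range(10):
    buffer.push(state=step, action=0, reward=1.0, next_state=step + 1)
minibatch = buffer.sample(4)  # four transitions in arbitrary order
```

The contrast with the model above: loading a stored cognitive program would be more like calling a named function on demand, whereas the buffer only ever offers up undirected replay.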
Note that this doesn’t imply that an agent is necessarily conscious whenever it uses language or memory (or even a model of the self). I think consciousness probably involves pulling together a bunch of different mechanisms (attention, self-modeling, world-modeling, etc.) to create the belief maps and cognitive programs that can be reloaded or transmitted later. It’s the encoding process itself, not necessarily the reloading or the communication. Of course, one could be conscious of those other processes, but it’s not strictly necessary. People who enter a “flow” state seem to be relying on purely unconscious cognitive processes (more like what non-human animals rely on all the time), since conscious encoding/reloading is very expensive.
I’m no expert on any of this, though, so please feel free to poke holes in this model. I just think that consciousness and qualia aren’t things anyone should bother trying to program directly. It’s more likely, in my opinion, that they will emerge naturally from designing AI with more sophisticated cognitive abilities, just as they did in human evolution.