Just as if it were looking into the universe from outside it, it would presumably be able to understand anything in the world as a (third-person) fact about the world, including that humans have self-awareness, that there is a project to build a self-unaware AI, and so on. We would program it with strict separation between the world-model and the reflective, meta-level information about how the world-model is being constructed and processed. Thus the thought “Maybe they’re talking about me” cannot occur: there is nothing in the world-model to grab onto as a referent for the word “me”. Exactly how this strict separation would be programmed, and whether you can make a strong, practical world-modeling system with such a separation, are things I’m still trying to understand.
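To make the intended separation a bit more concrete, here is a minimal toy sketch, with every name and structure invented purely for illustration (this is not a proposal for how the real thing would be built): the world-model object stores only third-person facts, and all the meta-level bookkeeping about how the model is being constructed lives in a separate controller that the model has no way to refer back to.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    # Third-person facts only, e.g. ("humans", "have", "self-awareness").
    # By construction there is no field flagged "me", "my knowledge", etc.
    facts: set = field(default_factory=set)

    def add_fact(self, subject, relation, obj):
        self.facts.add((subject, relation, obj))

@dataclass
class Controller:
    # Meta-level information about how the world-model is being constructed
    # and processed lives here, outside the model itself.
    model: WorldModel
    update_count: int = 0

    def ingest(self, subject, relation, obj):
        # The controller writes into the model, but the model holds no
        # pointer back to the controller or to the learning process.
        self.model.add_fact(subject, relation, obj)
        self.update_count += 1

controller = Controller(model=WorldModel())
controller.ingest("humans", "have", "self-awareness")
controller.ingest("some lab", "is building", "a self-unaware AI")
# The model can represent the AI project as a third-person fact, but it has
# no symbol for "me" that such a fact could attach to.
```

Of course, this only illustrates the structural idea; whether that separation can be preserved in a system powerful enough to be useful is exactly the open question.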
A possible (not realistic) example: We enumerate a vast collection of possible world-models, constructed by varying any of a vast number of adjustable parameters describing what exists in the world, how things relate to each other, what’s going on right now, and so on. By construction, nothing in any of these models carries a special flag labeled “me”, “my knowledge”, “my actions”, etc. Now we put a probability distribution over this vast space of models and initialize it to be uniform (or whatever). At each timestep of self-supervised learning, a controller propagates each model forward, inspects the next bit in the datastream, and adjusts the probability distribution over models according to how well each model predicted that new bit. After watching 100,000 years of YouTube videos and reading every document ever written, the controller outputs the single best world-model. Now we have a powerful world-model containing deep insights about how everything works, and we can use it for whatever purpose we like. Note that the “learning” process here is a dumb thing that just uses the transition rules of the world-models; it doesn’t involve setting up the world-models themselves to be capable of intelligent introspection. So it seems to me that this process ought to generate a self-unaware world-model.
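For what it’s worth, here is a toy numerical sketch of that update loop, with every detail (the candidate models, the bit sequence, the scoring rule) invented for illustration: each candidate “world-model” is just a rule for predicting the next bit, and the controller performs a Bayesian reweighting of the model space after each observed bit.

```python
def make_constant_model(p_one):
    """A model that always predicts the next bit is 1 with probability p_one."""
    return lambda history: p_one

def make_repeat_last_model(strength):
    """A model that expects the next bit to repeat the previous bit."""
    def predict(history):
        if not history:
            return 0.5
        return strength if history[-1] == 1 else 1.0 - strength
    return predict

# Enumerate a (tiny) space of candidate world-models by varying parameters.
# Nothing in any model refers to the learner itself.
models = [make_constant_model(p / 10) for p in range(1, 10)]
models += [make_repeat_last_model(s / 10) for s in range(6, 10)]

# Initialize a uniform probability distribution over the model space.
weights = [1.0 / len(models)] * len(models)

def observe(bit, history):
    """One timestep: score each model on the new bit, then renormalize."""
    global weights
    likelihoods = [m(history) if bit == 1 else 1.0 - m(history) for m in models]
    weights = [w * lk for w, lk in zip(weights, likelihoods)]
    total = sum(weights)
    weights = [w / total for w in weights]

# A short bit sequence standing in for the YouTube/text datastream.
history = []
for bit in [1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1]:
    observe(bit, history)
    history.append(bit)

# "Output the one best world-model": the highest-posterior candidate.
best = max(range(len(models)), key=lambda i: weights[i])
print("best model index:", best, "posterior:", round(weights[best], 3))
```

The point is just that the update step is a mechanical reweighting over pre-specified models; nothing in it hands any model a handle on the process that is scoring it.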