I certainly agree that even amongst programs that don’t have the structure above there are still ones that are more coherent or less coherent. I’m mostly saying that this seems not very related to whether such a system takes over the world and so I don’t think about it that much.
I think it can make sense to talk about an agent having coherent preferences over its internal state, and whether that’s a useful abstraction depends on why you’re analyzing that agent and more concrete details of the setup.
For instance, say you have an oracle AI that takes compute steps to minimize incoherence in its world models. It doesn’t evaluate or compare steps before taking them, at least not as understood in the “agent” paradigm. But it is intelligent in some sense, and it somehow takes the correct steps a lot of the time.
If you connect this AI to a secondary memory, it seems reasonable to predict that it will probably start writing to the new memory too, so as to build larger but still coherent models.
I don’t buy this. You could finetune a language model today to do chain-of-thought reasoning, which sounds like “an oracle AI that takes compute steps to minimize incoherence in world models”. I predict that if you then add an additional head that can do read/write to an external memory, or you allow it to output sentences like “Write [X] to memory slot [Y]” or “Read from memory slot [Y]” that we support via an external database, it does not then start using that memory in some particularly useful way.
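For concreteness, here is roughly what that external-memory setup might look like: a thin wrapper that scans the model’s output for write/read commands and routes them to a key-value store. This is a minimal sketch; the command format, the `generate` call, and the dict-backed store are invented for illustration, not taken from any particular system.

```python
# Minimal sketch of the "external memory via command sentences" setup.
# `generate` is a hypothetical stand-in for whatever call produces the
# model's next output string; the dict is a stand-in for the database.
import re

memory = {}  # slot name -> stored text

WRITE_RE = re.compile(r"Write (.+) to memory slot (\w+)")
READ_RE = re.compile(r"Read from memory slot (\w+)")

def handle_model_output(output: str):
    """Execute any memory command in the output; return read results (or None)."""
    text = output.strip()
    if m := WRITE_RE.fullmatch(text):
        value, slot = m.group(1), m.group(2)
        memory[slot] = value
        return None
    if m := READ_RE.fullmatch(text):
        return memory.get(m.group(1), "")  # would be fed back into the model's context
    return None  # ordinary text: no memory operation

# Example loop (generate() is hypothetical):
# context = prompt
# for _ in range(max_steps):
#     output = generate(context)
#     result = handle_model_output(output)
#     if result is not None:
#         context += f"\n[memory: {result}]"
#     context += "\n" + output
```

The prediction above is that, without training specifically targeted at this interface, the model’s outputs rarely match these command patterns in a way that does useful work.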
You could say that this is because current language models are too dumb, but I don’t particularly see why this will necessarily change in the future (besides that we will probably specifically train the models to use external memory). Overall I’m at “sure maybe a sufficiently intelligent oracle AI would do this but it seems like whether it happens depends on details of how the AI works”.