In an idealized agent like AIXI, the world-modeling procedure, the part that produces hypotheses and assigns probabilities, doesn't depend on its utility function. And it can't be motivated, because motivation only works once you have some link from actions to consequences, and that link requires a world model.
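To make the separation concrete, here is a rough sketch of the standard AIXI expectimax expression (notation loosely following Hutter; the horizon $m$ and the percept encoding are simplified):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The world-model part is the rightmost sum: each environment program $q$ is weighted by $2^{-\ell(q)}$, i.e. by its length alone, with no reference to rewards. Rewards enter only through the bracketed term $r_k + \cdots + r_m$, so you could swap out the reward signal without changing how hypotheses are generated or weighted.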
AIXI doesn’t exist in a vacuum. Even if AIXI itself can’t be said to have self-generated motivations, it is built in a way that reflects the motivations of its creators, so it is still infused with motivations. Choices had to be made to build AIXI one way rather than another (or not at all). The generators of those choices are where the motivations behind what AIXI does lie.
If the world model is seriously broken, the agent is just non-functional. How the world model works isn’t a choice for the agent; it’s a choice for whatever made the agent.
Yes, although some agents seem to have some amount of self-reflective ability to change their motivations.