And that means whatever we claim to be true is ultimately motivated by whatever we care about, because that is what led us to choose the definition of truth we use.
People who speak different languages don’t use the symbol “truth”. To what extent are people who use different definitions of “truth” just choosing to define a word in different ways and to talk about different things?
In an idealized agent like AIXI, the world-modeling procedure, the part that produces hypotheses and assigns probabilities, doesn’t depend on its utility function. And it can’t be motivated, because motivation only works once you have some link from actions to consequences, and that requires a world model.
If the world model is seriously broken, the agent is just non-functional. How the world model works isn’t a choice for the agent; it’s a choice for whatever made the agent.
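To make the separation concrete, here is a minimal toy sketch in Python (my own illustration, not AIXI’s actual definition; the names and the simple Bayesian update are assumptions for the example). The belief-update step never sees the utility function; only action selection does, and it needs the world model to link actions to consequences.

```python
# Toy illustration only (not AIXI): belief updating is independent of the
# utility function; action selection uses both the posterior and the utility.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Hypothesis:
    name: str
    # P(observation | action) under this candidate model of the world
    predict: Callable[[str, str], float]


def update_beliefs(prior: Dict[str, float], hypotheses: List[Hypothesis],
                   action: str, observation: str) -> Dict[str, float]:
    """Bayesian update over hypotheses. No utility function appears here."""
    unnormalised = {h.name: prior[h.name] * h.predict(action, observation)
                    for h in hypotheses}
    total = sum(unnormalised.values()) or 1.0
    return {name: weight / total for name, weight in unnormalised.items()}


def choose_action(posterior: Dict[str, float], hypotheses: List[Hypothesis],
                  actions: List[str], observations: List[str],
                  utility: Callable[[str], float]) -> str:
    """Pick the action with the highest expected utility.

    This is the only place the utility function enters, and it relies on the
    world model (the hypotheses) to connect actions to consequences.
    """
    def expected_utility(action: str) -> float:
        return sum(posterior[h.name] * h.predict(action, obs) * utility(obs)
                   for h in hypotheses for obs in observations)

    return max(actions, key=expected_utility)
```

The point of the sketch is just that the signature of `update_beliefs` has no slot for a utility function at all, which is the sense in which the world model “can’t be motivated”.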
> In an idealized agent like AIXI, the world-modeling procedure, the part that produces hypotheses and assigns probabilities, doesn’t depend on its utility function. And it can’t be motivated, because motivation only works once you have some link from actions to consequences, and that requires a world model.
AIXI doesn’t exist in a vacuum. Even if AIXI itself can’t be said to have self-generated motivations, it is built in a way that reflects the motivations of its creators, so it is still infused with motivations. Choices had to be made to build AIXI one way rather than another (or not at all). The generators of those choices are where the motivations behind what AIXI does lie.
> If the world model is seriously broken, the agent is just non-functional. How the world model works isn’t a choice for the agent; it’s a choice for whatever made the agent.
Yes, although some agents seem to have some amount of self-reflective ability to change their motivations.