"The code of the function world() is expected to include one or more calls to agent()"
Not necessarily (or not exactly). Where I used agent2() for a predictor, I could use agent3() for the agent as well. See this comment.
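For concreteness, here is a minimal Python sketch of that kind of setup. The Newcomb-style payoffs are my own illustrative choice, and agent3() just stands in for whatever program actually acts inside the world; only the calling structure matters here.

```python
def agent3():
    # The program that acts inside the world; the agent identifies its own
    # decision with the output of this program.
    return 1  # 1 = one-box, 2 = two-box (illustrative encoding)

def agent2():
    # The predictor, modeled as another program; here it simply recomputes
    # the same decision, standing in for a very accurate prediction.
    return agent3()

def world():
    # world() never literally calls agent(); it calls agent2() and agent3(),
    # and agent() controls the outcome only through the logical fact that
    # agent() and agent3() compute the same thing.
    prediction = agent2()
    action = agent3()
    if action == 1:
        return 1_000_000 if prediction == 1 else 0
    return 1_001_000 if prediction == 1 else 1_000
```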
Extensions are possible, but not relevant to the main point of the model. It doesn’t seem natural to model different players with the same agent-program. Observations are not part of the model, but you could consider the agent as producing a program with parameters (instead of an unstructured constant) as a result of its decision.
The other use for inputs is information passed to agents through sensory observation. The extension is extremely natural.
“random numbers”: random numbers in nature are really pseudorandom numbers—numbers we lack the computational power to predict. In a full model of the universe, random numbers are not necessary—but different possible worlds with different laws of physics and some probability mass assigned to each, are.
Any randomness in the agent would really be pseudorandomness.
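A toy way to cash that out: instead of world() calling a random number generator, there is one deterministic world program per possible world, and outcomes are weighted by a prior. The two worlds, their weights, and the payoffs below are invented for illustration, and world() is given a parameter only to keep the sketch short; one could equally write separate programs world_A() and world_B().

```python
def agent():
    return 1  # placeholder decision

def world(physics):
    # A deterministic world program for each possible set of physical laws;
    # 'physics' selects which laws apply. No randomness is used anywhere.
    choice = agent()
    if physics == "world_A":
        return 10 if choice == 1 else 0
    return 0 if choice == 1 else 5

# Probability mass assigned to each possible world replaces in-world randomness.
PRIOR = {"world_A": 0.5, "world_B": 0.5}

def expected_value():
    return sum(p * world(w) for w, p in PRIOR.items())
```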
"The other use for inputs is information passed to agents through sensory observation."
I don’t believe you can obtain proper information about the world through observation (unless you are a human and don’t know what you want). Preference is unchanging and is defined subjectively, so formal agents can’t learn new things about it (apart from resolution of logical uncertainty). Observations play a role in how plans play out (plans are prepared to be conditional on observations), or in priority for preparing said plans as observations come in, but not as criteria for making decisions.
On the other hand, observations could probably be naturally seen as constructing new agents from existing ones (without changing their preference/concept of environment). And some notion of observation needs to be introduced at some point, just as a notion of computational time.
I was using “information” loosely. Define it as “that thing you get from observation” if you want.
The point is, you will make different choices if you get different sensory experiences, because the sensory experiences imply something about how you control the world program. You’re right that this could be modeled with different agent functions instead of parameters. Interesting—that seems rather deeply meaningful.
"The point is, you will make different choices if you get different sensory experiences"
You can see it as unconditionally making a single conditional choice. The choice is itself a program with parameters, and goes different ways depending on observation, but is made without regard for observation. As an option, the choice is a parametrized constructor for new agents, which upon being constructed will make further choices, again without parameters.
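A rough sketch of the two readings just described; the observation value, the payoffs, and the helper make_successor() are my own illustrative choices, not part of the model.

```python
def agent():
    # The single, observation-independent choice: a conditional plan.
    def plan(observation):
        return 1 if observation == "signal" else 2
    return plan

def make_successor(observation):
    # The same plan read as a parametrized constructor of a new, parameterless
    # agent: preference and the concept of the environment are untouched; only
    # the remaining choice is narrowed by the observation.
    plan = agent()
    def successor():
        return plan(observation)
    return successor

def world():
    observation = "signal"  # produced by the rest of world()'s code
    action = make_successor(observation)()
    return 100 if action == 1 else 1
```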
There are several different possible formulations here. It seems like they will lead to the same results. The best one is, I suppose, the most elegant one.
Not sure which is.
So it sounds like you are saying that the agent() program only represents the decision-theory portion of the agent. The Bayesian cognitive portion of the agent and the part of the agent that prefers some things over others are both modeled in world(). Communication with other agents, logic, all these features of agency are in world(), not in agent().
May I suggest that “agent()” has been misnamed? Shouldn’t it be called something more like choice() or decision()? And shouldn’t “world()” be named “expected-value-of-decision-by-agent()”, or something like that?
Note that world() is part of agent(). Certainly, world() will become preference in the next post (and will no longer necessarily be a program, while agent() must remain a program), but historically this part was always the "world program", and preference replaces (subsumes) it, which is not an obvious step.
I want to see it!
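One way to picture the remark above that world() is part of agent(), as a sketch only: the agent's own code consults world() to compare its candidate outputs. Parameterizing world() by the agent's output and the payoff table are shortcuts of mine to keep the example runnable, avoiding the self-reference that a closed world() would require (presumably handled by reasoning about world()'s output rather than by rerunning it).

```python
def world(agent_output):
    # The agent's model of the world, with the agent's own output treated as a
    # parameter so that each assumption "agent() returns a" can be evaluated.
    return {1: 10, 2: 3}.get(agent_output, 0)

def agent():
    # world() appears inside agent(): the agent compares what the world program
    # would compute under each candidate output and returns the best one.
    candidates = [1, 2]
    return max(candidates, key=world)
```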