“Systems that would adapt their policy if their actions would influence the world in a different way”
Does the teacup pass this test? It doesn’t seem like it.
We might want to model the system as “Heat bath of Air → teacup → Socrates’ tea”. The teacup “listens to” the temperature of the air outside it and, according to some equation, transmits some heat to the inside. The tea in turn listens to this transmitted heat, which determines its temperature.
You can consider the counterfactual world where the air is cold instead of hot. Or the counterfactual world where you replace “Socrates’ tea” with “Meletus’ tea”, or with a frog that will jump out of the cup, or whatever. But in all cases the teacup does not actually change its “policy”, which is just to transmit heat to the inside of the cup according to the laws of physics.
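To make the toy model concrete, here is a minimal sketch of the chain as structural equations in Python. The function names, the linear transfer rule, and all the numbers are illustrative assumptions of mine; the point is only that every counterfactual calls the same teacup function.

```python
# A minimal sketch of "Heat bath of Air -> teacup -> Socrates' tea" as
# structural equations. The linear transfer rule and all numbers are
# illustrative assumptions, not physics.

def teacup(air_temp: float) -> float:
    """The cup's fixed 'policy': transmit heat inward by one fixed rule."""
    TRANSFER = 0.8  # assumed heat-transfer coefficient
    return TRANSFER * air_temp

def tea(heat_in: float) -> float:
    """The tea listens to the transmitted heat and sets its temperature."""
    return heat_in  # the tea's temperature just tracks the incoming heat

def frog(heat_in: float) -> str:
    """Swap the tea for a frog; the cup's function is untouched."""
    return "jumps out" if heat_in > 30.0 else "sits still"

# Actual world and two counterfactuals: in every case the *same*
# teacup() is called -- its policy never adapts to what sits downstream
# or to how hot the air happens to be.
print(tea(teacup(80.0)))   # hot air
print(tea(teacup(5.0)))    # counterfactual: cold air
print(frog(teacup(80.0)))  # counterfactual: frog instead of tea
```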
To put it in the terminology of “Discovering Agents”, one can add mechanism variables M_a, M_c, M_t (for the air, the cup, and the tea) going into the object-level variables. But there are no arrows between these mechanism variables, so there’s no agent.
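Under my reading of the paper, a toy version of that check looks like the sketch below: represent each mechanism variable explicitly and ask whether any of them has an incoming edge from another. The dict encoding and the boolean test are mine; only the names M_a, M_c, M_t come from the text above.

```python
# A toy encoding of the "Discovering Agents" check, under my reading of
# the paper: add a mechanism variable for each object-level variable,
# then ask whether any mechanism has an incoming edge from another.

object_edges = {        # object-level graph: air -> cup -> tea
    "air": ["cup"],
    "cup": ["tea"],
    "tea": [],
}

mechanism_edges = {     # outgoing edges *between* mechanism variables
    "M_a": [],          # mechanism of the air (the weather, say)
    "M_c": [],          # mechanism of the cup: a fixed heat-transfer law
    "M_t": [],          # mechanism of the tea
}

# Roughly, the paper's criterion: there is an agent only if some
# mechanism adapts to other mechanisms, i.e. some M-node has an
# incoming inter-mechanism edge. Here none do.
has_agent = any(targets for targets in mechanism_edges.values())
print(has_agent)  # False -> no agent in the teacup system
```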
Of course, my model here is bad and wrong, physically speaking, even if it does capture a crude cause-effect intuition about the effect of air temperature on beverages. However, I’d be somewhat surprised if a more physically correct model introduced an agent into a system where there is none.
But in all cases the teacup does not actually change its “policy”, which is just to transmit heat to the inside of the cup according to the laws of physics.
Whether the policy changes depends entirely on how you characterize it. If the policy is “transmit heat according to physics”, the policy doesn’t change. If the policy is “get hotter”, it changes to “get colder” when the air is cold. It’s the same thing, described differently.
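A tiny sketch of that point, with made-up names and numbers: one fixed update law for the cup, and a “goal”-style redescription of the very same law that flips with the weather.

```python
# One fixed dynamical law for the cup, plus a redescription of the very
# same law in goal language. All names and numbers are illustrative.

def cup_dynamics(air_temp: float, cup_temp: float) -> float:
    """The unchanging law: move the cup's temperature toward the air's."""
    RATE = 0.1  # assumed relaxation rate
    return cup_temp + RATE * (air_temp - cup_temp)

def described_policy(air_temp: float, cup_temp: float) -> str:
    """The same law, redescribed: which way is the cup heading?"""
    return "get hotter" if air_temp > cup_temp else "get colder"

print(described_policy(80.0, 20.0))  # "get hotter"  (hot air)
print(described_policy(5.0, 20.0))   # "get colder"  (cold air)
# cup_dynamics is the same function in both worlds; only the
# description of what it is "doing" changed.
```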