What about an agent that cares only about some causal nodes that, contingently, are very close to itself? I guess an answer could be that, since those nodes are related to the rest of the world, in almost all cases the agent will end up needing to optimize some distant nodes (even if only as instrumental goals) in order to keep optimizing the close ones.
Nope, my answer is that, insofar as the agent doesn’t care about nodes far away (including instrumentally), I probably just don’t care about that agent. It will hang out in its tiny corner of the universe, and mostly not be relevant to me.