By extension, however, in case this corollary was lost to inferential distance:
For A, “What should A do?” may include making moral evaluations of B’s possible actions within A’s model of the world and attempting to influence them, so that A-actions which affect B’s actions can become very important.
Thus, by instrumental utility, A should often build a model of B in order to influence B’s actions on the world as much as possible, since exercising this influence is one of the actions available to A that bears on A’s own moral responsibility towards the world.
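If it helps to make that structure concrete, here is a minimal toy sketch in Python. The action names, B’s response model, and the payoffs are all hypothetical illustrations, not anything from the discussion itself; the point is only that A’s choice is scored through A’s model of B, so A’s influence on B enters A’s own evaluation.

```python
from typing import Dict, Iterable

# A's model of B: a prediction of which action B takes in response to
# each action A might take. This stands in for "making a model of B".
def b_model(a_action: str) -> str:
    responses: Dict[str, str] = {
        "persuade": "cooperate",  # hypothetical: persuasion shifts B
        "ignore": "defect",       # hypothetical: inaction leaves B as-is
    }
    return responses[a_action]

# A's moral evaluation of a joint outcome (hypothetical payoffs).
def utility(a_action: str, b_action: str) -> float:
    payoffs: Dict[tuple, float] = {
        ("persuade", "cooperate"): 2.0,
        ("persuade", "defect"): 0.5,
        ("ignore", "cooperate"): 1.0,
        ("ignore", "defect"): 0.0,
    }
    return payoffs[(a_action, b_action)]

# "What should A do?" now ranges over A's influence on B: each candidate
# A-action is scored by the outcome it induces through A's model of B.
def choose(a_actions: Iterable[str]) -> str:
    return max(a_actions, key=lambda a: utility(a, b_model(a)))

print(choose(["persuade", "ignore"]))  # -> "persuade"
```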
Indeed. I would consider it a given that you should model the objects in your world if you want to predict and influence it.