Making maps is practical even when they are not as precise as the whole territory. The point is that path dependence happens in some space of possibilities, and it’s possible to make maps of that whole space and use them to navigate the possibilities jointly, rather than getting caught in any one of them. This doesn’t need to involve global coherence across all possibilities (of moral reflection, in this case), just as optimization of the world doesn’t need to involve steamrolling it into repetition of some perfect pattern. But some parts will have similarities and shared issues with other parts, and can inform each other in their development.
A more practical form of updatelessness is consulting an external map of possibilities that gives advice on how to act in the current situation and explains how following that advice influences the possibilities (in the further development that results from following it). That is, you don’t need to “be updateless” yourself. The essential observation is that a single computation can exist in many possible situations, and because it is the same thing, its evaluation gives the same results in all of those situations, coordinating what happens in them (without any causal influence of some physical thing). This computation doesn’t need to be the whole agent: a calculator on Mars computes the same results as a calculator (of a different make) on Earth, so both, by implementing the same computation, coordinate what happens on Mars with what happens on Earth without needing to physically communicate. This becomes a matter of decision theory when the coordinating computation is itself an agent. But it doesn’t need to be the same agent as the user of this decision theory as a whole; it doesn’t need to be anything like a human, and can be much smaller and more legible, more like a calculator.
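The calculator point can be made concrete with a minimal sketch (the two “makes” below are hypothetical illustrations, not anything from the original): two independently written implementations of the same abstract computation share no causal channel, yet their outputs necessarily agree, so anything that depends on their outputs is thereby coordinated.

```python
# Hypothetical sketch: two "calculators" of different makes implementing
# the same abstract computation (multiplication of nonnegative integers).
# They never communicate, yet their results coincide on every input.

def mars_calculator(a: int, b: int) -> int:
    # One make: multiplication as repeated addition (assumes b >= 0).
    total = 0
    for _ in range(b):
        total += a
    return total

def earth_calculator(a: int, b: int) -> int:
    # A different make: built-in multiplication.
    return a * b

# No physical signal passes between Mars and Earth, but because both
# devices implement the same computation, what happens on Mars (given
# its result) is coordinated with what happens on Earth (given its).
print(mars_calculator(6, 7))   # 42
print(earth_calculator(6, 7))  # 42
```

The decision-theoretic case is the same picture with the shared computation being an agent (or a small, legible advisory procedure) rather than an arithmetic function.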