For agents, the “large-scale property” of interest is maximizing utility over something “far away”—e.g. far in the future, for the examples in this post.
One consideration that coherence theorems often seem to lack:
It seems to me that often, optimizers establish a boundary and do most of their optimization within that boundary. E.g. animals have skin within which they maintain homeostasis, companies have offices and factories where they perform their work, states have borders, and people have homes.
These don’t entirely dodge coherence theorems—typically a substantial part of the point of these boundaries is to optimize some other thing in the future. But they do seem to set something up that the theorems don’t directly address.