Ah, that makes more sense. There are several answers; the main one is that the internal/external division is not arbitrary.
First: at least for coherence-type theorems, they need to work for any choice of system which satisfies the basic type signature (i.e. the environment offers choices, the system “decides” between them, for some notion of decision). The theorem has to hold regardless of where we draw the box. On the other hand, you could argue that some theorems are more useful than others and therefore we should draw our boxes to use those theorems—even if it means fewer “systems” qualify. But then we can get into trouble if there’s a specific system we want to talk about which doesn’t qualify—e.g. a human.
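To make that "basic type signature" concrete, here's a minimal Python sketch (my own illustration, not anything taken from the theorems themselves): the environment offers some options, the system returns one of them, and that's all a coherence-type theorem gets to assume about where the box is drawn.

```python
from typing import Callable, Sequence, TypeVar

Option = TypeVar("Option")

# The bare type signature a coherence-type theorem works with (names are
# illustrative only): the environment offers options, the system returns one.
System = Callable[[Sequence[Option]], Option]

def qualifies(system: System, offered: Sequence[str]) -> bool:
    """A box counts as a 'system' so long as, offered options, it picks one of them."""
    return system(offered) in offered

# Example: even a trivial system that always takes the first offer qualifies.
first_offer: System = lambda options: options[0]
print(qualifies(first_offer, ["apples", "pears"]))  # True
```

The point of keeping the signature this thin is exactly the one above: the theorem has to hold for any way of drawing the box, not just for boxes we find convenient.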
Second: in this context, when we talk about “internal” variables, that’s not an arbitrary modelling choice—the “external” vs “internal” terminology is hiding a functionally-important difference. Specifically, the “external” variables are anything which the system chooses between, anything we could offer in a trade. It’s not about the limits of observation, it’s about the limits of trade or tradeoffs or choices. The distribution of wealth within a market is “internal” not because we can’t observe it (to a large extent we can), but because it’s not something that the market itself is capable of choosing, even in principle.
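Here's a toy sketch of that distinction (the `ToyMarket` class, its wealth-weighted rule, and all the names are my own illustrative assumptions, not a real market model): the external variables are the options handed to `choose`, while the wealth distribution is internal state the choice is computed from, never something on the menu.

```python
from dataclasses import dataclass
from typing import Dict, Sequence

@dataclass
class ToyMarket:
    # Internal: the wealth distribution and traders' valuations. These are the
    # machinery that produces the choice, not things the market can choose.
    wealth: Dict[str, float]
    valuations: Dict[str, Dict[str, float]]

    def choose(self, options: Sequence[str]) -> str:
        """Pick the external option with the highest wealth-weighted valuation."""
        def score(option: str) -> float:
            return sum(
                self.wealth[trader] * self.valuations[trader].get(option, 0.0)
                for trader in self.wealth
            )
        return max(options, key=score)

market = ToyMarket(
    wealth={"alice": 3.0, "bob": 1.0},
    valuations={"alice": {"apples": 1.0, "pears": 2.0},
                "bob": {"apples": 5.0, "pears": 1.0}},
)
# External options are whatever we offer; the wealth distribution stays internal.
print(market.choose(["apples", "pears"]))  # "apples"
```

We can observe `market.wealth` from outside just fine; what we cannot do is put it on the list of options without breaking the mechanism that does the choosing.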
Now, it may be that there are other things in the external world which the market can't make choices about as a practical matter, like the radius of the moon. But if it somehow became possible to change the radius of the moon, then there's no inherent reason why the market can't make a choice about that—as opposed to the internal wealth distribution, where any choice would completely break the market mechanism itself.
That leads into a third answer: think of the “internal” variables as gears, pieces causally involved in making the decision. The system as a whole can have preferences over the entire state of the external world, but if it has preferences about the gears which are used to make decisions… well, then we’re going to end up in a self-referential mess. Which is not to say that it wouldn’t be useful to think about such self-referential messes; it would be an interesting embedded agency problem.