I recommend thinking about the market example. The difficulty for markets is not that the preferences are “conditioned on the environment”; exactly the opposite. The problem is that the preferences are conditional on internal state; they can’t be captured only by looking at the external environment.
For examples like pepperoni vs mushroom pizza, where we’re just thinking about partial preferences directly, it’s reasonable to say that the problem is partial specification. Presumably the system does something when it has to choose between pepperoni and mushroom—see Donald Hobson’s comment for more on that. But path dependence is a different beast. Once we start thinking about internal state and path dependence, partial preferences are no longer just due to partial specification—they’re due to the system having internal variables which it doesn’t “want” to change.
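To make the path-dependence point concrete, here is a minimal toy sketch (my own illustration; the traders, numbers, and pricing rule are all made-up assumptions, not anything established above). The "market" chooses between external offers, but which offers it accepts depends on its internal wealth distribution, and that distribution depends on the order of past trades:

```python
# Toy model: two traders hold cash and apples. The "market" faces external offers
# (buy or sell one apple at a quoted price) and accepts an offer iff some trader
# is willing to take it. Whether it accepts depends on the internal wealth
# distribution, and that distribution shifts with the history of past trades;
# hence path dependence. All numbers and the pricing rule are invented for illustration.
from dataclasses import dataclass

@dataclass
class Trader:
    cash: float
    apples: float

    def apple_value(self) -> float:
        # Crude stand-in for a marginal rate of substitution: the more apples a
        # trader holds relative to cash, the less cash they demand per apple.
        return self.cash / (self.apples + 1.0)

class Market:
    def __init__(self, traders):
        self.traders = traders  # internal state: who holds what

    def sell_apple_to_env(self, price: float) -> bool:
        """Environment offers to buy one apple at `price`; the cheapest seller decides."""
        seller = min(self.traders, key=Trader.apple_value)
        if seller.apples >= 1 and seller.apple_value() < price:
            seller.apples -= 1
            seller.cash += price
            return True
        return False

    def buy_apple_from_env(self, price: float) -> bool:
        """Environment offers to sell one apple at `price`; the keenest buyer decides."""
        buyer = max(self.traders, key=Trader.apple_value)
        if buyer.cash >= price and buyer.apple_value() > price:
            buyer.cash -= price
            buyer.apples += 1
            return True
        return False

def run(offers):
    market = Market([Trader(cash=10.0, apples=1.0), Trader(cash=2.0, apples=8.0)])
    accepted = [getattr(market, kind)(price) for kind, price in offers]
    return accepted, [(t.cash, t.apples) for t in market.traders]

# The same multiset of external offers, presented in two different orders:
order_a = [("buy_apple_from_env", 3.0), ("sell_apple_to_env", 1.0), ("buy_apple_from_env", 2.0)]
order_b = [("sell_apple_to_env", 1.0), ("buy_apple_from_env", 2.0), ("buy_apple_from_env", 3.0)]

print(run(order_a))  # all three offers accepted
print(run(order_b))  # last offer rejected: different choices, different end state
```

The same multiset of offers gets a different accept/reject pattern depending on the order, because the intermediate trades leave the traders' holdings (the internal state) in different places.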
I think I wasn’t clear enough about what I meant. I mean to question specifically why excluding such so-called “internal” state is the right choice. Yes, it’s difficult and inconvenient to work with what we cannot externally observe, but I think much of the problem is that our models leave this part of the world out because it can’t (yet) be easily observed with sufficient fidelity. The division between internal and external is somewhat arbitrary: it sits at the limit of our observational powers, not at some natural boundary of the system that exists independently of our knowledge of it. So I question whether it makes sense to let that limit determine the model we use, rather than stepping back and finding a way to make the model larger, so that the epistemological limits which create partial preferences fall out as a consequence instead of being ontologically basic to the model.
Ah, that makes more sense. There are several answers; the main one is that the internal/external division is not arbitrary.
First: at least for coherence-type theorems, they need to work for any choice of system which satisfies the basic type signature (i.e. the environment offers choices, the system “decides” between them, for some notion of decision). The theorem has to hold regardless of where we draw the box. On the other hand, you could argue that some theorems are more useful than others and therefore we should draw our boxes to use those theorems—even if it means fewer “systems” qualify. But then we can get into trouble if there’s a specific system we want to talk about which doesn’t qualify—e.g. a human.
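For concreteness, here is one way to read that “basic type signature” as a choice function (a sketch in my own rendering, not a formal statement from anywhere above):

```python
# A minimal rendering of the type signature: the environment offers a menu of
# options, and the system returns one element of that menu. Coherence-type
# theorems quantify over anything with this shape, wherever the box is drawn.
from typing import Callable, FrozenSet, TypeVar

Option = TypeVar("Option")
ChoiceFunction = Callable[[FrozenSet[Option]], Option]  # offered menu -> chosen option

# Illustrative system: picks whichever offered option maximizes some score.
def scored_chooser(score) -> ChoiceFunction:
    def choose(menu):
        return max(menu, key=score)
    return choose

pizza_chooser = scored_chooser({"pepperoni": 1.0, "mushroom": 0.5}.get)
print(pizza_chooser(frozenset({"pepperoni", "mushroom"})))  # -> pepperoni
```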
Second: in this context, when we talk about “internal” variables, that’s not an arbitrary modelling choice—the “external” vs “internal” terminology is hiding a functionally-important difference. Specifically, the “external” variables are anything which the system chooses between, anything we could offer in a trade. It’s not about the limits of observation, it’s about the limits of trade or tradeoffs or choices. The distribution of wealth within a market is “internal” not because we can’t observe it (to a large extent we can), but because it’s not something that the market itself is capable of choosing, even in principle.
Now, it may be that there are other things in the external world which the market can’t make choices about as a practical matter, like the radius of the moon. But if it somehow became possible to change the radius of the moon, there’s no inherent reason the market couldn’t make a choice about that, as opposed to the internal wealth distribution, where any such choice would completely break the market mechanism itself.
That leads into a third answer: think of the “internal” variables as gears, pieces causally involved in making the decision. The system as a whole can have preferences over the entire state of the external world, but if it has preferences about the gears which are used to make decisions… well, then we’re going to end up in a self-referential mess. Which is not to say that it wouldn’t be useful to think about such self-referential messes; it would be an interesting embedded agency problem.