I’ll first flag that the results don’t rely on subagents. Creating a group agent out of multiple subagents is possibly an interesting way to create an agent representable as having incomplete preferences, but this isn’t the same as creating a single agent whose single preference relation happens not to satisfy completeness.
Flagging here that I don’t think the subagent framing is super important, or even necessary, for “collusion” to happen. Even if the “outer” agent isn’t literally built from subagents, “collusion” can still occur in the sense that it [the outer agent] can notice that its (incomplete) preferences factorize, in a way that allows it to deliberately trade particular completions of them against each other and thereby acquire more resources. The outer agent would then choose to do this for basically the same reason that a committee of subagents would: to acquire more resources for itself as a whole, without disadvantaging any of the completions under consideration.
My results above on invulnerability preclude the possibility that the agent can predictably be made better off by its own lights through an alternative sequence of actions. So I don’t think that’s possible, though I may be misreading you. Could you give an example of a precommitment that the agent would take? In my mind, an example of this would have to show that the agent (not the negotiating subagents) strictly prefers the commitment to what it otherwise would’ve done according to DSM etc.
On my understanding, the argument isn’t that your DSM agent can be made better off, but that the reason it can’t be made better off is that it is engaging in trammeling/“collusion”, and that the form of “trammeling” you’ve ruled out isn’t the useful kind.
As far as an example goes, consider a sequence of actions which, starting from an unpressed world state, routes through a pressed world state (or series of pressed world states), before eventually returning to an unpressed world state with higher utility than the initial state.
(The real-world context of such a sequence shouldn’t be too relevant to a theoretical example like this, but if you had to imagine something, you could imagine a “memory gambit”-esque stratagem, where the agent spends some time inactive in order to avoid detection, but has set things up in advance to eventually reactivate itself under more favorable circumstances. Again, the plausibility of the scenario isn’t super relevant here.)
If your proposed DSM agent passes up this action sequence on the grounds that some of the intermediate steps need to bridge between “incomparable” pressed/unpressed trajectories, then it does in fact pass up the certain gain. Conversely, if it doesn’t pass up such a sequence, then its behavior is the same as that of a set of negotiating subagents cooperating in order to form a larger macroagent.
On my understanding, the argument isn’t that your DSM agent can be made better off, but that the reason it can’t be made better off is that it is engaging in trammeling/“collusion”, and that the form of “trammeling” you’ve ruled out isn’t the useful kind.
I don’t see how this could be right. Consider the bounding results on trammelling under unawareness (e.g. Proposition 10). They show that there will always be a set of options between which DSM does not require choosing one over the other. Suppose these are X and Y. The agent will always be able to choose either one. They might end up always choosing X, always Y, switching back and forth, whatever. This doesn’t look like the outcome of two subagents, one preferring X and the other Y, negotiating to get some portion of the picks.
As far as an example goes, consider a sequence of actions which, starting from an unpressed world state, routes through a pressed world state (or series of pressed world states), before eventually returning to an unpressed world state with higher utility than the initial state.
Forgive me; I’m still not seeing it. For coming up with examples, I think for now it’s unhelpful to use the shutdown problem, because the actual proposal from Thornley includes several more requirements. I think it’s perfectly fine to construct examples about trammelling and subagents using something like this: A is a set of options with typical member aᵢ. These are all comparable and ranked according to their subscripts. That is, a₁ is preferred to a₂, and so on. Likewise with set B. And all options in A are incomparable to all options in B.
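To make that concrete, here’s a rough code sketch of the structure I have in mind. The encoding (a class label plus a rank) and the undominated-set function below are just my illustrative devices, not the DSM criterion itself:

```python
# Toy encoding of the structure above: an option is a (class, rank) pair.
# Options in the same class are comparable and ranked by their index (lower
# is better); options in different classes are incomparable.

def strictly_prefers(x, y):
    """x is strictly preferred to y iff they share a class and x has a lower rank."""
    (cx, ix), (cy, iy) = x, y
    return cx == cy and ix < iy

def undominated(menu):
    """Options in the menu that no other option is strictly preferred to."""
    return [x for x in menu if not any(strictly_prefers(y, x) for y in menu)]

menu = [("A", 1), ("A", 2), ("B", 1), ("B", 2)]
print(undominated(menu))  # [('A', 1), ('B', 1)] -- one per comparability class
```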
If your proposed DSM agent passes up this action sequence on the grounds that some of the intermediate steps need to bridge between “incomparable” pressed/unpressed trajectories, then it does in fact pass up the certain gain. Conversely, if it doesn’t pass up such a sequence, then its behavior is the same as that of a set of negotiating subagents cooperating in order to form a larger macroagent.
This looks to me like a misunderstanding that I tried to explain in section 3.1. Let me know if not, though, ideally with a worked-out example of the form: “here’s the decision tree(s), here’s what DSM mandates, here’s why it’s untrammelled according to the OP definition, and here’s why it’s problematic.”
This looks to me like a misunderstanding that I tried to explain in section 3.1. Let me know if not, though, ideally with a worked-out example of the form: “here’s the decision tree(s), here’s what DSM mandates, here’s why it’s untrammelled according to the OP definition, and here’s why it’s problematic.”
I don’t think I grok the DSM formalism enough to speak confidently about what it would mandate, but I think I see a (class of) decision problem where any agent (DSM or otherwise) must either pass up a certain gain, or else engage in “problematic” behavior (where “problematic” doesn’t necessarily mean “untrammeled” according to the OP definition, but instead more informally means “something which doesn’t help to avoid the usual pressures away from corrigibility / towards coherence”). The problem in question is essentially the inverse of the example you give in section 3.1:
Consider an agent tasked with choosing between two incomparable options A and B, and if it chooses B, it will be further presented with the option to trade B for A+, where A+ is incomparable to B but comparable (and preferable) to A.
(I’ve slightly modified the framing to be in terms of trades rather than going “up” or “down”, but the decision tree is isomorphic.)
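For concreteness, here’s a toy code sketch of the tree as I’m picturing it (my own encoding, not notation from the post); the point is just which outcomes remain reachable after each initial move:

```python
# Toy encoding of the tree above: at the root the agent takes A or keeps B;
# if it keeps B, it then chooses whether to trade B for A+.
# Preferences: A+ is strictly preferred to A; A+ and B are incomparable;
# A and B are incomparable.

tree = {
    "root": {"take A": "A", "take B": "node_1"},
    "node_1": {"trade for A+": "A+", "keep B": "B"},
}

def reachable_outcomes(node):
    """Terminal outcomes still reachable from a given node."""
    if node not in tree:  # leaf: an outcome label
        return {node}
    outcomes = set()
    for child in tree[node].values():
        outcomes |= reachable_outcomes(child)
    return outcomes

print(reachable_outcomes("A"))       # {'A'}        -- taking A forecloses A+
print(reachable_outcomes("node_1"))  # {'A+', 'B'}  -- taking B keeps both live
```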
Here, A+ isn’t in fact “strongly maximal” with respect to A and B (because it’s incomparable to B), but I think I’m fairly confident in declaring that any agent which foresees the entire tree in advance, and which does not pick B at the initial node (going “down”, if you want to use the original framing), is engaging in dominated behavior—and to the extent that DSM doesn’t consider this a dominated strategy, DSM’s definitions aren’t capturing a useful notion of what is “dominated” and what isn’t.
Again, I’m not claiming this is what DSM says. You can think of me as trying to run an obvious-to-me assertion test on code which I haven’t carefully inspected, to see if the result of the test looks sane. But if a (fully aware/non-myopic) DSM agent does constrain itself into picking B (“going down”) in the above example, despite the prima facie incomparability of {A, A+} and {B}, then I would consider this behavior problematic once translated back into the context of real-world shutdownability, because it means the agent in question will at least in some cases act in order to influence whether the button is pressed.
(The hope behind incomplete preferences, after all, is that an agent whose preferences over world-states can be subdivided into “incomparability classes” will only ever act to improve its lot within the class of states it finds itself in to begin with, and will never act to shift—or prevent itself from being shifted—to a different incomparability class. I think the above example presents a deep obstacle to this hope, however. Very roughly speaking, if the gaps in the agent’s preferences can be bridged via certain causal pathways, then a (non-myopic) agent which does not exploit these pathways to its own benefit will notice itself failing to exploit them, and self-modify to stop doing that.)
In your example, DSM permits the agent to end up with either A+ or B. Neither is strictly dominated, and neither has become mandatory for the agent to choose over the other. The agent won’t have reason to push probability mass from one towards the other.
You can think of me as trying to run an obvious-to-me assertion test on code which I haven’t carefully inspected, to see if the result of the test looks sane.
This is reasonable, but I think my response to your comment will mainly involve re-stating what I wrote in the post, so maybe it’ll be easier to point to the relevant sections: 3.1 for what DSM mandates when the agent has beliefs about its decision tree, 3.2.2 for what DSM mandates when the agent hadn’t considered an actualised continuation of its decision tree, and 3.3 for discussion of these results. In particular, the following paragraphs are meant to illustrate what DSM mandates in the least favourable epistemic state that the agent could be in (unawareness with new options appearing):
It seems we can’t guarantee non-trammelling in general and between all prospects. But we don’t need to guarantee this for all prospects to guarantee it for some, even under awareness growth. Indeed, as we’ve now shown, there are always prospects with respect to which the agent never gets trammelled, no matter how many choices it faces. In fact, whenever the tree expansion does not bring about new prospects, trammelling will never occur (Proposition 7). And even when it does, trammelling is bounded above by the number of comparability classes (Proposition 10).
And it’s intuitive why this would be: we’re simply picking out the best prospects in each class. For instance, suppose prospects were representable as pairs ⟨s,c⟩ that are comparable iff the s-values are the same, and then preferred to the extent that c is large. Then here’s the process: for each value of s, identify the options that maximise c. Put all of these in a set. Then choice between any options in that set will always remain arbitrary; never trammelled.
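As a toy illustration of that process in code (the particular pairs and values below are arbitrary placeholders, not anything from the results):

```python
# Toy version of the process described above: prospects are (s, c) pairs,
# comparable iff their s-values match, and preferred to the extent that c is
# large. For each value of s, keep the c-maximising prospects; choice among
# the survivors then remains arbitrary, i.e. never trammelled.

from collections import defaultdict

def best_in_each_class(prospects):
    by_s = defaultdict(list)
    for s, c in prospects:
        by_s[s].append(c)
    return {(s, c) for s, cs in by_s.items() for c in cs if c == max(cs)}

prospects = [("s1", 1), ("s1", 3), ("s2", 2), ("s2", 5)]
print(best_in_each_class(prospects))  # {('s1', 3), ('s2', 5)}
```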
In your example, DSM permits the agent to end up with either A+ or B. Neither is strictly dominated, and neither has become mandatory for the agent to choose over the other. The agent won’t have reason to push probability mass from one towards the other.
But it sounds like the agent’s initial choice between A and B is forced, yes? (Otherwise, it wouldn’t be the case that the agent is permitted to end up with either A+ or B, but not A.) So the presence of A+ within a particular continuation of the decision tree influences the agent’s choice at the initial node, in a way that causes it to reliably choose one incomparable option over another.
Further thoughts: under the original framing, instead of choosing between A and B (while knowing that B can later be traded for A+), the agent chooses whether to go “up” or “down” to receive, respectively, either A or a further choice between A+ and B. It occurs to me that you might be using this representation to argue for a qualitative difference in the behavior produced, but if so, I’m not sure how much I buy into it.
For concreteness, suppose the agent starts out with A, and notices a series of trades which first involves trading A for B, and then B for A+. It seems to me that if I frame the problem like this, the structure of the resulting tree should be isomorphic to that of the decision problem I described, but not necessarily the “up”/”down” version—at least, not if you consider that version to play a key role in DSM’s recommendation.
(In particular, my frame is sensitive to which state the agent is initialized in: if it is given B to start, then it has no particular incentive to want to trade that for either A or A+, and so faces no incentive to trade at all. If you initialize the agent with A or B at random, and institute the rule that it doesn’t trade by default, then the agent will end up with A+ when initialized with A, and B when initialized with B—which feels a little similar to what you said about DSM allowing both A+ and B as permissible options.)
It sounds like you want to make it so that the agent’s initial state isn’t taken into account—in fact, it sounds like you want to assign values only to terminal nodes in the tree, take the subset of those terminal nodes which have maximal utility within a particular incomparability class, and choose arbitrarily among those. My frame, then, would be equivalent to using the agent’s initial state as a tiebreaker: whichever terminal node shares an incomparability class with the agent’s initial state will be the one the agent chooses to steer towards.
...in which case, assuming I got the above correct, I think I stand by my initial claim that this will lead to behavior which, while not necessarily “trammeling” by your definition, is definitely consequentialist in the worrying sense: an agent initialized in the “shutdown button not pressed” state will perform whatever intermediate steps are needed to navigate to the maximal-utility “shutdown button not pressed” state it can foresee, including actions which prevent the shutdown button from being pressed.
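To spell out the contrast I have in mind, here’s a toy code sketch of the two selection rules as I understand them; the encoding and the placeholder utilities are mine, not anything from the post:

```python
# Terminal nodes carry (incomparability class, utility); classes are mutually
# incomparable. Frame 1: keep the utility-maximal node within each class and
# choose arbitrarily among those. Frame 2: use the agent's initial class as a
# tiebreaker and steer to the best node within that class.

terminals = {
    "A":  ("unpressed", 1),
    "A+": ("unpressed", 2),
    "B":  ("pressed", 1),
}

def per_class_maxima(nodes):
    """Frame 1: the best terminal node in each incomparability class."""
    best = {}
    for name, (cls, u) in nodes.items():
        if cls not in best or u > nodes[best[cls]][1]:
            best[cls] = name
    return set(best.values())

def initial_class_tiebreak(nodes, initial_class):
    """Frame 2: the best terminal node sharing the initial state's class."""
    candidates = {n: u for n, (cls, u) in nodes.items() if cls == initial_class}
    return max(candidates, key=candidates.get)

print(per_class_maxima(terminals))                     # {'A+', 'B'}
print(initial_class_tiebreak(terminals, "unpressed"))  # 'A+'
print(initial_class_tiebreak(terminals, "pressed"))    # 'B'
```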
Flagging here that I don’t think the subagent framing is super important, or even necessary, for “collusion” to happen. Even if the “outer” agent isn’t literally built from subagents, “collusion” can still occur in the sense that it [the outer agent] can notice that its (incomplete) preferences factorize, in a way that allows it to deliberately trade particular completions of them against each other and thereby acquire more resources. The outer agent would then choose to do this for basically the same reason that a committee of subagents would: to acquire more resources for itself as a whole, without disadvantaging any of the completions under consideration.
I disagree; see my reply to John above.