This is a tricky topic to think about because it’s not obvious how trammelling could be a worry for Thornley’s Incomplete Preference Proposal. I think the most important thing to clarify is why we care about ex-ante permissibility. I’ll try to describe that first (this should help with my responses to downstream concerns).
Big picture
Getting terminology out of the way: words like “permissibility” and “mandatory” are shorthand for rankings of prospects. A prospect is permissible iff it’s in a choice set, e.g. by satisfying DSM. It’s mandatory iff it’s the sole element of a choice set.
To see why ex-ante permissibility matters, note that it’s essentially a test of which prospects the agent is either indifferent between or has a preferential gap between (and which aren’t ranked below anything else). When you can improve a permissible prospect along some dimension and yet retain the same set of permissible prospects, for example, you necessarily have a preferential gap between those remaining prospects. In short, ex-ante permissibility tells you which prospects the agent doesn’t mind picking between.
The part of the Incomplete Preference Proposal that carries much of the weight is the Timestep Near-Dominance (TND) principle for choice under uncertainty. One thing it does, roughly, is require that the agent does not mind shifting probability mass between trajectories in which the shutdown time differs. And this is where incompleteness comes in. You need preferential gaps between trajectories that differ in shutdown time for this to hold in general. If the agent had complete preferences over trajectories, it would have strict preferences between at least some trajectories that differ in shutdown time, giving it reason to shift probability mass by manipulating the button.
Why TND helps get you shutdownability is described in Thornley’s proposal, so I’ll refer to his description and take that as a given here. So, roughly, we’re using TND to get shutdownability, and we’re using incompleteness to get TND. The reason incompleteness helps is that we want to maintain indifference to shifting probability mass between certain trajectories. And that is why we care about ex-ante permissibility. We need the agent, when contemplating manipulating the button, not to want to shift probability mass in that direction. That’ll help give us TND. The rest of Thornley’s proposal includes further conditions on the agent such that it will in fact, ex-post, not manipulate the button. But the reason for the focus on ex-ante permissibility here is TND.
Miscellany
For purposes of e.g. the shutdown problem, or corrigibility more generally, I don’t think I care about the difference between “mandatory” vs “actually chosen”?
The description above should help clear up why we care about multiple options being permissible and none mandatory: to help satisfy TND. What’s “actually chosen” in my framework doesn’t neatly connect to the Thornley proposal since he adds extra scaffolding to the agent to determine how it should act. But that’s a separate issue.
The rough mental model I have of DSM is: at time zero, the agent somehow picks between a bunch of different candidate plans (all of which are “permissible”, whatever that means), and from then on it will behave-as-though it has complete preferences consistent with that plan. ... it sounds like the proposal in the post just frontloads all the trammelling—i.e. it happens immediately at timestep zero.
The notion of trammelling I’m using refers to the set of permissible options shrinking as a result of repeated choice. And I argued that there’s no trammelling under certainty or uncertainty, and that trammelling under unawareness is bounded. Here’s why I don’t think you can see it as the agent behaving as if its preferences were complete.
Consider the case of static choice. It’s meaningful to say that an agent has incomplete preferences. (I don’t think you disagree with that, but I’ll give an example for the sake of completeness.) Suppose the agent has preferential gaps between all different-letter prospects, where A+ is a strict improvement on A. From {A,A+,B} the agent will pick either A+ or B. Suppose it picks B. That doesn’t imply, say, that the agent can be thought of as having a strict preference for B over A+. After all, if you offered it {A,A+,B} once again, it might just pick A+, a contradiction. And you can set up something similar with transitivity to get a contradiction from inferring indifference between A+ and B.
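As a minimal sketch of this (the option names and the preference relation are illustrative), a choice correspondence for incomplete preferences keeps every option that nothing available strictly dominates:

```python
# Strict preferences: A+ is preferred to A; B sits in a preferential
# gap with both A and A+ (no pair involving B is ranked).
strictly_prefers = {("A+", "A")}

def choice_set(options):
    """Return the options that no available option strictly dominates."""
    return {x for x in options
            if not any((y, x) in strictly_prefers for y in options)}

print(choice_set({"A", "A+", "B"}))  # {'A+', 'B'} (set order may vary)
```

Both A+ and B survive, so picking B on one occasion licenses no inference that B is ranked above A+.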
Onto dynamic choice. As you write, it’s reasonable to think of various dynamic choice principles as immediately, statically, choosing a trajectory at timestep zero. Suppose we do that. Then by the argument just above, it’s still not appropriate to model the agent as having complete preferences at the time of choosing. We’re not frontloading any trammelling; the set of ex-ante permissible prospects hasn’t changed. And that’s what we care about for TND.
I can kinda vaguely pattern-match the setup in this post to the problem: I want to have one “permissible” choice which involves the shutdown button not being pressed, and another “permissible” choice which involves the button being pressed, and I want these two choices to be incomparable to the agent. Now (my mental model of) the DSM rule says: when the agent is turned on, it somehow chooses between (two plans leading to) those two options, and from then on out acts as though it has complete preferences consistent with the choice—i.e. it either (follows a plan which) makes sure the button is pressed, or (follows a plan which) makes sure the button is not pressed, and actively prevents operators from changing it. Which sounds like not-at-all what I wanted for the shutdown problem!
Agreed! The ex-ante permissibility of various options is not sufficient for shutdownability. The rest of Thornley’s proposal outlines how the agent has to pick (lotteries over) trajectories, which involves more than TND.
So, roughly, we’re using TND to get shutdownability, and we’re using incompleteness to get TND. The reason incompleteness helps is that we want to maintain indifference to shifting probability mass between certain trajectories. And that is why we care about ex-ante permissibility.
I’m on board with the first two sentences there. And then suddenly you jump to “and that’s why we care about ex-ante permissibility”. Why does wanting to maintain indifference to shifting probability mass between (some) trajectories, imply that we care about ex-ante permissibility?
I don’t think I’ve fully grokked the end-to-end story yet, but based on my current less-than-perfect understanding… we can think of Thornley’s construction as a bunch of subagents indexed by t, each of which cares only about worlds where the shutdown button is pressed at time t. Then the incomplete preferences can be ~viewed as the pareto preference ordering for those agents (i.e. pareto improvements are preferred). Using something like the DSM rule to handle the incompleteness, at time zero the system-of-subagents will choose a lottery over trajectories, where the lottery is randomized by when-the-button-is-pressed (and maybe randomized by other stuff too, but that’s the main thing of interest). But then that lottery over trajectories is locked in, and the system will behave from then on out as though its distribution over when-the-button-is-pressed is locked in? And it will act as though it has complete preferences over trajectory-lotteries from then on out, which is presumably not what we want? I’m not yet able to visualize exactly what the system does past that initial lock-in, so I’m not sure.
Why does wanting to maintain indifference to shifting probability mass between (some) trajectories, imply that we care about ex-ante permissibility?
The ex-ante permissible trajectories are the trajectories that the agent lacks any strict preference between. Suppose the permissible trajectories are {A,B,C}. Then, from the agent’s perspective, A isn’t better than B, B isn’t better than A, and so on. The agent considers them all equally choiceworthy. So, the agent doesn’t mind picking any one of them over any other, nor, therefore, switching from one lottery over them to another lottery with a different distribution. The agent doesn’t care whether it gets A versus B, versus an even chance of A or B, versus a one-third chance of A, B, or C.[1]
Suppose we didn’t have multiple permissible options ex-ante. For example, if only A was permissible, then the agent would dislike shifting probability mass away from A and towards B or C—because B and C aren’t among the best options.[2] So that’s why we want multiple ex-ante permissible trajectories: it’s the only way to maintain indifference to shifting probability mass between (those) trajectories.
[I’ll respond to the stuff in your second paragraph under your longer comment.]
The analogous case with complete preferences is clearer: if there are multiple permissible options, the agent must be indifferent between them all (or else the agent would be fine picking a strictly dominated option). So if n options x_1, …, x_n are permissible, then u(x_i) = u(x_j) for all i, j ∈ {1, …, n}. Assuming expected utility theory, we’ll then of course have ∑_{i=1}^n u(x_i)p(x_i) = ∑_{i=1}^n u(x_i)p′(x_i) for any probability functions p and p′. This means the agent is indifferent to shifting probability mass between the permissible options.
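A quick numerical check of that equality (the utilities and probability distributions below are illustrative; exact fractions avoid floating-point noise):

```python
from fractions import Fraction as F

u = {"A": 1, "B": 1, "C": 1}                      # equal utilities across permissible options
p1 = {"A": F(1, 2), "B": F(3, 10), "C": F(1, 5)}  # one probability distribution
p2 = {"A": F(1, 3), "B": F(1, 3), "C": F(1, 3)}   # mass shifted to another

def expected_utility(probs):
    return sum(u[x] * probs[x] for x in u)

print(expected_utility(p1) == expected_utility(p2))  # True: shifting mass changes nothing
```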
Ok, I’ve thought through it a little more, I think I can now articulate some confusions.
On John’s-simplified-model-of-Thornley’s-proposal, we have complete preference orderings over trajectories-in-which-the-button-isn’t-pressed and trajectories-in-which-the-button-is-pressed, separately, but no preference between any button-pressed and button-not-pressed trajectory pair. Represented as subagents, those incomplete preferences require two subagents:
One subagent always prefers button pressed to unpressed, is indifferent between unpressed trajectories, and has the original complete order on pressed trajectories.
The other subagent always prefers button unpressed to pressed, is indifferent between pressed trajectories, and has the original complete order on unpressed trajectories.
In picture form (so far we’ve only covered the blue):
Now there’s a weird-to-me part. Normally I’d say that, taking these incomplete preferences at face value, the agent looks for opportunities to pareto-improve the outcome in both pressed and unpressed worlds. But you and Thornley want to interpret “no preference” as “just choose arbitrarily/randomly” rather than “don’t trade either for the other”, so… this agent just chooses arbitrarily/randomly between the best-available pressed-option and the best-available unpressed-option? But that would imply that the agent is choosing (albeit arbitrarily/randomly) between button-pressed and button-unpressed, which is not what we want, so presumably you’re imagining something else?
I’m going to go ahead with my usual mental model for now—i.e. “no preference” means “don’t trade either for the other”, so our incomplete preference system is aiming for pareto improvements. But possibly this diverges so much from what you’re picturing that the below just won’t be relevant.
With that flagged, on to the trammelling issue. The potential problem is that the two subagents might want to trade, so that the system sometimes tries to make the button be pressed (in cases which get relatively high pressed-utility), and sometimes tries to make the button be unpressed (in cases which get relatively high unpressed-utility). (E.g. the pink arrows in the diagram.) And… locking in a trajectory at the start doesn’t seem to help that problem at all? Like, the subagents do that trading in logical time (i.e. time zero), that adds preferences, and then sometimes they lock in a plan which involves manipulating the button.
On John’s-simplified-model-of-Thornley’s-proposal, we have complete preference orderings over trajectories-in-which-the-button-isn’t-pressed and trajectories-in-which-the-button-is-pressed, separately, but no preference between any button-pressed and button-not-pressed trajectory pair.
For the purposes of this discussion, this is right. I don’t think the differences between this description and the actual proposal matter in this case.
Represented as subagents, those incomplete preferences require two subagents:
One subagent always prefers button pressed to unpressed, is indifferent between unpressed trajectories, and has the original complete order on pressed trajectories.
The other subagent always prefers button unpressed to pressed, is indifferent between pressed trajectories, and has the original complete order on unpressed trajectories.
I don’t think this representation is quite right, although not for a reason I expect to matter for this discussion. It’s a technicality, but I’ll mention it for completeness. If we’re using Bradley’s representation theorem from section 2.1, the set of subagents must include every coherent completion of the agent’s preferences. E.g., suppose there are three possible trajectories. Let p denote a pressed trajectory and u1, u2 two unpressed trajectories, where u1 gets you strictly more coins than u2. Then there’ll be five (ordinal) subagents, each described in order of preference, with grouped options ranked equally: ⟨u1, u2, p⟩, ⟨u1, u2 p⟩, ⟨u1, p, u2⟩, ⟨u1 p, u2⟩, and ⟨p, u1, u2⟩.
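The count of five can be checked by brute force. Here’s a sketch, under the assumption that completions are weak orders extending the fixed preference for u1 over u2 (each weak order is identified by its set of strict-preference pairs):

```python
from itertools import product

elements = ["u1", "u2", "p"]

def strict_pairs(ranks):
    # Canonical form of a weak order: the strict-preference pairs it induces.
    return frozenset((a, b) for a in elements for b in elements
                     if ranks[a] > ranks[b])

completions = set()
for r in product(range(3), repeat=3):  # every rank assignment (3 levels suffice)
    ranks = dict(zip(elements, r))
    if ranks["u1"] > ranks["u2"]:      # must respect the fixed preference u1 over u2
        completions.add(strict_pairs(ranks))

print(len(completions))  # 5: one ordinal subagent per coherent completion
```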
But you and Thornley want to interpret “no preference” as “just choose arbitrarily/randomly” rather than “don’t trade either for the other”, so… this agent just chooses arbitrarily/randomly between the best-available pressed-option and the best-available unpressed-option? But that would imply that the agent is choosing (albeit arbitrarily/randomly) between button-pressed and button-unpressed, which is not what we want, so presumably you’re imagining something else?
Indeed, this wouldn’t be good, and isn’t what Thornley’s proposal does. The agent doesn’t choose arbitrarily between the best pressed vs unpressed options. Thornley’s proposal adds more requirements on the agent to ensure this. My use of ‘arbitrary’ in the post is a bit misleading in that context. I’m only using it to identify when the agent has multiple permissible options available, which is what we’re after to get TND. If no other requirements are added to the agent, and it’s acting under certainty, this could well lead it to actually choose arbitrarily. But it doesn’t have to in general, and under uncertainty and together with the rest of Thornley’s requirements, it doesn’t. (The requirements are described in his proposal.)
With that flagged, on to the trammelling issue. The potential problem is that the two subagents might want to trade, so that the system sometimes tries to make the button be pressed (in cases which get relatively high pressed-utility), and sometimes tries to make the button be unpressed (in cases which get relatively high unpressed-utility). (E.g. the pink arrows in the diagram.) And… locking in a trajectory at the start doesn’t seem to help that problem at all? Like, the subagents do that trading in logical time (i.e. time zero), that adds preferences, and then sometimes they lock in a plan which involves manipulating the button.
I’ll first flag that the results don’t rely on subagents. Creating a group agent out of multiple subagents is possibly an interesting way to create an agent representable as having incomplete preferences, but this isn’t the same as creating a single agent whose single preference relation happens not to satisfy completeness.
That said, I will spend some more time thinking about the subagent idea, and I do think collusion between them seems like the major initial hurdle for this approach to creating an agent with preferential gaps.
I’ll first flag that the results don’t rely on subagents. Creating a group agent out of multiple subagents is possibly an interesting way to create an agent representable as having incomplete preferences, but this isn’t the same as creating a single agent whose single preference relation happens not to satisfy completeness.
The translation between “subagents colluding/trading” and just a plain set of incomplete preferences should be something like: if the subagents representing a set of incomplete preferences would trade with each other to emulate more complete preferences, then an agent with the plain set of incomplete preferences would precommit to act in the same way. I’ve never worked through the math on that, though.
I find the subagents make it a lot easier to think about, which is why I used that frame.
If we’re using Bradley’s representation theorem from section 2.1., the set of subagents must include every coherent completion of the agent’s preferences.
Yeah, I wasn’t using Bradley. The full set of coherent completions is overkill; we just need to nail down the partial order.
if the subagents representing a set of incomplete preferences would trade with each other to emulate more complete preferences, then an agent with the plain set of incomplete preferences would precommit to act in the same way
My results above on invulnerability preclude the possibility that the agent can predictably be made better off by its own lights through an alternative sequence of actions. So I don’t think that’s possible, though I may be misreading you. Could you give an example of a precommitment that the agent would take? In my mind, an example of this would have to show that the agent (not the negotiating subagents) strictly prefers the commitment to what it otherwise would’ve done according to DSM etc.
Yeah, I wasn’t using Bradley. The full set of coherent completions is overkill; we just need to nail down the partial order.
I agree the full set won’t always be needed, at least when we’re just after ordinal preferences, though I personally don’t have a clear picture of when exactly that holds.
(I’m still processing confusion here—there’s some kind of ontology mismatch going on. I think I’ve nailed down one piece of the mismatch enough to articulate it, so maybe this will help something click or at least help us communicate.
Key question: what are the revealed preferences of the DSM agent?
I think part of the confusion here is that I’ve been instinctively trying to think in terms of revealed preferences. But in the OP, there’s a set of input preferences and a decision rule which is supposed to do well by those input preferences, but the revealed preferences of the agent using the rule might (IIUC) differ from the input preferences.
Connecting this to corrigibility/shutdown/Thornley’s proposal: the thing we want, for a shutdown proposal, is a preferential gap in the revealed preferences of the agent. I.e. we want the agent to never spend resources to switch between button pressed/unpressed, but still have revealed preferences between different pressed states and between different unpressed states.
So the key question of interest is: do trammelling-style phenomena induce completion of the agent’s revealed preferences?
Does that immediately make anything click for you?)
Let me first make some comments about revealed preferences that might clarify how I’m seeing this. Preferences are famously underdetermined by limited choice behaviour. If A and B are available and I pick A, you can’t infer that I like A more than B — I might be indifferent or unable to compare them. Worse, under uncertainty, you can’t tell why I chose some lottery over another even if you assume I have strict preferences between all options — the lottery I choose depends on my beliefs too. In expected utility theory, beliefs and preferences together induce choice, so if we only observe a choice, we have one equation in two unknowns.[1] Given my choice, you’d need to read my mind’s probabilities to be able to infer my preferences (and vice versa).[2]
In that sense, preferences (mostly) aren’t actually revealed. Economists often assume various things to apply revealed preference theory, e.g. setting beliefs equal to ‘objective chances’, or assuming a certain functional form for the utility function.
But why do we care about preferences per se, rather than what’s revealed? Because we want to predict future behaviour. If you can’t infer my preferences from my choices, you can’t predict my future choices. In the example above, if my ‘revealed preference’ between A and B is that I prefer A, then you might make false predictions about my future behaviour (because I might well choose B next time).
Let me know if I’m on the right track for clarifying things. If I am, could you say how you see trammelling/shutdown connecting to revealed preferences as described here, and I’ll respond to that?
Let’s walk through more carefully why revealed preferences are interesting in the shutdown problem. (I’m partly thinking as I write, here.) Suppose that, at various times, the agent is offered opportunities to spend resources in order to cause the button to be pushed/unpushed. We want the agent to turn down such opportunities, in both directions—implying either indifference or lack of preference in any revealed preferences. Further, we do want the agent to spend resources to cause various different outcomes within the button-pressed or button-unpressed worlds, so there’s nontrivial revealed preference ordering within button-pressed worlds and within button-unpressed worlds. But if the agent is to turn down costly opportunities to cause the button to be pressed/unpressed, and those opportunities jump between enough different pressed-outcome and unpressed-outcome pairs (which themselves each have nontrivial revealed preferences), then there’s going to be a revealed preference gap.
Upshot: (one way to frame) the reason that the shutdown problem is difficult/interesting in the first place is that the desired behavior implies a revealed preference gap. Insofar as e.g. any standard expected utility maximizer cannot have a revealed preference gap, such standard EU maximizers cannot behave the way we want. (This frame is new to me, so thank you.)
(Note that that’s all totally compatible with revealed preferences usually being very underdetermined! The desired behavior nails things down enough that any assignment of revealed preferences must have a preferential gap. The question is whether we can come up with some agent with a stable gap in its revealed preferences.)
(Also note that the story above routed through causal intervention/counterfactuals to probe revealed preference, so that does open up a lot of extra ways-of-revealing. Not sure if that’s relevant yet.)
Now bringing this back to DSM… I think the question I’m interested in is: “do trammelling-style issues imply that DSM agents will not have a revealed preference gap (under reasonable assumptions about their environment and capabilities)?”. If the answer is “yes”—i.e. if trammelling-style issues do imply that sufficiently capable DSM agents will have no revealed preference gaps—then that would imply that capable DSM agents cannot display the shutdown behavior we want.
On the other hand, if DSM agents can have revealed preference gaps, without having to artificially limit the agents’ capabilities or the richness of the environment, then that seems like it would circumvent the main interesting barrier to the shutdown problem. So I think that’s my main crux here.
Great, I think bits of this comment help me understand what you’re pointing to.
the desired behavior implies a revealed preference gap
I think this is roughly right, together with all the caveats about the exact statements of Thornley’s impossibility theorems. Speaking precisely here will be cumbersome so for the sake of clarity I’ll try to restate what you wrote like this:
Useful agents satisfying completeness and other properties X won’t be shutdownable.
Properties X are necessary for an agent to be useful.
So, useful agents satisfying completeness won’t be shutdownable.
So, if a useful agent is shutdownable, its preferences are incomplete.
This argument would let us say that observing usefulness and shutdownability reveals a preferential gap.
I think the question I’m interested in is: “do trammelling-style issues imply that DSM agents will not have a revealed preference gap (under reasonable assumptions about their environment and capabilities)?”
A quick distinction: an agent can (i) reveal p, (ii) reveal ¬p, or (iii) neither reveal p nor ¬p. The problem of underdetermination of preference is of the third form.
We can think of some of the properties we’ve discussed as ‘tests’ of incomparability, which might or might not reveal preferential gaps. The test in the argument just above is whether the agent is useful and shutdownable. The test I use for my results above (roughly) is ‘arbitrary choice’. The reason I use that test is that my results are self-contained; I don’t make use of Thornley’s various requirements for shutdownability. Of course, arbitrary choice isn’t what we want for shutdownability. It’s just a test for incomparability that I used for an agent that isn’t yet endowed with Thornley’s other requirements.
The trammelling results, though, don’t give me any reason to think that DSM is problematic for shutdownability. I haven’t formally characterised an agent satisfying DSM as well as TND, Stochastic Near-Dominance, and so on, so I can’t yet give a definitive or exact answer to how DSM affects the behaviour of a Thornley-style agent. (This is something I’ll be working on.) But regarding trammelling, I think my results are reasons for optimism if anything. Even in the least convenient case that I looked at—awareness growth—I wrote this in section 3.3. as an intuition pump:
we’re simply picking out the best prospects in each class. For instance, suppose prospects were representable as pairs ⟨s,c⟩ that are comparable iff the s-values are the same, and then preferred to the extent that c is large. Then here’s the process: for each value of s, identify the options that maximise c. Put all of these in a set. Then choice between any options in that set will always remain arbitrary; never trammelled.
That is, we retain the preferential gap between the options we want a preferential gap between.
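That process can be sketched directly (the prospects below are made up; s indexes the comparability class and c the ranking within it):

```python
# Prospects as (s, c) pairs: comparable iff s matches, better iff c is larger.
prospects = [("s1", 3), ("s1", 5), ("s2", 2), ("s2", 7), ("s3", 1)]

best_in_class = {}
for s, c in prospects:
    if s not in best_in_class or c > best_in_class[s]:
        best_in_class[s] = c

# Choice among these remains arbitrary however often it's repeated:
untrammelled = {(s, c) for s, c in best_in_class.items()}
print(untrammelled)  # {('s1', 5), ('s2', 7), ('s3', 1)} (set order may vary)
```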
[As an aside, the description in your first paragraph of what we want from a shutdownable agent doesn’t quite match Thornley’s setup; the relevant part to see this is section 10.1. here.]
I’ll first flag that the results don’t rely on subagents. Creating a group agent out of multiple subagents is possibly an interesting way to create an agent representable as having incomplete preferences, but this isn’t the same as creating a single agent whose single preference relation happens not to satisfy completeness.
Flagging here that I don’t think the subagent framing is super important and/or necessary for “collusion” to happen. Even if the “outer” agent isn’t literally built from subagents, “collusion” can still occur in the sense that it [the outer agent] can notice that its (incomplete) preferences factorize, in a way that allows it to deliberately trade particular completions of them against each other and thereby acquire more resources. The outer agent would then choose to do this for basically the same reason that a committee of subagents would: to acquire more resources for itself as a whole, without disadvantaging any of the completions under consideration.
My results above on invulnerability preclude the possibility that the agent can predictably be made better off by its own lights through an alternative sequence of actions. So I don’t think that’s possible, though I may be misreading you. Could you give an example of a precommitment that the agent would take? In my mind, an example of this would have to show that the agent (not the negotiating subagents) strictly prefers the commitment to what it otherwise would’ve done according to DSM etc.
On my understanding, the argument isn’t that your DSM agent can be made better off, but that the reason it can’t be made better off is because it is engaging in trammeling/“collusion”, and that the form of “trammeling” you’ve ruled out isn’t the useful kind.
As far as an example goes, consider a sequence of actions which, starting from an unpressed world state, routes through a pressed world state (or series of pressed world states), before eventually returning to an unpressed world state with higher utility than the initial state.
(The real-world context of such a sequence shouldn’t be too relevant to a theoretical example like this, but if you had to imagine something, you could imagine a “memory gambit”-esque stratagem, where the agent spends some time inactive in order to avoid detection, but has set things up in advance to eventually reactivate itself under more favorable circumstances. Again, the plausibility of the scenario isn’t super relevant here.)
If your proposed DSM agent passes up this action sequence on the grounds that some of the intermediate steps need to bridge between “incomparable” pressed/unpressed trajectories, then it does in fact pass up the certain gain. Conversely, if it doesn’t pass up such a sequence, then its behavior is the same as that of a set of negotiating subagents cooperating in order to form a larger macroagent.
On my understanding, the argument isn’t that your DSM agent can be made better off, but that the reason it can’t be made better off is because it is engaging in trammeling/“collusion”, and that the form of “trammeling” you’ve ruled out isn’t the useful kind.
I don’t see how this could be right. Consider the bounding results on trammelling under unawareness (e.g. Proposition 10). They show that there will always be a set of options between which DSM does not require choosing one over the other. Suppose these are X and Y. The agent will always be able to choose either one. They might end up always choosing X, always Y, switching back and forth, whatever. This doesn’t look like the outcome of two subagents, one preferring X and the other Y, negotiating to get some portion of the picks.
As far as an example goes, consider a sequence of actions which, starting from an unpressed world state, routes through a pressed world state (or series of pressed world states), before eventually returning to an unpressed world state with higher utility than the initial state.
Forgive me; I’m still not seeing it. For coming up with examples, I think for now it’s unhelpful to use the shutdown problem, because the actual proposal from Thornley includes several more requirements. I think it’s perfectly fine to construct examples about trammelling and subagents using something like this: A is a set of options with typical member ai. These are all comparable and ranked according to their subscripts. That is, a1 is preferred to a2, and so on. Likewise with set B. And all options in A are incomparable to all options in B.
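For instance, here’s a hypothetical instantiation of that setup with three options per set; the undominated options are just a1 and b1:

```python
# a1 > a2 > a3 within A; b1 > b2 > b3 within B; no cross-set comparisons.
def strictly_prefers(x, y):
    return x[0] == y[0] and x[1] < y[1]  # same letter, lower subscript wins

options = [("a", 1), ("a", 2), ("a", 3), ("b", 1), ("b", 2), ("b", 3)]

undominated = {x for x in options
               if not any(strictly_prefers(y, x) for y in options)}
print(undominated)  # {('a', 1), ('b', 1)}: both remain permissible
```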
If your proposed DSM agent passes up this action sequence on the grounds that some of the intermediate steps need to bridge between “incomparable” pressed/unpressed trajectories, then it does in fact pass up the certain gain. Conversely, if it doesn’t pass up such a sequence, then its behavior is the same as that of a set of negotiating subagents cooperating in order to form a larger macroagent.
This looks to me like a misunderstanding that I tried to explain in section 3.1. Let me know if not, though, ideally with a worked-out example of the form: “here’s the decision tree(s), here’s what DSM mandates, here’s why it’s untrammelled according to the OP definition, and here’s why it’s problematic.”
This looks to me like a misunderstanding that I tried to explain in section 3.1. Let me know if not, though, ideally with a worked-out example of the form: “here’s the decision tree(s), here’s what DSM mandates, here’s why it’s untrammelled according to the OP definition, and here’s why it’s problematic.”
I don’t think I grok the DSM formalism enough to speak confidently about what it would mandate, but I think I see a (class of) decision problem where any agent (DSM or otherwise) must either pass up a certain gain, or else engage in “problematic” behavior (where “problematic” doesn’t necessarily mean “untrammeled” according to the OP definition, but instead more informally means “something which doesn’t help to avoid the usual pressures away from corrigibility / towards coherence”). The problem in question is essentially the inverse of the example you give in section 3.1:
Consider an agent choosing between two incomparable options, A and B; if it chooses B, it is then offered the option to trade B for A+, where A+ is incomparable to B but comparable (and preferred) to A.
(I’ve slightly modified the framing to be in terms of trades rather than going “up” or “down”, but the decision tree is isomorphic.)
Here, A+ isn’t in fact “strongly maximal” with respect to A and B (because it’s incomparable to B), but I think I’m fairly confident in declaring that any agent which foresees the entire tree in advance, and which does not pick B at the initial node (going “down”, if you want to use the original framing), is engaging in a dominated behavior—and to the extent that DSM doesn’t consider this a dominated strategy, DSM’s definitions aren’t capturing a useful notion of what is “dominated” and what isn’t.
Again, I’m not claiming this is what DSM says. You can think of me as trying to run an obvious-to-me assertion test on code which I haven’t carefully inspected, to see if the result of the test looks sane. But if a (fully aware/non-myopic) DSM agent does constrain itself into picking B (“going down”) in the above example, despite the prima facie incomparability of {A, A+} and {B}, then I would consider this behavior problematic once translated back into the context of real-world shutdownability, because it means the agent in question will at least in some cases act in order to influence whether the button is pressed.
(The hope behind incomplete preferences, after all, is that an agent whose preferences over world-states can be subdivided into “incomparability classes” will only ever act to improve its lot within the class of states it finds itself in to begin with, and will never act to shift—or prevent itself from being shifted—to a different incomparability class. I think the above example presents a deep obstacle to this hope, however. Very roughly speaking, if the gaps in the agent’s preferences can be bridged via certain causal pathways, then a (non-myopic) agent which does not exploit these pathways to its own benefit will notice itself failing to exploit them, and self-modify to stop doing that.)
In your example, DSM permits the agent to end up with either A+ or B. Neither is strictly dominated, and neither has become mandatory for the agent to choose over the other. The agent won’t have reason to push probability mass from one towards the other.
You can think of me as trying to run an obvious-to-me assertion test on code which I haven’t carefully inspected, to see if the result of the test looks sane.
This is reasonable but I think my response to your comment will mainly involve re-stating what I wrote in the post, so maybe it’ll be easier to point to the relevant sections: 3.1. for what DSM mandates when the agent has beliefs about its decision tree, 3.2.2 for what DSM mandates when the agent hadn’t considered an actualised continuation of its decision tree, and 3.3. for discussion of these results. In particular, the following paragraphs are meant to illustrate what DSM mandates in the least favourable epistemic state that the agent could be in (unawareness with new options appearing):
It seems we can’t guarantee non-trammelling in general and between all prospects. But we don’t need to guarantee this for all prospects to guarantee it for some, even under awareness growth. Indeed, as we’ve now shown, there are always prospects with respect to which the agent never gets trammelled, no matter how many choices it faces. In fact, whenever the tree expansion does not bring about new prospects, trammelling will never occur (Proposition 7). And even when it does, trammelling is bounded above by the number of comparability classes (Proposition 10).
And it’s intuitive why this would be: we’re simply picking out the best prospects in each class. For instance, suppose prospects were representable as pairs ⟨s,c⟩ that are comparable iff the s-values are the same, and then preferred to the extent that c is large. Then here’s the process: for each value of s, identify the options that maximise c. Put all of these in a set. Then choice between any options in that set will always remain arbitrary; never trammelled.
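The process just described can be sketched directly, assuming prospects are encoded as (s, c) pairs (numbers invented for illustration):

```python
# Prospects as (s, c) pairs: comparable iff s matches, better iff c larger.
prospects = [(0, 5), (0, 9), (1, 2), (1, 7), (2, 4)]

# For each value of s, identify the c-maximising option.
best_per_class = {}
for s, c in prospects:
    if s not in best_per_class or c > best_per_class[s]:
        best_per_class[s] = c

# The resulting set: one c-maximal prospect per comparability class. Choice
# within this set stays arbitrary no matter how often it's repeated, and its
# size is bounded by the number of comparability classes.
untrammelled = sorted((s, c) for s, c in best_per_class.items())
print(untrammelled)  # [(0, 9), (1, 7), (2, 4)]
```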
In your example, DSM permits the agent to end up with either A+ or B. Neither is strictly dominated, and neither has become mandatory for the agent to choose over the other. The agent won’t have reason to push probability mass from one towards the other.
But it sounds like the agent’s initial choice between A and B is forced, yes? (Otherwise, it wouldn’t be the case that the agent is permitted to end up with either A+ or B, but not A.) So the presence of A+ within a particular continuation of the decision tree influences the agent’s choice at the initial node, in a way that causes it to reliably choose one incomparable option over another.
Further thoughts: under the original framing, instead of choosing between A and B (while knowing that B can later be traded for A+), the agent instead chooses whether to go “up” or “down” to receive (respectively) A, or a further choice between A+ and B. It occurs to me that you might be using this representation to argue for a qualitative difference in the behavior produced, but if so, I’m not sure how much I buy into it.
For concreteness, suppose the agent starts out with A, and notices a series of trades which first involves trading A for B, and then B for A+. It seems to me that if I frame the problem like this, the structure of the resulting tree should be isomorphic to that of the decision problem I described, but not necessarily the “up”/”down” version—at least, not if you consider that version to play a key role in DSM’s recommendation.
(In particular, my frame is sensitive to which state the agent is initialized in: if it is given B to start, then it has no particular incentive to want to trade that for either A or A+, and so faces no incentive to trade at all. If you initialize the agent with A or B at random, and institute the rule that it doesn’t trade by default, then the agent will end up with A+ when initialized with A, and B when initialized with B—which feels a little similar to what you said about DSM allowing both A+ and B as permissible options.)
It sounds like you want to make it so that the agent’s initial state isn’t taken into account—in fact, it sounds like you want to assign values only to terminal nodes in the tree, take the subset of those terminal nodes which have maximal utility within a particular incomparability class, and choose arbitrarily among those. My frame, then, would be equivalent to using the agent’s initial state as a tiebreaker: whichever terminal node shares an incomparability class with the agent’s initial state will be the one the agent chooses to steer towards.
...in which case, assuming I got the above correct, I think I stand by my initial claim that this will lead to behavior which, while not necessarily “trammeling” by your definition, is definitely consequentialist in the worrying sense: an agent initialized in the “shutdown button not pressed” state will perform whatever intermediate steps are needed to navigate to the maximal-utility “shutdown button not pressed” state it can foresee, including actions which prevent the shutdown button from being pressed.
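Under this reading, the tiebreaker frame could be sketched as follows (the terminal nodes and utilities are made up for illustration; this is one interpretation, not the proposal's actual machinery):

```python
# Terminal nodes as (incomparability_class, utility) pairs.
terminals = [("pressed", 3), ("pressed", 8), ("unpressed", 5), ("unpressed", 9)]

def steer(initial_class):
    # Maximal-utility terminal within the initial state's incomparability
    # class: the initial state acts as the tiebreaker across classes.
    same_class = [t for t in terminals if t[0] == initial_class]
    return max(same_class, key=lambda t: t[1])

print(steer("unpressed"))  # ('unpressed', 9)
print(steer("pressed"))    # ('pressed', 8)
```

On this model, an agent initialised in an unpressed state always steers to the best unpressed terminal it can foresee, which is the worry stated above.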
Miscellany
The description above should help clear up why we care about multiple options being permissible and none mandatory: to help satisfy TND. What’s “actually chosen” in my framework doesn’t neatly connect to the Thornley proposal since he adds extra scaffolding to the agent to determine how it should act. But that’s a separate issue.
The notion of trammelling I’m using refers to the set of permissible options shrinking as a result of repeated choice. And I argued that there’s no trammelling under certainty or uncertainty, and that trammelling under unawareness is bounded. Here’s why I don’t think you can see it as the agent behaving as if its preferences were complete.
Consider the case of static choice. It’s meaningful to say that an agent has incomplete preferences. (I don’t think you disagree with that but just for the sake of completeness, I’ll give an example.) Suppose the agent has preferential gaps between all different-letter prospects. From {A,A+,B} the agent will pick either A+ or B. Suppose it picks B. That doesn’t imply, say, that the agent can be thought of as having a strict preference for B over A+. After all, if you offered it {A,A+,B} once again, it might just pick A+, a contradiction. And you can set up something similar with transitivity to get a contradiction from inferring indifference between A+ and B.
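A quick sketch of this static example (encoding mine; permissibility here is just "not strictly dominated by anything on the menu"):

```python
import random

# A+ is strictly preferred to A; B is incomparable to both.
strict = {("A+", "A")}  # the only strict preference

def permissible(menu):
    return [x for x in menu if not any((y, x) in strict for y in menu)]

print(permissible(["A", "A+", "B"]))  # ['A+', 'B']

# Offering the same menu repeatedly can resolve arbitrarily each time, so a
# single observed pick of B doesn't reveal a strict preference for B over A+.
picks = {random.choice(permissible(["A", "A+", "B"])) for _ in range(50)}
```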
Onto dynamic choice. As you write, it’s reasonable to think of various dynamic choice principles as immediately, statically, choosing a trajectory at timestep zero. Suppose we do that. Then by the argument just above, it’s still not appropriate to model the agent as having complete preferences at the time of choosing. We’re not frontloading any trammelling; the set of ex-ante permissible prospects hasn’t changed. And that’s what we care about for TND.
Agreed! The ex-ante permissibility of various options is not sufficient for shutdownability. The rest of Thornley’s proposal outlines how the agent has to pick (lotteries over) trajectories, which involves more than TND.
I’m on board with the first two sentences there. And then suddenly you jump to “and that’s why we care about ex-ante permissibility”. Why does wanting to maintain indifference to shifting probability mass between (some) trajectories, imply that we care about ex-ante permissibility?
I don’t think I’ve fully grokked the end-to-end story yet, but based on my current less-than-perfect understanding… we can think of Thornley’s construction as a bunch of subagents indexed by t, each of which cares only about worlds where the shutdown button is pressed at time t. Then the incomplete preferences can be ~viewed as the pareto preference ordering for those agents (i.e. pareto improvements are preferred). Using something like the DSM rule to handle the incompleteness, at time zero the system-of-subagents will choose a lottery over trajectories, where the lottery is randomized by when-the-button-is-pressed (and maybe randomized by other stuff too, but that’s the main thing of interest). But then that lottery over trajectories is locked in, and the system will behave from then on out as though its distribution over when-the-button-is-pressed is locked in? And it will act as though it has complete preferences over trajectory-lotteries from then on out, which is presumably not what we want? I’m not yet able to visualize exactly what the system does past that initial lock-in, so I’m not sure.
The ex-ante permissible trajectories are the trajectories that the agent lacks any strict preference between. Suppose the permissible trajectories are {A,B,C}. Then, from the agent’s perspective, A isn’t better than B, B isn’t better than A, and so on. The agent considers them all equally choiceworthy. So, the agent doesn’t mind picking any one of them over any other, nor therefore switching from one lottery over them with some distribution to another lottery with some other distribution. The agent doesn’t care whether it gets A versus B, versus an even chance of A or B, versus a one-third chance of A, B, or C.[1]
Suppose we didn’t have multiple permissible options ex-ante. For example, if only A was permissible, then the agent would dislike shifting probability mass away from A and towards B or C—because B and C aren’t among the best options.[2] So that’s why we want multiple ex-ante permissible trajectories: it’s the only way to maintain indifference to shifting probability mass between (those) trajectories.
[I’ll respond to the stuff in your second paragraph under your longer comment.]
The analogous case with complete preferences is clearer: if there are multiple permissible options, the agent must be indifferent between them all (or else the agent would be fine picking a strictly dominated option). So if n options are permissible, then u(x_i) = u(x_j) for all i, j ∈ {1, …, n}. Assuming expected utility theory, we’ll then of course have ∑_{i=1}^n u(x_i)p(x_i) = ∑_{i=1}^n u(x_i)p′(x_i) for any probability functions p, p′. This means the agent is indifferent to shifting probability mass between the permissible options.
This is a bit simplified but it should get the point across.
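A numerical check of the point (illustrative values): if all n permissible options have equal utility, expected utility is invariant to how probability mass is spread over them.

```python
# u(x_i) = u(x_j) for all i, j among the permissible options.
u = {"x1": 4.0, "x2": 4.0, "x3": 4.0}

def expected_utility(p):
    return sum(u[x] * p[x] for x in u)

# Two different probability functions over the same options.
p1 = {"x1": 1.0, "x2": 0.0, "x3": 0.0}
p2 = {"x1": 0.5, "x2": 0.25, "x3": 0.25}
print(expected_utility(p1), expected_utility(p2))  # 4.0 4.0
```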
Ok, I’ve thought through it a little more, I think I can now articulate some confusions.
On John’s-simplified-model-of-Thornley’s-proposal, we have complete preference orderings over trajectories-in-which-the-button-isn’t-pressed and trajectories-in-which-the-button-is-pressed, separately, but no preference between any button-pressed and button-not-pressed trajectory pair. Represented as subagents, those incomplete preferences require two subagents:
One subagent always prefers button pressed to unpressed, is indifferent between unpressed trajectories, and has the original complete order on pressed trajectories.
The other subagent always prefers button unpressed to pressed, is indifferent between pressed trajectories, and has the original complete order on unpressed trajectories.
In picture form (so far we’ve only covered the blue):
Now there’s a weird-to-me part. Normally I’d say that, taking these incomplete preferences at face value, the agent looks for opportunities to pareto-improve the outcome in both pressed and unpressed worlds. But you and Thornley want to interpret “no preference” as “just choose arbitrarily/randomly” rather than “don’t trade either for the other”, so… this agent just chooses arbitrarily/randomly between the best-available pressed-option and the best-available unpressed-option? But that would imply that the agent is choosing (albeit arbitrarily/randomly) between button-pressed and button-unpressed, which is not what we want, so presumably you’re imagining something else?
I’m going to go ahead with my usual mental model for now—i.e. “no preference” means “don’t trade either for the other”, so our incomplete preference system is aiming for pareto improvements. But possibly this diverges so much from what you’re picturing that the below just won’t be relevant.
With that flagged, on to the trammelling issue. The potential problem is that the two subagents might want to trade, so that the system sometimes tries to make the button be pressed (in cases which get relatively high pressed-utility), and sometimes tries to make the button be unpressed (in cases which get relatively high unpressed-utility). (E.g. the pink arrows in the diagram.) And… locking in a trajectory at the start doesn’t seem to help that problem at all? Like, the subagents do that trading in logical time (i.e. time zero), that adds preferences, and then sometimes they lock in a plan which involves manipulating the button.
What am I missing still?
For the purposes of this discussion, this is right. I don’t think the differences between this description and the actual proposal matter in this case.
I don’t think this representation is quite right, although not for a reason I expect to matter for this discussion. It’s a technicality but I’ll mention it for completeness. If we’re using Bradley’s representation theorem from section 2.1., the set of subagents must include every coherent completion of the agent’s preferences. E.g., suppose there are three possible trajectories. Let p denote a pressed trajectory and u_1, u_2 two unpressed trajectories, where u_1 gets you strictly more coins than u_2. Then there’ll be five (ordinal) subagents, described in order of preference (with ∼ marking a tie): ⟨u_1, u_2, p⟩, ⟨u_1, u_2 ∼ p⟩, ⟨u_1, p, u_2⟩, ⟨u_1 ∼ p, u_2⟩, and ⟨p, u_1, u_2⟩.
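The count of five can be brute-force checked (encoding mine): a coherent completion here is a weak order over {u_1, u_2, p} that respects u_1 ≻ u_2, representable by a rank assignment with ties allowed.

```python
from itertools import product

# Elements indexed as (u1, u2, p); higher rank = more preferred.
def signature(ranks):
    # Canonical comparison pattern of the weak order induced by the ranks.
    return tuple((ranks[a] > ranks[b]) - (ranks[a] < ranks[b])
                 for a in range(3) for b in range(3))

completions = set()
for ranks in product(range(3), repeat=3):
    if ranks[0] > ranks[1]:  # respect the constraint u1 > u2
        completions.add(signature(ranks))

print(len(completions))  # 5
```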
Indeed, this wouldn’t be good, and isn’t what Thornley’s proposal does. The agent doesn’t choose arbitrarily between the best pressed vs unpressed options. Thornley’s proposal adds more requirements on the agent to ensure this. My use of ‘arbitrary’ in the post is a bit misleading in that context. I’m only using it to identify when the agent has multiple permissible options available, which is what we’re after to get TND. If no other requirements are added to the agent, and it’s acting under certainty, this could well lead it to actually choose arbitrarily. But it doesn’t have to in general, and under uncertainty and together with the rest of Thornley’s requirements, it doesn’t. (The requirements are described in his proposal.)
I’ll first flag that the results don’t rely on subagents. Creating a group agent out of multiple subagents is possibly an interesting way to create an agent representable as having incomplete preferences, but this isn’t the same as creating a single agent whose single preference relation happens not to satisfy completeness.
That said, I will spend some more time thinking about the subagent idea, and I do think collusion between them seems like the major initial hurdle for this approach to creating an agent with preferential gaps.
The translation between “subagents colluding/trading” and just a plain set of incomplete preferences should be something like: if the subagents representing a set of incomplete preferences would trade with each other to emulate more complete preferences, then an agent with the plain set of incomplete preferences would precommit to act in the same way. I’ve never worked through the math on that, though.
I find the subagents make it a lot easier to think about, which is why I used that frame.
Yeah, I wasn’t using Bradley. The full set of coherent completions is overkill; we just need to nail down the partial order.
My results above on invulnerability preclude the possibility that the agent can predictably be made better off by its own lights through an alternative sequence of actions. So I don’t think that’s possible, though I may be misreading you. Could you give an example of a precommitment that the agent would take? In my mind, an example of this would have to show that the agent (not the negotiating subagents) strictly prefers the commitment to what it otherwise would’ve done according to DSM etc.
I agree the full set won’t always be needed, at least when we’re just after ordinal preferences, though I personally don’t have a clear picture of when exactly that holds.
(I’m still processing confusion here—there’s some kind of ontology mismatch going on. I think I’ve nailed down one piece of the mismatch enough to articulate it, so maybe this will help something click or at least help us communicate.
Key question: what are the revealed preferences of the DSM agent?
I think part of the confusion here is that I’ve been instinctively trying to think in terms of revealed preferences. But in the OP, there’s a set of input preferences and a decision rule which is supposed to do well by those input preferences, but the revealed preferences of the agent using the rule might (IIUC) differ from the input preferences.
Connecting this to corrigibility/shutdown/Thornley’s proposal: the thing we want, for a shutdown proposal, is a preferential gap in the revealed preferences of the agent. I.e. we want the agent to never spend resources to switch between button pressed/unpressed, but still have revealed preferences between different pressed states and between different unpressed states.
So the key question of interest is: do trammelling-style phenomena induce completion of the agent’s revealed preferences?
Does that immediately make anything click for you?)
That makes sense, yeah.
Let me first make some comments about revealed preferences that might clarify how I’m seeing this. Preferences are famously underdetermined by limited choice behaviour. If A and B are available and I pick A, you can’t infer that I like A more than B — I might be indifferent or unable to compare them. Worse, under uncertainty, you can’t tell why I chose some lottery over another even if you assume I have strict preferences between all options — the lottery I choose depends on my beliefs too. In expected utility theory, beliefs and preferences together induce choice, so if we only observe a choice, we have one equation in two unknowns.[1] Given my choice, you’d need to read my mind’s probabilities to be able to infer my preferences (and vice versa).[2]
In that sense, preferences (mostly) aren’t actually revealed. Economists often assume various things to apply revealed preference theory, e.g. setting beliefs equal to ‘objective chances’, or assuming a certain functional form for the utility function.
But why do we care about preferences per se, rather than what’s revealed? Because we want to predict future behaviour. If you can’t infer my preferences from my choices, you can’t predict my future choices. In the example above, if my ‘revealed preference’ between A and B is that I prefer A, then you might make false predictions about my future behaviour (because I might well choose B next time).
Let me know if I’m on the right track for clarifying things. If I am, could you say how you see trammelling/shutdown connecting to revealed preferences as described here, and I’ll respond to that?
L∗ ∈ argmax_L ∑_i u(x_i) p(x_i[L])
The situation is even worse when you can’t tell what I’m choosing between, or what my preference relation is defined over.
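The "one equation in two unknowns" point can be illustrated with a toy example (all names and numbers invented): two very different (belief, utility) pairs rationalise the same observed choice between two acts.

```python
# Acts map states to outcomes; an observed choice is argmax expected utility.
acts = {
    "f": {"rain": "dry_but_encumbered", "sun": "carried_umbrella_for_nothing"},
    "g": {"rain": "wet", "sun": "unencumbered"},
}

def chosen(u, p):
    def eu(act):
        return sum(p[s] * u[outcome] for s, outcome in acts[act].items())
    return max(acts, key=eu)

# Agent 1: thinks rain is likely, mildly dislikes getting wet.
u1 = {"dry_but_encumbered": 1.0, "carried_umbrella_for_nothing": 0.0,
      "wet": -1.0, "unencumbered": 0.5}
p1 = {"rain": 0.8, "sun": 0.2}

# Agent 2: thinks rain is unlikely, but hates getting wet.
u2 = {"dry_but_encumbered": 1.0, "carried_umbrella_for_nothing": 0.0,
      "wet": -20.0, "unencumbered": 0.5}
p2 = {"rain": 0.2, "sun": 0.8}

# Both choose f, so the choice alone can't pin down beliefs or utilities.
print(chosen(u1, p1), chosen(u2, p2))  # f f
```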
Feels like we’re making some progress here.
Let’s walk through more carefully why revealed preferences are interesting in the shutdown problem. (I’m partly thinking as I write, here.) Suppose that, at various times, the agent is offered opportunities to spend resources in order to cause the button to be pushed/unpushed. We want the agent to turn down such opportunities, in both directions—implying either indifference or lack of preference in any revealed preferences. Further, we do want the agent to spend resources to cause various different outcomes within the button-pressed or button-unpressed worlds, so there’s nontrivial revealed preference ordering within button-pressed worlds and within button-unpressed worlds. But if the agent is to turn down costly opportunities to cause the button to be pressed/unpressed, and those opportunities jump between enough different pressed-outcome and unpressed-outcome pairs (which themselves each have nontrivial revealed preferences), then there’s going to be a revealed preference gap.
Upshot: (one way to frame) the reason that the shutdown problem is difficult/interesting in the first place, is that the desired behavior implies a revealed preference gap. Insofar as e.g. any standard expected utility maximizer cannot have a revealed preference gap, such standard EU maximizers cannot behave the way we want. (This frame is new-to-me, so thank you.)
(Note that that’s all totally compatible with revealed preferences usually being very underdetermined! The desired behavior nails things down enough that any assignment of revealed preferences must have a preferential gap. The question is whether we can come up with some agent with a stable gap in its revealed preferences.)
(Also note that the story above routed through causal intervention/counterfactuals to probe revealed preference, so that does open up a lot of extra ways-of-revealing. Not sure if that’s relevant yet.)
Now bringing this back to DSM… I think the question I’m interested in is: “do trammelling-style issues imply that DSM agents will not have a revealed preference gap (under reasonable assumptions about their environment and capabilities)?”. If the answer is “yes”—i.e. if trammelling-style issues do imply that sufficiently capable DSM agents will have no revealed preference gaps—then that would imply that capable DSM agents cannot display the shutdown behavior we want.
On the other hand, if DSM agents can have revealed preference gaps, without having to artificially limit the agents’ capabilities or the richness of the environment, then that seems like it would circumvent the main interesting barrier to the shutdown problem. So I think that’s my main crux here.
Great, I think bits of this comment help me understand what you’re pointing to.
I think this is roughly right, together with all the caveats about the exact statements of Thornley’s impossibility theorems. Speaking precisely here will be cumbersome so for the sake of clarity I’ll try to restate what you wrote like this:
1. Useful agents satisfying completeness and other properties X won’t be shutdownable.
2. Properties X are necessary for an agent to be useful.
3. So, useful agents satisfying completeness won’t be shutdownable.
4. So, if a useful agent is shutdownable, its preferences are incomplete.
This argument would let us say that observing usefulness and shutdownability reveals a preferential gap.
A quick distinction: an agent can (i) reveal p, (ii) reveal ¬p, or (iii) neither reveal p nor ¬p. The problem of underdetermination of preference is of the third form.
We can think of some of the properties we’ve discussed as ‘tests’ of incomparability, which might or might not reveal preferential gaps. The test in the argument just above is whether the agent is useful and shutdownable. The test I use for my results above (roughly) is ‘arbitrary choice’. The reason I use that test is that my results are self-contained; I don’t make use of Thornley’s various requirements for shutdownability. Of course, arbitrary choice isn’t what we want for shutdownability. It’s just a test for incomparability that I used for an agent that isn’t yet endowed with Thornley’s other requirements.
The trammelling results, though, don’t give me any reason to think that DSM is problematic for shutdownability. I haven’t formally characterised an agent satisfying DSM as well as TND, Stochastic Near-Dominance, and so on, so I can’t yet give a definitive or exact answer to how DSM affects the behaviour of a Thornley-style agent. (This is something I’ll be working on.) But regarding trammelling, I think my results are reasons for optimism if anything. Even in the least convenient case that I looked at—awareness growth—I wrote this in section 3.3. as an intuition pump:
That is, we retain the preferential gap between the options we want a preferential gap between.
[As an aside, the description in your first paragraph of what we want from a shutdownable agent doesn’t quite match Thornley’s setup; the relevant part to see this is section 10.1. here.]
Flagging here that I don’t think the subagent framing is super important and/or necessary for “collusion” to happen. Even if the “outer” agent isn’t literally built from subagents, “collusion” can still occur in the sense that it [the outer agent] can notice that its (incomplete) preferences factorize, in a way that allows it to deliberately trade particular completions of them against each other and thereby acquire more resources. The outer agent would then choose to do this for basically the same reason that a committee of subagents would: to acquire more resources for itself as a whole, without disadvantaging any of the completions under consideration.
I disagree; see my reply to John above.
On my understanding, the argument isn’t that your DSM agent can be made better off, but that the reason it can’t be made better off is because it is engaging in trammeling/“collusion”, and that the form of “trammeling” you’ve ruled out isn’t the useful kind.
As far as an example goes, consider a sequence of actions which, starting from an unpressed world state, routes through a pressed world state (or series of pressed world states), before eventually returning to an unpressed world state with higher utility than the initial state.
(The real-world context of such a sequence shouldn’t be too relevant to a theoretical example like this, but if you had to imagine something, you could imagine a “memory gambit”-esque stratagem, where the agent spends some time inactive in order to avoid detection, but has set things up in advance to eventually reactivate itself under more favorable circumstances. Again, the plausibility of the scenario isn’t super relevant here.)
If your proposed DSM agent passes up this action sequence on the grounds that some of the intermediate steps need to bridge between “incomparable” pressed/unpressed trajectories, then it does in fact pass up the certain gain. Conversely, if it doesn’t pass up such a sequence, then its behavior is the same as that of a set of negotiating subagents cooperating in order to form a larger macroagent.
I don’t see how this could be right. Consider the bounding results on trammelling under unawareness (e.g. Proposition 10). They show that there will always be a set of options between which DSM does not require choosing one over the other. Suppose these are X and Y. The agent will always be able to choose either one. They might end up always choosing X, always Y, switching back and forth, whatever. This doesn’t look like the outcome of two subagents, one preferring X and the other Y, negotiating to get some portion of the picks.
Forgive me; I’m still not seeing it. For coming up with examples, I think for now it’s unhelpful to use the shutdown problem, because the actual proposal from Thornley includes several more requirements. I think it’s perfectly fine to construct examples about trammelling and subagents using something like this: A is a set of options with typical member ai. These are all comparable and ranked according to their subscripts. That is, a1 is preferred to a2, and so on. Likewise with set B. And all options in A are incomparable to all options in B.
This looks to me like a misunderstanding that I tried to explain in section 3.1. Let me know if not, though, ideally with a worked-out example of the form: “here’s the decision tree(s), here’s what DSM mandates, here’s why it’s untrammelled according to the OP definition, and here’s why it’s problematic.”
I don’t think I grok the DSM formalism enough to speak confidently about what it would mandate, but I think I see a (class of) decision problem where any agent (DSM or otherwise) must either pass up a certain gain, or else engage in “problematic” behavior (where “problematic” doesn’t necessarily mean “untrammelled” according to the OP definition, but instead more informally means “something which doesn’t help to avoid the usual pressures away from corrigibility / towards coherence”). The problem in question is essentially the inverse of the example you give in section 3.1:
Consider an agent tasked with choosing between two incomparable options A and B, and if it chooses B, it will be further presented with the option to trade B for A+, where A+ is incomparable to B but comparable (and preferable) to A.
(I’ve slightly modified the framing to be in terms of trades rather than going “up” or “down”, but the decision tree is isomorphic.)
Here, A+ isn’t in fact “strongly maximal” with respect to A and B (because it’s incomparable to B). But I’m fairly confident in declaring that any agent which foresees the entire tree in advance, and which does not pick B at the initial node (going “down”, if you want to use the original framing), is executing a dominated strategy. To the extent that DSM doesn’t consider this strategy dominated, DSM’s definitions aren’t capturing a useful notion of what is “dominated” and what isn’t.
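The dominance claim here can be sketched by enumerating the three available plans and comparing their endpoints. This is a hypothetical ex-post outcome comparison, not what DSM itself computes; the only strict preference in the example is A+ over A.

```python
# Sketch of the tree: at the root choose A or B; choosing B opens a
# second node where B can be kept or traded for A+. A+ > A; A and B
# incomparable; A+ and B incomparable. Not the DSM formalism.

PREFERS = {("A+", "A")}  # the only strict preference in the example

def prefers(x, y):
    return (x, y) in PREFERS

# Each available plan and the outcome it ends with.
plans = {
    "take A at root": "A",
    "take B, keep B": "B",
    "take B, trade for A+": "A+",
}

# A plan is dominated if some other plan's outcome is strictly preferred.
dominated = {p for p, out in plans.items()
             if any(prefers(o2, out) for o2 in plans.values())}
print(dominated)  # {'take A at root'}
```

Only the plan that stops at A comes out dominated; both B-plans survive, since nothing is strictly preferred to either B or A+.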
Again, I’m not claiming this is what DSM says. You can think of me as trying to run an obvious-to-me assertion test on code which I haven’t carefully inspected, to see if the result of the test looks sane. But if a (fully aware/non-myopic) DSM agent does constrain itself into picking B (“going down”) in the above example, despite the prima facie incomparability of {A, A+} and {B}, then I would consider this behavior problematic once translated back into the context of real-world shutdownability, because it means the agent in question will at least in some cases act in order to influence whether the button is pressed.
(The hope behind incomplete preferences, after all, is that an agent whose preferences over world-states can be subdivided into “incomparability classes” will only ever act to improve its lot within the class of states it finds itself in to begin with, and will never act to shift—or prevent itself from being shifted—to a different incomparability class. I think the above example presents a deep obstacle to this hope, however. Very roughly speaking, if the gaps in the agent’s preferences can be bridged via certain causal pathways, then a (non-myopic) agent which does not exploit these pathways to its own benefit will notice itself failing to exploit them, and self-modify to stop doing that.)
In your example, DSM permits the agent to end up with either A+ or B. Neither is strictly dominated, and neither has become mandatory for the agent to choose over the other. The agent won’t have reason to push probability mass from one towards the other.
This is reasonable but I think my response to your comment will mainly involve re-stating what I wrote in the post, so maybe it’ll be easier to point to the relevant sections: 3.1 for what DSM mandates when the agent has beliefs about its decision tree, 3.2.2 for what DSM mandates when the agent hadn’t considered an actualised continuation of its decision tree, and 3.3 for discussion of these results. In particular, the following paragraphs are meant to illustrate what DSM mandates in the least favourable epistemic state that the agent could be in (unawareness with new options appearing):
But it sounds like the agent’s initial choice between A and B is forced, yes? (Otherwise, it wouldn’t be the case that the agent is permitted to end up with either A+ or B, but not A.) So the presence of A+ within a particular continuation of the decision tree influences the agent’s choice at the initial node, in a way that causes it to reliably choose one incomparable option over another.
Further thoughts: under the original framing, instead of choosing between A and B (while knowing that B can later be traded for A+), the agent instead chooses whether to go “up” or “down” to receive (respectively) A, or a further choice between A+ and B. It occurs to me that you might be using this representation to argue for a qualitative difference in the behavior produced, but if so, I’m not sure how much I buy into it.
For concreteness, suppose the agent starts out with A, and notices a series of trades which first involves trading A for B, and then B for A+. It seems to me that if I frame the problem like this, the structure of the resulting tree should be isomorphic to that of the decision problem I described, but not necessarily the “up”/”down” version—at least, not if you consider that version to play a key role in DSM’s recommendation.
(In particular, my frame is sensitive to which state the agent is initialized in: if it is given B to start, then it has no particular incentive to want to trade that for either A or A+, and so faces no incentive to trade at all. If you initialize the agent with A or B at random, and institute the rule that it doesn’t trade by default, then the agent will end up with A+ when initialized with A, and B when initialized with B—which feels a little similar to what you said about DSM allowing both A+ and B as permissible options.)
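The initialization-sensitive frame in the parenthetical above can be sketched as follows. The rule is hypothetical and non-myopic: the agent looks at the endpoints reachable by following a chain of offered trades any number of steps, and moves only if some endpoint is strictly preferred to its starting state.

```python
# Sketch of the initialization-sensitive frame (hypothetical rule, not
# the DSM formalism): follow a trade chain only if doing so ends at a
# state strictly preferred to the starting state; otherwise stay put.

PREFERS = {("A+", "A")}  # the only strict preference in the example

def prefers(x, y):
    return (x, y) in PREFERS

def best_endpoint(start, trade_chain):
    """Endpoints reachable by taking some prefix of the chain; pick the
    furthest one strictly preferred to the start, else stay put."""
    reachable = [start] + list(trade_chain)
    improvements = [x for x in reachable if prefers(x, start)]
    return improvements[-1] if improvements else start

print(best_endpoint("A", ["B", "A+"]))  # 'A+': the chain ends somewhere
                                        # strictly better than A
print(best_endpoint("B", ["A", "A+"]))  # 'B': nothing on offer is
                                        # strictly preferred to B
```

This reproduces the asymmetry described above: initialized with A, the agent trades through B to reach A+; initialized with B, it has no incentive to trade at all.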
It sounds like you want the agent’s initial state left out of account entirely: assign values only to terminal nodes in the tree, take the subset of those terminal nodes which have maximal utility within a particular incomparability class, and choose arbitrarily among those. My frame, then, would be equivalent to using the agent’s initial state as a tiebreaker: whichever terminal node shares an incomparability class with the agent’s initial state will be the one the agent chooses to steer towards.
...in which case, assuming I got the above correct, I think I stand by my initial claim that this will lead to behavior which, while not necessarily “trammelling” by your definition, is definitely consequentialist in the worrying sense: an agent initialized in the “shutdown button not pressed” state will perform whatever intermediate steps are needed to navigate to the maximal-utility “shutdown button not pressed” state it can foresee, including actions which prevent the shutdown button from being pressed.
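The tiebreaker reading behind this claim can be sketched directly. The class labels and utilities below are hypothetical; the two incomparability classes stand in for (say) unpressed vs. pressed trajectories, and the agent steers to the best terminal node in whichever class it started in.

```python
# Sketch of the "initial state as tiebreaker" reading (hypothetical
# utilities and class labels, not the DSM formalism): among terminal
# nodes, keep those in the initial state's incomparability class and
# steer to the maximal-utility one.

CLASS = {"A": 0, "A+": 0, "B": 1}    # incomparability classes
UTILITY = {"A": 1, "A+": 2, "B": 1}  # comparable only within a class

def choose(terminals, initial):
    candidates = [t for t in terminals if CLASS[t] == CLASS[initial]]
    return max(candidates, key=UTILITY.get)

print(choose(["A", "A+", "B"], "A"))  # 'A+'
print(choose(["A", "A+", "B"], "B"))  # 'B'
```

On this rule the initial class fully determines the destination class, which is exactly the initialization-dependence at issue.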