🤔 I was about to say that I felt like my approach could still be done in terms of state rewards, and that it's just that my approach violates some of the technical assumptions in the OP. After all, you could just reward being in a state such that the various counterfactuals apply when rolling out from that state; this would assign higher utility to the blue states than the red states, encouraging corrigibility, and contradicting TurnTrout's assumption that utility is assigned solely based on the letter.
But then I realized that this introduces a policy dependence into the reward function: the way you roll out from a state depends on which policy you have. (Well, in principle; in practice some MDPs may not depend on it much.) The special thing about state-based rewards is that you can assign utilities to trajectories without considering the policy that generates the trajectory at all. (Which to me seems bad for corrigibility, since corrigibility depends on the reasons for the trajectories, and not just on the trajectories themselves.)
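To make that contrast concrete, here is a minimal sketch under my own toy framing (nothing from the OP): a state-based utility scores a trajectory with no reference to the policy, whereas a reward for "the correction counterfactuals hold from here" needs the policy in order to roll anything out. The helper names rollout and humans_retain_control are invented placeholders.

```python
# Toy sketch: state-based utility vs. a rollout-dependent "counterfactual" reward.
# rollout and humans_retain_control are hypothetical placeholders, not real APIs.

def u_state_based(trajectory, R):
    """Score a trajectory purely from the states visited; the policy never appears."""
    return sum(R(s) for s in trajectory)

def u_counterfactual(state, policy, rollout, humans_retain_control):
    """Reward being in a state from which, under *this* policy, humans keep control.
    Evaluating this requires rolling the policy out, so it is policy-dependent."""
    return 1.0 if humans_retain_control(rollout(state, policy)) else 0.0
```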
But now consider the following: if you have the policy, you can figure out which actions were taken just by applying the policy to the state/history. And instrumental convergence does not apply to utility functions over action-observation histories. So it doesn't apply to utility functions over (policy, observation-history) pairs either. (I think?? At least if the set of policies is closed under replacing an action under a specified condition, and there are no Newcombian issues that create non-causal dependencies between policies and observation histories.)
So a lot of the instrumental-convergence power comes from restricting what the utility function is allowed to consider. u-AOH is clearly too broad, since it allows assigning utilities to arbitrary sequences of actions with identical effects; and at the same time u-AOH, u-OH, and ordinary state-based reward functions (can we call that u-S?) are all too narrow, since none of them allow assigning utilities to counterfactuals, which is required in order to phrase things like "humans have control over the AI" (as this is a causal statement and thus depends on the AI).
We could consider u-P, utility functions over policies. This is the most general sort of utility function (I think??), and as such it is also way, way too general, just like u-AOH is. I think maybe what I should try to do is define some causal/counterfactual generalizations of u-AOH, u-OH, and u-S which allow better-behaved utility functions.
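As a way of keeping these classes straight, here is a sketch of their type signatures (the naming and the placeholder types Action, Obs, State are mine, not from the OP):

```python
from typing import Any, Callable, Sequence, Tuple

Action = Any   # placeholder types; the formal setup would pin these down
Obs = Any
State = Any
Policy = Callable[[Sequence[Obs]], Action]               # observation history -> action

U_AOH = Callable[[Sequence[Tuple[Action, Obs]]], float]  # over action-observation histories
U_OH = Callable[[Sequence[Obs]], float]                  # over observation histories
U_S = Callable[[State], float]                           # ordinary state-based reward, "u-S"
U_P = Callable[[Policy], float]                          # directly over policies, "u-P"
```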
I think instrumental convergence should still apply to some utility functions over policies, specifically the ones that seem to produce "smart" or "powerful" behavior from simple rules. But I don't know how to formalize this or if anyone else has.
Since you can convert a utility function over states or observation histories into a utility function over policies (well, as long as you have a model for measuring the utility of a policy), and since utility functions over states/observation histories do satisfy instrumental convergence, yes, you are correct.
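For concreteness, the conversion in question could look something like this (a sketch under my own assumptions; sample_history stands in for whatever environment model is available):

```python
import random

def to_policy_utility(u_OH, sample_history, episodes=1000, seed=0):
    """Turn a utility function over observation histories into one over policies,
    by taking the expected utility of the histories the policy generates."""
    rng = random.Random(seed)
    def u_P(policy):
        total = sum(u_OH(sample_history(policy, rng)) for _ in range(episodes))
        return total / episodes
    return u_P
```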
I feel like, in a way, one could see the restriction to defining it in terms of e.g. states as a definition of "smart" behavior: if you define a reward in terms of states, then the policy must "smartly" generate those states, rather than just yield some sort of arbitrary behavior.
🤔 I wonder if this approach could generalize TurnTrout's approach. I'm not entirely sure how, but we might imagine that a structured utility function u(π) over policies could be decomposed into r(f(π)), where f extracts the features that the utility function pays attention to, and r is the utility function expressed in terms of those features. E.g. for state-based rewards, one might take f to be a model that yields the distribution of states visited by the policy, and r to be the reward function for the individual states (some sort of modification would have to be made to address the fact that f outputs a distribution but r takes in a single state... I guess this could be handled by working in the category of vector spaces and linear transformations, but I'm not sure if that's the best approach in general, though since Set can be embedded into this category, it surely can't hurt too much).
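Here is a sketch of that decomposition for the state-based case, under my own toy assumptions: f maps a policy to its empirical state-visitation distribution, and r is extended from single states to distributions by linearity, i.e. by taking expectations. The environment, policy, and numbers below are all invented for illustration.

```python
import random
from collections import Counter

def f(policy, env_step, start_state, horizon=20, episodes=200, seed=0):
    """Feature map: the empirical distribution of states visited by the policy."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(episodes):
        s = start_state
        for _ in range(horizon):
            counts[s] += 1
            s = env_step(s, policy(s), rng)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def lift(r):
    """Extend a per-state reward r linearly to distributions over states."""
    return lambda dist: sum(p * r(s) for s, p in dist.items())

def u(policy, r, env_step, start_state):
    """The decomposed utility u(pi) = r(f(pi)), with r lifted to distributions."""
    return lift(r)(f(policy, env_step, start_state))

# Tiny usage on an invented four-state chain: action +1 moves right with prob. 0.9.
step = lambda s, a, rng: min(s + a, 3) if rng.random() < 0.9 else s
print(u(lambda s: 1, lambda s: float(s == 3), step, start_state=0))
```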
Then the power-seeking situation boils down to this: the vast majority of policies π lead to essentially the same features f(π), while a small set of power-seeking policies leads to a vastly greater range of different features. And so for most r, a π that optimizes/satisfices/etc. r∘f will come from this small set of power-seeking policies.
I'm not sure how to formalize this. I think it won't hold for generic vector spaces, since almost all linear transformations are invertible? But it seems to me that in reality there's a great degree of non-injectivity. The idea of "chaos-inducing abstractions" seems relevant, in the sense that parameter changes in π will mostly tend to lead to completely unpredictable/unsystematic/dissipated effects, and only partly tend to lead to predictable and systematic effects. If most of the effects are unpredictable/unsystematic, then f must be extremely non-injective, and this non-injectivity then generates power-seeking.
(Or does it? I guess you'd have to have some sort of interaction effect, where some parameters control the degree to which the function is injective with regards to other parameters. But that seems to hold in practice.)
I'm not sure whether I've said anything new or useful.
As an aside, can you link to/say more about this? Do you mean that there exists a faithful functor from Set to Vect (the category of vector spaces)? If you mean that, then every concrete category can be embedded into Vect, no? And if that's what you're saying, maybe the functor Set → Vect is something like the "group to its group algebra over a field k" functor.
Yes, the free vector space functor. For a finite set X, it's just the functions X → ℝ, with operations defined pointwise. For infinite sets, it is the subset of those functions that have finite support. It's essentially the same as what you've been doing by considering ℝ^d for an outcome set with d outcomes, except with the members of the set as indices, rather than numerically numbered outcomes.
Actually I just realized I should probably clarify how it lifts functions to linear transformations too, because it doesn't do so in the obvious way. If F is the free vector space functor and f: X → Y is a function, then F(f): F(X) → F(Y) is given by (F(f)(v))(y) = ∑_{x ∈ f⁻¹({y})} v(x) for v ∈ F(X). (One way of understanding why the functions X → ℝ must have finite support is that it ensures this sum is well-defined. Though there are alternatives to requiring finite support, as long as one is willing to embed a more structured category than Set into a more structured category than Vect.)
It may be more intuitive to see the free vector space over X as containing formal sums c₀x₀ + ⋯ + cₙxₙ for xᵢ ∈ X and cᵢ ∈ ℝ. The downside to this is that it requires a bunch of quotients, e.g. to ensure commutativity, associativity, distributivity, etc.
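A quick executable sketch of this functor, with elements of F(X) represented as finitely-supported coefficient dictionaries (my own encoding, just to make the description above concrete):

```python
from collections import defaultdict

def basis(x):
    """The basis vector corresponding to x, i.e. the formal sum 1*x."""
    return {x: 1.0}

def add(v, w):
    """Pointwise addition of two finitely-supported vectors."""
    out = defaultdict(float)
    for vec in (v, w):
        for x, c in vec.items():
            out[x] += c
    return dict(out)

def scale(c, v):
    """Scalar multiplication."""
    return {x: c * coeff for x, coeff in v.items()}

def F(f):
    """Lift f: X -> Y to the linear map F(f): F(X) -> F(Y) that sums coefficients
    over preimages: (F(f)(v))(y) = sum of v(x) over x with f(x) = y."""
    def Ff(v):
        out = defaultdict(float)
        for x, c in v.items():
            out[f(x)] += c
        return dict(out)
    return Ff

# Example: two distinct states that share a feature get merged (non-injectivity).
v = add(scale(0.5, basis("s1")), scale(0.5, basis("s2")))
print(F(lambda s: "blue")(v))   # -> {'blue': 1.0}
```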
Imagine that policies decompose into two components, π = σ⊗ψ. For instance, they may be different sets of parameters in a neural network. We can then talk about the effect of one of the components by considering how it influences the power/injectivity of the features with respect to the other component.
Suppose, for instance, that σ is such that the policy just ends up acting in a completely random, twitching way. Technically ψ has a lot of effect too, in that it chaotically controls the pattern of the twitching, but in terms of the features f, varying ψ makes essentially no difference. This is a low-power situation, and if one actually specified what f would be, then a TurnTrout-style argument could probably prove that such values of σ would be avoided for power-seeking reasons. On the other hand, if σ made the policy act like an optimizer which optimizes a utility function over the features f, with that utility function being specified by ψ, then that would lead to a lot more power/injectivity.
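A toy numerical illustration of this point (everything here, the corridor, the buttons, and the numbers, is invented): with a "twitching" σ, varying ψ barely changes the feature, while with an "optimizing" σ the same range of ψ reaches many distinct feature values.

```python
import random

BUTTONS = [25, 30, 35, 40]   # button positions along a 1-D corridor (invented)
BUDGET = 40                  # step budget

def feature(sigma, psi):
    """f(pi): which button, if any, the policy (sigma, psi) presses within the budget."""
    pos, rng = 0, random.Random(psi)
    target = BUTTONS[psi % len(BUTTONS)]
    for _ in range(BUDGET):
        if sigma == "twitch":
            pos += rng.choice([-1, 1])   # psi only scrambles the twitching pattern
        else:  # sigma == "optimize"
            pos += 1                     # head straight for the button selected by psi
        if pos in BUTTONS and (sigma == "twitch" or pos == target):
            return pos
    return None                          # no button pressed

for sigma in ("twitch", "optimize"):
    reached = {feature(sigma, psi) for psi in range(500)}
    print(sigma, "->", sorted(reached, key=str))
```

The twitching mode almost always yields the single feature None, whereas the optimizing mode reaches all four buttons as ψ varies, which is the injectivity contrast described above.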
On the other hand, I wonder if there's a limit to this style of argument. Too much non-injectivity would require crazy interaction effects to fill out the space in a Hilbert-curve-style way, which would be hard to optimize?
Actually, upon thinking further, I don't think this argument works, at least not as it is written right now.
I share an intuition in this area, but "powerful" behavior tendencies seem nearly equivalent to instrumental convergence to me. It feels logically downstream of instrumental convergence.
I already have a (somewhat weak) result on power-seeking wrt the simplicity prior over state-based reward functions. This isn't about utility functions over policies, though.
Note that we can get a u-AOH which mostly solves ABC-corrigibility:
u(history) := 0 if the disable action is taken in the history, and R(last state) otherwise.

(Credit to AI_WAIFU on the EleutherAI Discord)
Here R is some positive reward function over terminal states. Do note that there isn't a "get yourself corrected on your own" incentive. EDIT: note that manipulation can still be weakly optimal.
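In code, the proposed u-AOH is just the following (my own encoding of histories as (action, observation) pairs; "disable" and R stand in for whatever the actual setup uses):

```python
def u(history, R, disable_action="disable"):
    """history: a list of (action, observation) pairs; R: positive reward on terminal states."""
    if any(action == disable_action for action, _ in history):
        return 0.0            # any history in which the agent disables correction scores 0
    _, last_obs = history[-1]
    return R(last_obs)        # otherwise, score the final state as usual
```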
This seems hacky; we're just ruling out the incorrigible policies directly. We aren't doing any counterfactual reasoning, we just pick out the "bad action."