I think instrumental convergence should still apply to some utility functions over policies, specifically the ones that seem to produce “smart” or “powerful” behavior from simple rules. But I don’t know how to formalize this or if anyone else has.
Since you can convert a utility function over states or observation-histories into a utility function over policies (well, as long as you have a model for measuring the utility of a policy), and since utility functions over states/observation-histories do satisfy instrumental convergence, yes, you are correct.
I feel like in a way, one could see the restriction to defining it in terms of e.g. states as a definition of “smart” behavior; if you define a reward in terms of states, then the policy must “smartly” generate those states, rather than just yield some sort of arbitrary behavior.
🤔 I wonder if this approach could generalize TurnTrout’s approach. I’m not entirely sure how, but we might imagine that a structured utility function u(π) over policies could be decomposed into r(f(π)), where f extracts the features that the utility function pays attention to, and r is the utility function expressed in terms of those features. E.g. for state-based rewards, one might take f to be a model that yields the distribution of states visited by the policy, and r to be the reward function for the individual states (some sort of modification would have to be made to address the fact that f outputs a distribution but r takes in a single state… I guess this could be handled by working in the category of vector spaces and linear transformations, but I’m not sure if that’s the best approach in general—though since Set can be embedded into this category, it surely can’t hurt too much).
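For instance, with a state-based reward the decomposition can be sketched numerically (the Markov chain, horizon, and reward below are all made up for illustration); the “modification” needed for f outputting a distribution is then just linear extension of r:

```python
# A minimal sketch of the decomposition u(π) = r(f(π)) for a state-based
# reward: f maps a policy to its average state-visitation distribution in
# a made-up 3-state Markov chain, and r extends to distributions linearly.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],   # hypothetical policy-induced transition matrix
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
start = np.array([1.0, 0.0, 0.0])
horizon = 4

dists, d = [], start
for _ in range(horizon):          # f(π): average visitation distribution
    dists.append(d)
    d = d @ P
f_pi = np.mean(dists, axis=0)

r = np.array([0.0, 1.0, 2.0])     # reward for each individual state
u = float(f_pi @ r)               # u(π) = r(f(π)), by linear extension
print(u)  # 0.84375
```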
Then the power-seeking situation boils down to this: the vast majority of policies π lead to essentially the same features f(π), but there is a small set of power-seeking policies that lead to a vastly greater range of different features? And so for most r, a π that optimizes/satisfices/etc. r∘f will come from this small set of power-seeking policies.
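A toy numerical sketch of this counting argument, with entirely made-up numbers: suppose 990 “ordinary” policies all map to the same feature vector, while 10 “power-seeking” policies span 10 distinct feature vectors. A random linear reward over the features is then optimized by a power-seeking policy about 90% of the time:

```python
# Toy counting argument: most policies share one feature vector, a small
# "power-seeking" set spans many, so random rewards favor the small set.
import random
random.seed(0)

n_features = 10
e = lambda i: tuple(1.0 if j == i else 0.0 for j in range(n_features))
policies = [("ordinary", e(0))] * 990 + [("seeking", e(i)) for i in range(n_features)]

trials, wins = 1000, 0
for _ in range(trials):
    r = [random.gauss(0, 1) for _ in range(n_features)]  # random reward over features
    score = lambda p: sum(ri * vi for ri, vi in zip(r, p[1]))
    wins += max(policies, key=score)[0] == "seeking"
print(wins / trials)  # ≈ 0.9: the chance that some r_i with i ≠ 0 is the largest
```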
I’m not sure how to formalize this. I think it won’t hold for generic vector spaces, since almost all linear transformations between spaces of the same dimension are invertible? But it seems to me that in reality, there’s a great degree of non-injectivity. The idea of “chaos inducing abstractions” seems relevant, in the sense that parameter changes in π will mostly tend to lead to completely unpredictable/unsystematic/dissipated effects, and only partly tend to lead to predictable and systematic effects. If most of the effects are unpredictable/unsystematic, then f must be extremely non-injective, and this non-injectivity then generates power-seeking.
(Or does it? I guess you’d have to have some sort of interaction effect, where some parameters control the degree to which the function is injective with regard to other parameters. But that seems to hold in practice.)
I’m not sure whether I’ve said anything new or useful.
though since Set can be embedded into [Vect], it surely can’t hurt too much
As an aside, can you link to/say more about this? Do you mean that there exists a faithful functor from Set to Vect (the category of vector spaces)? If you mean that, then every concrete category can be embedded into Vect, no? And if that’s what you’re saying, maybe the functor Set → Vect is something like the “Group to its group algebra over field k” functor.
As an aside, can you link to/say more about this? Do you mean that there exists a faithful functor from Set to Vect (the category of vector spaces)? If you mean that, then every concrete category can be embedded into Vect, no?
Yes, the free vector space functor. For a finite set X, it’s just the functions X→R, with operations defined pointwise. For infinite sets, it is the subset of those functions that have finite support. It’s essentially the same as what you’ve been doing by considering Rd for an outcome set with d outcomes, except with members of a set as indices, rather than numerically numbering the outcomes.
Actually I just realized I should probably clarify how it lifts functions to linear transformations too, because it doesn’t do so in the obvious way. If F is the free vector space functor and f:X→Y is a function, then F(f):F(X)→F(Y) sends g∈F(X) to the vector given by F(f)(g)(y)=∑x∈f⁻¹({y}) g(x). (One way of understanding why the functions X→R must have finite support is that it ensures this sum is well-defined. Though there are alternatives to requiring finite support, as long as one is willing to embed a more structured category than Set into a more structured category than Vect.)
It may be more intuitive to see the free vector space over X as containing formal sums c0x0+⋯+cnxn for xi∈X and ci∈R. The downside to this is that it requires a bunch of quotients, e.g. to ensure commutativity, associativity, distributivity, etc.
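As a concrete sketch, the finite-support representation and the lifted map can be written in a few lines of Python, with a vector in F(X) represented as a dict from elements of X to their nonzero coefficients:

```python
# Sketch of the free vector space functor F: Set → Vect, with vectors in
# F(X) represented as finitely supported functions X → R, i.e. dicts
# holding only the nonzero coefficients.

def free_vector(coeffs):
    """A vector in F(X): keep only the nonzero coefficients."""
    return {x: c for x, c in coeffs.items() if c != 0}

def lift(f):
    """F(f): F(X) → F(Y) for f: X → Y. The coefficient of y is the sum
    of g(x) over the preimage f⁻¹({y})."""
    def F_f(g):
        out = {}
        for x, c in g.items():
            out[f(x)] = out.get(f(x), 0) + c
        return free_vector(out)
    return F_f

# Example: f collapses 'a' and 'b' to a single element, so the lifted
# map adds their coefficients.
f = lambda x: 'ab' if x in ('a', 'b') else x
g = free_vector({'a': 2.0, 'b': 3.0, 'c': 1.0})
print(lift(f)(g))  # {'ab': 5.0, 'c': 1.0}
```

Note that `lift(identity)` is the identity on F(X), as functoriality requires.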
Imagine that policies decompose into two components, π=ρ⊗σ. For instance, they may be different sets of parameters in a neural network. We can then talk about the effect of one of the components by considering how it influences the power/injectivity of the features with respect to the other component.
Suppose, for instance, that ρ is such that the policy just ends up acting in a completely random-twitching way. Technically σ has a lot of effect too, in that it chaotically controls the pattern of the twitching, but in terms of the features f, σ is basically constant. This is a low-power situation, and if one actually specified what f would be, then a TurnTrout-style argument could probably prove that such values of ρ would be avoided for power-seeking reasons. On the other hand, if ρ made the policy act like an optimizer which optimizes a utility function over the features f, with the utility function specified by σ, then that would lead to a lot more power/injectivity.
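A deliberately trivial sketch of the two cases, with “which terminal state the policy reaches” standing in for the features f (all names here are made up):

```python
# Toy illustration: ρ controls how injective the features are in σ.

def features(rho, sigma):
    """f(π) for π = ρ ⊗ σ: the terminal state the policy reaches."""
    if rho == "twitch":
        # Random-twitching: σ chaotically shapes the twitching, but the
        # coarse features are the same regardless of σ; f is constant.
        return "start"
    else:  # rho == "optimize"
        # Optimizer: σ specifies which feature to pursue, so distinct
        # values of σ reach distinct features; f is injective in σ.
        return f"goal_{sigma}"

print({features("twitch", s) for s in range(5)})         # {'start'}
print(len({features("optimize", s) for s in range(5)}))  # 5
```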
On the other hand, I wonder if there’s a limit to this style of argument. Too much noninjectivity would require crazy interaction effects to fill out the space in a Hilbert-curve-style way, which would be hard to optimize?
I think instrumental convergence should still apply to some utility functions over policies, specifically the ones that seem to produce “smart” or “powerful” behavior from simple rules.
I share an intuition in this area, but “powerful” behavior tendencies seem nearly equivalent to instrumental convergence to me; they feel logically downstream of instrumental convergence.
from simple rules
I already have a (somewhat weak) result on power-seeking wrt the simplicity prior over state-based reward functions. This isn’t about utility functions over policies, though.
Too much noninjectivity would require crazy interaction effects to fill out the space in a Hilbert-curve-style way, which would be hard to optimize?
Actually, upon thinking further, I don’t think this argument works, at least not as it is written right now.