Now that I feel like we’re at least on the same page, I’ll give some thoughts.
This is a neat idea, and one that I hadn’t thought of before. Thanks!
I particularly like that it offers a natural way of naming constraints that could be useful to point at.
I am unsure how much these constraints actually get strongly reified in practice. When planning in simple contexts, I expect forward-checking to be more common. The centrality of forward-checking in my conception of the relationship between terminal and instrumental goals is a big part of where I think I originally got confused and misunderstood you.
One of the big reasons I don’t focus much on constraints when thinking about corrigibility is that I think constraints are usually either brittle or crippling. I think corrigible agents will, for example, try to keep their actions reversible, but I don’t see a way to instantiate this as a constraint that both allows normal action and forbids Goodharting. Instead, I tend to think about heuristics that fall back on getting help from the principal. (“I have a rough sense of how reversible things should normally be, and if it looks like I might be going outside the normal bounds I’ll stop and check.”)
Thus, my guess is that if one naively tries to implement an agent that is genuinely constrained according to the natural set of “instrumental constraints,” or whatever we want to call them, it’ll end up effectively paralyzed.
The thing that allows a corrigible agent not to be paralyzed, in my mind, is the presence of a principal. But if I’m understanding you right, “instrumental constraint” satisfying agents don’t (necessarily) have a principal. This seems like a major difference between this idea and corrigibility.
I have some additional thoughts on how exactly the Scylla and Charybdis of being paralyzed by constraints and cleverly bypassing constraints kills you, for example with regard to resource accumulation/protection, but I think I want to end by noting a sense that naively implementing these in some kind of straightforward constrained-optimizer isn’t where the value of this idea lies. Instead, I am most interested in whether this frame can be used as a generator for corrigibility heuristics (and/or a corrigibility dataset). 🤔
Yup, exactly, and good job explaining it too.
:)