Suppose I am interested in finding a program M whose input-output behavior has some property P that I can probabilistically check relatively quickly (e.g. I want to check whether M implements a sparse cut of some large implicit graph). I believe there is some simple and fast program M that does the trick. But even this relatively simple M is much more complex than the specification of the property P.
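As a toy version of the parenthetical example, here is roughly what such a quick probabilistic check could look like; the graph interface, the sample count, and the sparsity threshold are all illustrative assumptions rather than anything from the setup above.

```python
def probably_satisfies_P(M, sample_edge, n_samples=10_000, threshold=0.01):
    """Toy probabilistic check of P: does the 0/1 vertex labeling computed by M
    look like a sparse cut of the implicit graph? sample_edge() is assumed to
    draw a uniformly random edge (u, v); we estimate the fraction of edges that
    cross the cut. (A real check would also need to confirm the cut is
    non-trivial, e.g. reasonably balanced; that part is omitted here.)"""
    crossing = sum(M(u) != M(v) for u, v in (sample_edge() for _ in range(n_samples)))
    return crossing / n_samples < threshold
```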
Now suppose I search for the simplest program running in time T that has property P. If T is sufficiently large, then I will end up getting the program “Search for the simplest program running in time T’ that has property P, then run that.” (Or something even simpler, but the point is that it will make no reference to the intended program M since encoding P is cheaper.)
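To make the delegation explicit, here is a minimal sketch of the kind of program the outer search can return instead of M. The point is visible in the code: its description mentions only the cheap-to-specify check for P and a time budget, never M itself. `run_program` and `check_P` are placeholders for whatever universal machine and concrete check we have in mind, not anything specified above.

```python
from itertools import count, product

def delegating_program(run_program, check_P, T_inner):
    """Find the simplest program that runs within T_inner steps and whose
    behavior satisfies P, then behave like it. run_program(code, budget) is
    assumed to return the candidate's behavior, or None if it blows the budget;
    check_P is the (probabilistic) test for property P."""
    for length in count(1):                        # enumerate descriptions by length
        for bits in product("01", repeat=length):  # every bitstring of this length
            candidate = "".join(bits)
            behavior = run_program(candidate, budget=T_inner)
            if behavior is not None and check_P(behavior):
                return behavior                    # act like the first program found
```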
I may be happy enough with this outcome, but there’s some intuitive sense in which something weird and undesirable has happened here (and I may get in a distinctive kind of trouble if P is an approximate evaluation). I think this is likely to be a useful maximally-simplified example to think about.
The speed prior is calibrated such that this never happens if the learned optimizer is just using brute force—if it needs to search over 1 extra bit then it will take 2x longer, offsetting the gains.
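Spelling that out on one common convention (a program p costs |p| + log2(time(p)) bits under the speed prior, and the search's own bookkeeping overhead is ignored): a brute-force inner search specifies only P and enumerates the 2^|M| candidate descriptions, running each for up to time T, so it pays roughly

$$ |P| + \log_2\!\big(2^{|M|} \cdot T\big) \;=\; |P| + |M| + \log_2 T \;\ge\; |M| + \log_2 T, $$

which is never less than what M itself pays, so brute-force delegation buys nothing.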
That means that in the regime where P is simple, the speed prior is the “least you can reasonably care about speed”—if you care even less, you will just end up pushing the optimization into an inner process that is more concerned with speed and is therefore able to try a bunch of options.
(However, this is very mild, since the speed prior cares only a tiny bit about speed. Adding 100 bits to your program is the same as letting it run 2^100 times longer, so you are basically just optimizing for simplicity.)
To make this concrete, suppose that I instead used the kind-of-speed prior, where taking 4x longer is equivalent to using 1 extra bit of description complexity. And suppose that P is very simple relative to the complexities of the other objects involved. Suppose that the “object-level” program M has 1000 bits and runs in 2^2000 time, so has kind-of-speed complexity 2000 bits. A search that uses the speed prior will be able to find this algorithm in 2^3000 time, and so will have a kind-of-speed complexity of 1500 bits. So the kind-of-speed prior will just end up delegating to the speed prior.
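Running the numbers from this example, with the simplifying assumption that P (and hence the inner search's own description) is close to 0 bits:

```python
def speed_cost(bits, log2_time):
    """Speed prior as used above: running 2x longer = 1 extra bit."""
    return bits + log2_time

def kind_of_speed_cost(bits, log2_time):
    """Kind-of-speed prior: running 4x longer = 1 extra bit."""
    return bits + log2_time / 2

# Object-level program M: 1000 bits of description, 2^2000 steps of run time.
# Inner search: ~0 bits of description, ~2^1000 candidates run for ~2^2000 steps
# each, i.e. ~2^3000 steps total.
print(kind_of_speed_cost(1000, 2000))  # 2000.0 bits for M
print(kind_of_speed_cost(0, 3000))     # 1500.0 bits for the search -- the search wins
print(speed_cost(1000, 2000))          # 3000 bits for M
print(speed_cost(0, 3000))             # 3000 bits for the search -- brute force gains nothing
```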
The speed prior still delegates to better search algorithms though. For example, suppose that someone is able to fill in a 1000 bit program using only 2^500 steps of local search. Then the local search algorithm has speed prior complexity 500 bits, so will beat the object-level program. And the prior we’d end up using is basically “2x longer = 2 more bits” instead of “2x longer = 1 more bit,” i.e. we end up caring more about speed because we delegated.
The actual limit on how much you care about speed is given by whatever search algorithms work best. I think it’s likely possible to “expose” what is going on to the outer optimizer (so that it finds a hypothesis like “This local search algorithm is good” and then uses it to find an object-level program, rather than directly finding a program that bundles both of them together). But I’d guess intuitively that it’s just not even meaningful to talk about the “simplest” programs or any prior that cares less about speed than the optimal search algorithm.
I think multi-level search may help here. To the extent that you can get a lower-confidence estimate of P much more quickly, you can budget your total search time such that you examine many programs and then re-examine only the good candidates. If your confidence is linear with time/complexity of evaluation, this probably doesn’t help.
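As a sketch of what that multi-level search could look like (the survivor fraction and the split between a cheap noisy score and the expensive check are illustrative assumptions):

```python
def two_stage_search(candidates, cheap_score, expensive_check, keep_fraction=0.01):
    """Screen every candidate with a fast, low-confidence score for P, then spend
    the expensive, high-confidence check only on the most promising survivors."""
    ranked = sorted(candidates, key=cheap_score, reverse=True)    # fast, noisy pass
    survivors = ranked[: max(1, int(len(ranked) * keep_fraction))]
    return [c for c in survivors if expensive_check(c)]           # slow, careful pass
```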
This is interesting to me for two reasons:
[Mainly] Several proposals for avoiding the instrumental policy work by penalizing computation. But I have a really shaky philosophical grip on why that’s a reasonable thing to do, and so all of those solutions end up feeling weird to me. I can still evaluate them based on what works on concrete examples, but things are slippery enough that plan A is getting a handle on why this is a good idea.
In the long run I expect to have to handle learned optimizers by having the outer optimizer instead directly learn whatever the inner optimizer would have learned. This is an interesting setting to look at how that works out. (For example, in this case the outer optimizer just needs to be able to represent the hypothesis “There is a program that has property P and runs in time T’ ” and then do its own search over that space of faster programs.)
In traditional settings, we are searching for a program M that is simpler than the property P. For example, the number of parameters in our model should be smaller than the size of the dataset we are trying to fit if we want the model to generalize. (This isn’t true for modern DL because of subtleties with SGD optimizing imperfectly and implicit regularization and so on, but spiritually I think it’s still fine.)
But this breaks down if we start doing something like imposing consistency checks and hoping that those change the result of learning. Intuitively it’s also often not true for scientific explanations—even simple properties can be surprising and require explanation, and can be used to support theories that are much more complex than the observation itself.
Some thoughts:
It’s quite plausible that in these cases we want to be doing something other than searching over programs. This is pretty clear in the “scientific explanation” case, and maybe it’s the way to go for the kinds of alignment problems I’ve been thinking about recently.
A basic challenge with searching over objects other than programs is that we have to interpret that other kind of data. For example, if a “correspondence between two models of physics” is some kind of different object like a description in natural language, then some amplified human is going to have to think about that correspondence to see if it explains the facts. If we search over correspondences, some of them will be “attacks” on the human that basically convince them to run a general computation in order to explain the data. So we have two options: (i) perfectly harden the evaluation process against such attacks, or (ii) try to ensure that there is always some way to just directly do whatever the attacker convinced the human to do. But (i) seems quite hard, and (ii) basically requires us to put all of the generic programs in our search space.
It’s also quite plausible that we’ll just give up on things like consistency conditions. But those come up frequently enough in intuitive alignment schemes that I at least want to give them a fair shake.
Just spit-balling here, but it seems that T’ will have to be much smaller than T, since the inner search has to run each candidate program (possibly multiple times for statistical guarantees) for up to T’ steps to check whether it satisfies P. Because of this, you should be able to just run the inner search and see what it comes up with (which, if it’s yet another search, will have an even shorter run time) and pretty quickly (relative to T) find the actual M.