In traditional settings, we are searching for a program M that is simpler than the property P. For example, the number of parameters in our model should be smaller than the size of the dataset we are trying to fit if we want the model to generalize. (This isn’t literally true for modern deep learning, because of subtleties like SGD optimizing imperfectly and implicit regularization and so on, but spiritually I think it’s still fine.)
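To make the “simpler than” criterion concrete, here is a minimal sketch in Python, assuming a crude description-length comparison; the function names and the toy data are invented for illustration and aren’t part of any real setup. The idea is just that we only accept a candidate program M if writing M down is cheaper than writing down the observations it is supposed to explain.

```python
# A minimal, hypothetical sketch of the "M should be simpler than P" criterion,
# using raw byte length as a crude stand-in for description length.

def description_length_bits(obj: bytes) -> int:
    """Crude proxy for description length: just the raw size in bits."""
    return 8 * len(obj)

def worth_keeping(program_src: str, raw_data: bytes) -> bool:
    """Accept a candidate program only if encoding it is cheaper than encoding
    the data directly; otherwise it hasn't compressed anything, and we have no
    simplicity-based reason to expect it to generalize."""
    return description_length_bits(program_src.encode()) < description_length_bits(raw_data)

# Toy example: a short rule vs. the longer table of observations it explains.
observations = b"0,0\n1,2\n2,4\n3,6\n4,8\n5,10\n6,12\n7,14\n8,16\n9,18\n"
candidate = "f = lambda x: 2 * x"
print(worth_keeping(candidate, observations))  # True: the rule is shorter than the data
```

The parameter-counting heuristic is the same idea in this crude sense: a model with fewer parameters than data points is a compression of the data.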
But this breaks down if we start imposing consistency checks and hoping that those checks change the result of learning. Intuitively, it is also often not true for scientific explanations: even simple properties can be surprising and require explanation, and they can be used to support theories that are much more complex than the observation itself.
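As a toy illustration of how a consistency condition can change what the search returns, here is a small sketch; everything in it (the candidates, the stand-in consistency predicate, the data) is hypothetical and invented for this example. The point is only that once we filter by an extra check, the surviving program can be much longer than the one that would have won on fit alone, so the simplicity story above no longer constrains what we learn.

```python
# Hypothetical toy: adding a consistency check changes which program the search
# returns, and the new winner need not be shorter than the data it explains.

observations = [(x, 2 * x) for x in range(10)]  # toy "property" P

# Candidate "programs", shortest first, as (source, predictions) pairs.
candidates = [
    ("f(x) = 2x", [2 * x for x in range(10)]),
    ("f(x) = 2x plus a long table of special cases " + "#" * 80,
     [2 * x for x in range(10)]),
]

def fits(preds):
    """Does the candidate reproduce the observations?"""
    return preds == [y for _, y in observations]

def consistent(source):
    """Stand-in consistency condition; suppose it happens to reject the short program."""
    return "table" in source

plain_winner = next(src for src, preds in candidates if fits(preds))
checked_winner = next(src for src, preds in candidates if fits(preds) and consistent(src))
print(plain_winner)    # the short program
print(checked_winner)  # the long program: simplicity no longer bounds what we learn
```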
Some thoughts:
It’s quite plausible that in these cases we want to be doing something other than searching over programs. This is pretty clear in the “scientific explanation” case, and maybe it’s the way to go for the kinds of alignment problems I’ve been thinking about recently.
A basic challenge with searching over programs is that we have to interpret the other data. For example, if the “correspondence between two models of physics” is a different kind of object, like a description in natural language, then some amplified human is going to have to think about that correspondence to see whether it explains the facts. If we search over correspondences, some of them will be “attacks” on the human that basically convince them to run a general computation in order to explain the data. So we have two options: (i) perfectly harden the evaluation process against such attacks, or (ii) try to ensure that there is always some way to just directly do whatever the attacker convinced the human to do. But (i) seems quite hard, and (ii) basically requires us to put all of the generic programs in our search space.
It’s also quite plausible that we’ll just give up on things like consistency conditions. But those come up frequently enough in intuitive alignment schemes that I at least want to give them a fair shake.