Very cool! But I think there’s a crisper way to communicate the central point of this piece (or at least, a way that would have been more immediately transparent to me). Here it is:
Say you are going to use Process X to obtain a new Model. Process X can be as simple as “pre-train on this dataset”, or as complex as “use a bureaucracy of Model A to train a new LLM, then have Model B test it, then have Model C scaffold it into a control protocol, then have Model D produce some written arguments for the scaffold being safe, have a human read them, and if the human rejects them, delete everything”. Whatever Process X is, you have only two ways to obtain evidence that Process X has a particular property (like “safety”): looking a priori at the spec of Process X (without running it), or running (parts of) Process X and observing its outputs a posteriori. In the former case, you clearly need an argument for why this particular spec has the property. But in the latter case, you also need an argument for why observing those particular outputs ensures the property for this particular spec. (Pedantically speaking, this is just Kuhn’s theory-ladenness of observations.)
Of course, the above reasoning doesn’t rule out the possibility that the required arguments are trivial to make. That’s why you summarize some well-known complications of automation, showing that the argument will not be trivial when Process X contains a lot of automation, and that in fact it would be simpler if we could do away with the automation.
Note also that the outputs observed from Process X might themselves be human-readable arguments. While this could indeed alleviate the burden of human argument-generation, we still need a prior (possibly simpler) argument for why “a human accepting those output arguments” actually ensures the property (especially given that those arguments could be highly out-of-distribution for the human).
Yes, that is a clean alternative framing!