Slide 8 actually points towards a way to use imitation learning to hopefully make a competitive AI: IDA. Yet in this case, I’m not sure that your result implies safety. For IDA isn’t a one-shot imitation learning problem; it’s many successive imitation learning problems. Even if you limit the drift for one step of imitation learning, the model could drift further and further at each distillation step.
I don’t think this is a lethal problem. The setting is not one-shot, it’s imitation over some duration of time. IDA just increases the effective duration of time, so you only need to tune how cautious the learning is (which I think is controlled by α in this work) accordingly: there is a cost, but it’s bounded. You also need to deal with non-realizability (after enough amplifications the system is too complex for exact simulation, even if it wasn’t to begin with), but this should be doable using infra-Bayesianism (I already have some notion of how that would work). Another problem with imitation-based IDA is that external unaligned AI might leak into the system, either from the future or from counterfactual scenarios in which such an AI is instantiated. This is not an issue when amplifying by parallelism (like in the presentation), but that comes at the cost of requiring parallelizability.
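The "tune the caution accordingly" argument can be sketched quantitatively. Here is a minimal, hypothetical illustration (the names `drift_per_step`, `total_drift_bound`, and the linear-accumulation assumption are mine, not from the work under discussion): if each distillation step incurs at most α of imitation drift, and drift compounds at most additively, then k successive steps drift by at most kα, so keeping a total budget ε just requires α = ε/k.

```python
# Hypothetical sketch: per-step drift bounded by alpha, assumed to
# accumulate at most additively across k distillation steps.
# None of these names come from the original work; this only
# illustrates why the cost of more steps is bounded, not free.

def total_drift_bound(alpha: float, k: int) -> float:
    """Worst-case accumulated drift after k distillation steps,
    under the simplifying assumption of additive accumulation."""
    return alpha * k

def required_alpha(epsilon: float, k: int) -> float:
    """Per-step caution needed to keep total drift under epsilon."""
    return epsilon / k

eps, k = 0.01, 100          # total drift budget, number of IDA steps
alpha = required_alpha(eps, k)
assert total_drift_bound(alpha, k) <= eps + 1e-12
# More steps demand proportionally more caution per step,
# but the total stays bounded by eps.
```

The point of the sketch is only that the per-step caution parameter must shrink linearly in the number of distillation steps; under a worse (e.g. multiplicative) drift-accumulation model the required tuning would be more aggressive, which is where the original commenter's worry would bite.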