For robustness, you have a dataset that’s drawn from the wrong distribution, and you need to act as you would have acted if it had been drawn from the correct distribution. If you have an amplification dynamic that moves models towards a few attractors, then changing the starting point (the training distribution rather than the target distribution) probably won’t matter. At that point the issue is for the attractor to be useful with respect to all of those starting distributions/models. This doesn’t automatically make sense: comparing models by usefulness doesn’t fall out of the other concepts.
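A toy way to see the attractor point (purely an illustrative sketch with a made-up amplification operator, not anything from the amplification literature): if one amplification step acts like a contraction on model space, repeated amplification lands on the same fixed point regardless of the starting distribution, which is why the wrong starting distribution "probably won't matter".

```python
# Toy sketch: treat "models" as probability distributions over 3 outcomes,
# and "amplification" as a step that mixes the current model with a fixed
# target behavior. Because each step is a contraction, every starting
# distribution is pulled to the same attractor.
import numpy as np

ATTRACTOR = np.array([0.6, 0.3, 0.1])  # hypothetical fixed point of the dynamic

def amplify(model, strength=0.5):
    """One (made-up) amplification step: move partway toward the attractor."""
    return (1 - strength) * model + strength * ATTRACTOR

def iterate(model, steps=50):
    for _ in range(steps):
        model = amplify(model)
    return model

wrong_start = np.array([0.9, 0.05, 0.05])  # "wrong" training distribution
right_start = np.array([0.2, 0.2, 0.6])    # "correct" target distribution

a = iterate(wrong_start)
b = iterate(right_start)
print(np.allclose(a, b))  # prints True: both starts converge to the attractor
```

Whether a real amplification procedure behaves like a contraction, and whether the resulting attractor is useful, is exactly the open question in the comment above.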
Interesting. Do you have any links discussing this? I read Paul Christiano’s post on reliability amplification, but couldn’t find mention of this. And, alas, I’m having trouble finding other relevant articles online.
Amplification induces a dynamic in the model space; it’s a way of improving models (or, equivalently in this context, distributions). This can be useful in various ways when you don’t have good datasets. Also, it ignores independence when talking about recomputing things.
Yes, that’s true. I’m not claiming that iterated amplification doesn’t have advantages. What I’m wondering is whether non-iterated amplification is a viable alternative. I haven’t seen non-iterated amplification proposed before for creating algorithmic AI. Amplification without iteration has the disadvantage that it may lack the attractor dynamic iterated amplification has, but it also doesn’t have the exponentially increasing unreliability that iterated amplification has. So it’s not clear to me whether pursuing iterated amplification is a more promising strategy than amplification without iteration.
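To make the "exponentially increasing unreliability" concrete, here is a back-of-the-envelope sketch (the 1% per-round failure rate is invented for illustration, and the independence assumption is a simplification, not a claim about any specific scheme): if each amplification round fails with probability ε and a single failure corrupts everything downstream, the probability the n-th iterate is still good decays exponentially in n, whereas a single non-iterated amplification step only pays ε once.

```python
# Back-of-the-envelope: assume each amplification round independently
# fails with probability eps and failures are never corrected, so
# P(result after n rounds is still good) = (1 - eps) ** n.
def survival(eps, n):
    return (1 - eps) ** n

eps = 0.01  # hypothetical 1% failure rate per amplification round
for n in (1, 10, 100, 500):
    # survival probability shrinks exponentially as rounds accumulate
    print(n, round(survival(eps, n), 3))
```

This is the trade-off in the comment above: iteration may buy you the attractor dynamic, but each extra round multiplies in another factor of (1 − ε).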