Why wouldn’t people (and maybe even AIs, at least up to a point) be applying these ever-advancing AI capabilities to developing better and better interpretability tools as well? I.e., what reason is there to expect an “interpretability gap” to develop (unless you believe interpretability is a fundamentally unsolvable problem, in which case no amount of AI power is going to help)?