I mean, I don’t think I’m “redefining” inner alignment, given that I don’t think I’ve ever really changed my definition, and I was the one who originally came up with the term (inner alignment was due to me; mesa-optimization was due to Chris van Merwijk). I also certainly agree that there are “more than just inner alignment problems going on in the lack of worst-case guarantees for deep learning/evolutionary search/etc.” In fact, that’s exactly the point I’m making: while there are other issues, inner alignment is what I’m most concerned about. That being said, I also think I was just misunderstanding the setup in the paper; see Rohin’s comment on this chain.