As someone who might be seen as engaging in a waterfall approach, these are good points. One additional consideration: not everything you learn in completing a math textbook is knowledge; you also pick up skills, habits, and grooves in your mind. In the context of AI alignment research, learning to write proofs is particularly relevant to developing a security mindset: one comes to appreciate the rhythm by which one finds all the relevant edge cases. This kind of approach is required if you want to say things that hold in all contexts; in my project-based research work, I have found that this groove contributes substantially to reasoning soundly about what an alignment proposal will do, and how it breaks down.