Some approaches to solving alignment involve teaching ML systems about alignment and getting research assistance from them. Training ML systems requires data, but we may not have enough alignment research to sufficiently fine-tune our models, and we might miss out on many concepts which have not been written up. Furthermore, training on final outputs (Alignment Forum posts, papers, etc.) may fail to capture the thought processes that go into hashing out an idea or poking holes in proposals, which are exactly the skills that would be most useful for a research assistant to have.
It might be significantly beneficial to capture many of the conversations between researchers and use them to expand our dataset of alignment content to train models on. Additionally, some researchers may be fine with having some of their conversations available to the public, in case people want to do a deep dive into their models and research approaches.
The two parts of the system I'm currently imagining to address this are:
- Clear instructions for setting up a tool which captures audio from calls automatically (either a general tool or platform-specific advice) and makes it as easy as possible to send the right calls to the dataset platform (a rough sketch of what this could look like follows this list).
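To make the capture-and-submit step concrete, here is a minimal sketch of what such a tool might look like, assuming a Linux machine with ffmpeg and a hypothetical dataset platform exposing an HTTP upload endpoint. The endpoint URL, the `speakers` field, and the consent flag are all illustrative assumptions, not an existing service or API.

```python
"""Sketch of a call-capture helper. DATASET_UPLOAD_URL and the upload
schema are hypothetical; swap in whatever the real dataset platform
ends up exposing."""

import subprocess
from pathlib import Path

import requests

# Hypothetical endpoint; no such service exists yet.
DATASET_UPLOAD_URL = "https://example.org/api/upload"


def record_call(output_path: Path, duration_seconds: int) -> None:
    """Capture system audio with ffmpeg.

    Device names vary by platform: '-f pulse -i default' works for
    PulseAudio on Linux; macOS would use '-f avfoundation' instead.
    """
    subprocess.run(
        [
            "ffmpeg",
            "-f", "pulse",      # audio backend (Linux/PulseAudio)
            "-i", "default",    # default input device
            "-t", str(duration_seconds),
            str(output_path),
        ],
        check=True,
    )


def submit_recording(path: Path, speakers: list[str], consent: bool) -> None:
    """Upload a recording, but only if every participant has opted in."""
    if not consent:
        return  # never send a call to the dataset without explicit permission
    with path.open("rb") as f:
        requests.post(
            DATASET_UPLOAD_URL,
            files={"audio": f},
            data={"speakers": ",".join(speakers)},
            timeout=30,
        )


if __name__ == "__main__":
    recording = Path("call.wav")
    record_call(recording, duration_seconds=30 * 60)  # capture a 30-minute call
    submit_recording(recording, speakers=["alice", "bob"], consent=True)
```

The main design point the sketch is meant to illustrate is the consent gate: recordings default to staying local, and nothing reaches the shared dataset unless the researchers explicitly opt in.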
Yes, this is a robustly good intervention on the critical path. I've had it on the Alignment Ecosystem Development ideas list for most of a year now.