Mentioned before, but I wonder whether progressive/iterative summarization would substantially increase peer review among AI researchers. I.e., marginal effort spent summarizing the ideas a post is responding to, whether adversarial or generative, would give the original authors and outsiders more surfaces along which to offer ideas. I think this would also lead to more settling on shared terminology, which, as far as I can tell from the history of science, accounts for a significant fraction of progress in a field. And if it increased engagement, it would be directly incentivized as soon as people knew about or experienced that.