Did you see LeCun’s proposal about how to improve academic review here? It strikes me as very good and I’d love it if the AI safety/x-risk community had a system like this.
I’m suspicious of creating a separate journal, rather than concentrating efforts around existing institutions: LW/AF. I think it would be better to fund LW exactly for this purpose and add monetary incentives for providing good reviews of research writing on LW/AF (and, of course, the research writing itself could be incentivised in this way, too).
Then, turn AF into exactly the kind of “journal” that you proposed, as I described here.
Yeah, LeCun’s proposal seems interesting. I was actually involved in an attempt to modify OpenReview to push along those lines a couple years ago. But it became very much a ‘perfect is the enemy of the good’ situation where the technical complexity grew too fast relative to the amount of engineering effort devoted to it.
What makes you suspicious of a separate journal? Diluting attention? Hard to make new things? Or something else? I’m sympathetic to the attention-dilution worry, but I’d bet that making a new thing wouldn’t be that hard.
Attention dilution, exactly. Ultimately, I want all relevant work to be syndicated on LW/AF (via linkposts and review posts), because I think this will be more effective, rather than the other way around, where AI safety researchers have to subscribe to Arxiv Sanity, the Google AI blog, all relevant standalone blogs such as Bengio’s and Scott Aaronson’s, etc., all by themselves and separately.
I even think it would be very valuable if LW hired part-time staff dedicated to doing this.
Also, alignment newsletters, which would further pre-process information, don’t survive: Shah tried to revive his newsletter in the middle of last year, but it didn’t last long. Part-time staff could also curate such an “AF newsletter”; I don’t think it takes Shah’s competence to do this well.
FWIW I think doing something like the newsletter well actually does take very rare skills. Summarizing well is really hard. Having relevant/interesting opinions about the papers is even harder.
I strongly agree with most of this.