I don’t (and perhaps shouldn’t) have a guaranteed trigger—probably I will learn a lot more about what the trigger should be over the next couple years. But my current picture would be that the following are mostly true:
The AIS field is publishing 3-10x as many papers per year as the causal inference field does now.
We have ~3 highly aligned tenured professors at top-10 schools, and ~3 mostly-aligned tenured professors with ~10k citations, who want to serve as editors of the journal.
The number of great papers that can't get into other top AI journals is >20 per year. I'd estimate it's currently ~2.
The chance that some other group creates a similar (but worse) journal for safety in the subsequent 3 years is >20%.
I agree with Ryan’s comments above that this is somewhat bad timing to start a journal for publishing work like the two examples mentioned at the start of the post. I have an additional reason, not mentioned by Ryan, for feeling this way.
There is an inherent paradox in trying to confer academic credibility or prestige on much of the work that has appeared on LW/AF, work that was produced from an EA or x-risk driven perspective. Often, the authors chose the specific subject area of the work precisely because, at the time, they felt that the subject area was a) important for x-risk while also b) lacking the credibility or prestige in mainstream academia that would have been necessary for academia to produce sufficient work in the subject area.
If condition b) is not satisfied, or ceases to be satisfied, then the EA or x-risk driven researchers (and EA funders of research) will typically move elsewhere.
I can’t see any easy way to overcome this paradox of granting academic prestige, via an academic-style journal, to work that deliberately avoids prestigious areas. So I think that energy is better spent elsewhere.