This idea has been discussed before. It’s an important one, though, so I don’t think it’s a bad thing for us to bring it up again. My perspective, now as before, is that this would be fairly bad at the moment, but might be good in a couple of years’ time.
My background understanding is that the purpose of a conference or journal in this case (and in general) is primarily to certify the quality of some work (and, to a lesser extent, the field of inquiry). This in turn helps with growing the AIS field, and the careers of AIS researchers.
This is only effective if the conference or journal is sufficiently prestigious. Presently, publishing AI safety papers in NeurIPS, AAAI, JMLR, or JAIR serves to certify the validity of the work and boosts the field of AI safety, whereas publishing in (for example) Futures or AGI doesn’t. If you create a new publication venue, by default its prestige will be comparable to, or less than, that of Futures or AGI, and so it won’t really serve the role of a journal.
Currently, the flow of AIS papers into the likes of NeurIPS and AAAI (and probably soon JMLR and JAIR) is rapidly improving. New keywords have been created at several conferences, along the lines of “AI safety and trustworthiness” (I forget the exact wording), so that nowadays you can expect, on average, to receive reviewers who average out to neutral, or even vaguely sympathetic to AIS research. Ten or so papers were published in such venues in the last year, and all of those authors will become reviewers under that keyword when the conference comes around next year. Yes, things like “Logical Inductors” or “AI safety via debate” are very hard to publish. There’s some pressure to write research that’s more “normie”. All of that sucks, but it’s an acceptable cost of being in a high-prestige field. And overall, things are getting easier, fairly quickly.
If you create a journal with too little prestige, you can generate blowback. For example, there was some criticism on Twitter of Pearl’s “Journal of Causal Inference”, even though his field is somewhat more advanced than ours.
In 1.5–3 years’ time, I think the risk-benefit calculus will probably change. The growth of AIS work (which has been fast) may outpace the virtuous cycle currently happening at AI conferences and journals, such that a lot of great papers get rejected. There could be enough tenure-track professors at top schools to make the journal decently high-status (more so than Futures or AGI). We might even be nearing the point where some unilateral actor will go and make a worse journal if we don’t make one. I’d say that when a couple of those things are true, that’s when we should pull the trigger and make this kind of conference/journal.
Thanks for the detailed feedback! David already linked the Facebook conversation, but it’s pretty useful that you’ve summarized it in a comment like this.
I think that your position makes sense, and you do take into account most of my issues and criticisms of the current model. Do you think you could make really specific statements about what needs to change for a journal to be worth it, maybe expanding on your last paragraph?
Also, to provide a first step without the issues that you pointed out, I proposed a review mechanism here on the AF, in this comment.
I don’t (and perhaps shouldn’t) have a guaranteed trigger; I’ll probably learn a lot more about what the trigger should be over the next couple of years. But my current picture is that most of the following would be true:
The AIS field is publishing 3–10x as many papers per year as the causal inference field does now.
We have ~3 highly aligned tenured professors at top-10 schools, and ~3 mostly-aligned tenured professors with ~10k citations, who want to be editors of the journal.
The number of great papers per year that can’t get into other top AI venues is >20; I figure it’s currently ~2.
The chance that some other group creates a similar (but worse) safety journal within the subsequent 3 years is >20%.
I agree with Ryan’s comments above that this is a somewhat bad time to start a journal for publishing work like the two examples mentioned at the start of the post. I have an additional reason, not mentioned by Ryan, for feeling this way.
There is an inherent paradox in trying to confer academic credibility or prestige on much of the work that has appeared on LW/AF, work produced from an EA or x-risk driven perspective. Often, the authors chose the specific subject area exactly because, at the time, they felt that it was a) important for x-risk, while also b) lacking the credibility or prestige in mainstream academia that would have been necessary for academia to produce sufficient work in the area.
If condition b) is not satisfied, or ceases to be satisfied, then the EA or x-risk driven researchers (and EA givers of research funds) will typically move elsewhere.
I can’t see any easy way to overcome this paradox of granting academic prestige to prestige-avoiding work via an academic-style journal, so I think that energy is better spent elsewhere.