Nothing particularly riveting… Submitted it to a large number of philosophy journals in sequence, got either rejections or "please modify and resubmit", did rewrites, and these were all eventually rejected. A few times it looked like it might get accepted, but then it was borderline rejected. Basically, they felt it was either not important enough, or they were committed to a probability view of anthropics and didn't like the decision-based approach.
Or that I was explaining it badly, but the reviewers were not consistent as to what was bad about the explanations.
Thanks for posting this! I wonder how much negative utility academia causes just in terms of this kind of frustrating experience, and how many kids erroneously start down an academic path because they never hear people tell stories like this.
Here’s my own horror story with academic publishing. I was an intern at an industry research lab, and came up with a relatively simple improvement to a widely used cryptographic primitive. I spent a month or two writing it up (along with relevant security arguments) as well as I could using academic language and conventions, etc., with the help of a mentor who worked there and who used to be a professor. Submitted to a top crypto conference and weeks later got back a rejection with comments indicating that all of the reviewers had completely failed to understand the main idea. The comments were so short that I had no way to tell how to improve the paper, and I just got the impression that the reviewers weren’t interested in the idea and made little effort to try to understand it. My mentor acted totally unsurprised and just said something like, “let’s talk about where to submit it next.” That’s the end of the story, because I decided that if that’s how academia works, I wanted to have nothing to do with it when there’s, from my perspective, an obviously better way to do things, i.e., writing up the idea informally, posting it to a mailing list and getting immediate useful feedback/discussions from people who actually understand and are interested in the idea.
writing up the idea informally, posting it to a mailing list and getting immediate useful feedback/discussions from people who actually understand and are interested in the idea.
Note that I was doing that as well. And many academics similarly do both routes.
What’s your and their motivation to also do the academic publishing route (or motivation to go into or remain in academia, which forces them to also do the academic publishing route)? I guess I can understand people at FHI doing it for the prestige associated with being in academia, which they can convert into policy influence, but why do, say, academic decision theorists do it? Do they want the prestige as a terminal goal? Is it the easiest way for them to make a living while also doing research? Did they go down the academic path not knowing it would be like this, and was it too late when they found out?
Assuming the above exhausts the main reasons, it seems like a good idea for someone who doesn’t care much about the prestige and can make money more easily elsewhere to skip academic publishing and use the time instead to do more research, make more money, or just enjoy more leisure?
If you’re a published academic in field X, you’re part of the community of published academics in field X, which, in many but not all cases, is the entirety of people doing serious work in field X. “Prestige” mainly translates to “people who know stuff in this area take me seriously”.
When I was involved in crypto there were forums that both published academics and unpublished hobbyists participated in, and took each other seriously. If this isn’t true in a field, it makes me doubt that intellectual progress is still the highest priority in that field. If I were a professional philosopher working in anthropic reasoning, I don’t see how I can justify not taking a paper about anthropic reasoning seriously unless it passed peer review by anonymous reviewers whose ideas and interests may be very different from my own. How many of those papers can I possibly come across per year, that I’d justifiably need to outsource my judgment about them to unknown peers?
(I think peer review does have a legitimate purpose in measuring people’s research productivity. University admins have to count something to determine who to hire and promote, and the number of papers that pass peer review is perhaps one of the best measures we have. And it can also help outsiders know who can be trusted as experts in a field, which is what I was thinking of by “prestige”. But there’s no reason for people who are already experts in a field to rely on it instead of their own judgments.)
If I were a professional philosopher working in anthropic reasoning, I don’t see how I can justify not taking a paper about anthropic reasoning seriously
Depends on how many cranks there are in anthropic reasoning (lots) and how many semi-serious people post ideas that have already been addressed or refuted in existing papers (in philosophy in general, this is huge; in anthropic reasoning, I’m not sure).
Lots of places attract cranks and semi-serious people, including the crypto forums I mentioned, LW, and everything-list, a mailing list I created with anthropic reasoning as one of its main topics, and they’re not that hard to deal with. Basically it doesn’t take a lot of effort to detect cranks and previously addressed ideas; everyone can ignore the cranks, and the more experienced hobbyists can educate the less experienced ones.
EDIT: For anyone reading this, the discussion continues here.
If a dedicated journal for anthropic reasoning existed (or, say, for AI risk and other x-risks), would it improve the quality of peer review and research? Or would it be no more useful than LessWrong?
If I were a professional philosopher working in anthropic reasoning, I don’t see how I can justify not taking a paper about anthropic reasoning seriously
But there are no/few philosophers working in “anthropic reasoning”; there are many working in “anthropic probability”, to which my paper is an interesting irrelevance. It’s essentially asking and answering the wrong question, while claiming that their own question is meaningless (and doing so without quoting some of the probability/decision theory material that might back up the “anthropic probabilities don’t exist/matter” claim from first principles).
I expected the paper would get published, but I always knew it was a bit of a challenge, because it didn’t fit inside the right silos. And the main problem with academia here is that people tend to stay in their silos.
But there are no/few philosophers working in “anthropic reasoning”; there are many working in “anthropic probability”, to which my paper is an interesting irrelevance. It’s essentially asking and answering the wrong question, while claiming that their own question is meaningless
Seems like a good explanation of what happened to this paper specifically.
(and doing so without quoting some of the probability/decision theory stuff which might back up the “anthropic probabilities don’t exist/matter” claim from first principles)
I guess that would be the thing to try next, if one was intent on pushing this stuff back into academia.
And the main problem with academia here is that people tend to stay in their silos.
By doing that they can better know what the fashionable topics are, what referees want to see in a paper, etc., which helps them maximize the chances of getting papers published. This seems to be another downside of the current peer review system, as well as of the larger publish-or-perish academic culture.
Here’s the list of journals submitted to, btw: Philosophical Quarterly, Philosophical Review, Mind, Synthese, Erkenntnis, Journal of Philosophy, Harvard Review of Philosophy.
Curious whether you’re at all updating on MIRI’s poor publishing record as evidence of a problem, based on Stuart’s and Wei’s stories below. (It seems like trying to get through journal review might be a huge cost and do little to advance knowledge.) Or do you think this was an outlier, or that the class of things MIRI should be publishing is less subject to the kinds of problems mentioned?
I have a lot of respect for Stuart and Wei, so this discussion is very interesting to me. That said, my own experiences with academia were a bit more pleasant, and I know many “normal” folks who regularly get their work published in journals. It certainly takes a ton of “academic market research”, which is hard. But in the end I feel that it makes the work stronger and more appealing.
My hypothesis based on this discussion is that it could be a lot harder to publish stuff outside the current academic mainstream. IIRC Hinton had trouble getting foundational DL papers accepted in the beginning. MIRI-type stuff (and possibly Wei’s crypto, though I’m not sure about the details) could have been far enough from the mainstream to make it a lot harder.
Thanks for the reply! Can you tell more about the failure to publish ADT? I know that from arxiv, but don’t know the details.
This is news to me. Encouraging news.