When I was involved in crypto, there were forums in which both published academics and unpublished hobbyists participated and took each other seriously. If this isn’t true in a field, it makes me doubt that intellectual progress is still the highest priority in that field. If I were a professional philosopher working in anthropic reasoning, I don’t see how I could justify not taking a paper about anthropic reasoning seriously unless it had passed peer review by anonymous reviewers whose ideas and interests may be very different from my own. How many such papers could I possibly come across per year, such that I’d be justified in outsourcing my judgment about them to unknown peers?
(I think peer review does have a legitimate purpose in measuring people’s research productivity. University admins have to count something to determine whom to hire and promote, and the number of papers that pass peer review is perhaps one of the best measures we have. It can also help outsiders know who can be trusted as an expert in a field, which is what I was thinking of by “prestige”. But there’s no reason for people who are already experts in a field to rely on it instead of their own judgment.)
If I were a professional philosopher working in anthropic reasoning, I don’t see how I could justify not taking a paper about anthropic reasoning seriously
Depends on how many cranks there are in anthropic reasoning (lots) and how many semi-serious people post ideas that have already been addressed or refuted in the literature (in philosophy in general, this is huge; in anthropic reasoning, I’m not sure).
Lots of places attract cranks and semi-serious people, including the crypto forums I mentioned, LW, and everything-list, a mailing list I created with anthropic reasoning as one of its main topics, and they’re not that hard to deal with. It doesn’t take a lot of effort to detect cranks and previously addressed ideas: everyone can ignore the cranks, and the more experienced hobbyists can educate the less experienced ones.
EDIT: For anyone reading this, the discussion continues here.
If a dedicated journal for anthropic reasoning existed (or, say, for AI risks and other x-risks), would it improve the quality of peer review and research? Or would it be no more useful than LessWrong?
If I were a professional philosopher working in anthropic reasoning, I don’t see how I could justify not taking a paper about anthropic reasoning seriously
But there are few if any philosophers working in “anthropic reasoning”; there are many working in “anthropic probability”, to which my paper is an interesting irrelevance. It’s essentially asking and answering the wrong question, while claiming that their own question is meaningless (and doing so without citing some of the probability/decision theory work that might back up the “anthropic probabilities don’t exist/matter” claim from first principles).
I expected the paper would get published, but I always knew it was a bit of a challenge, because it didn’t fit inside the right silos. And the main problem with academia here is that people tend to stay in their silos.
But there are few if any philosophers working in “anthropic reasoning”; there are many working in “anthropic probability”, to which my paper is an interesting irrelevance. It’s essentially asking and answering the wrong question, while claiming that their own question is meaningless
Seems like a good explanation of what happened to this paper specifically.
(and doing so without citing some of the probability/decision theory work that might back up the “anthropic probabilities don’t exist/matter” claim from first principles)
I guess that would be the thing to try next, if one were intent on pushing this stuff back into academia.
And the main problem with academia here is that people tend to stay in their silos.
By doing that, they can better learn what the fashionable topics are, what referees want to see in a paper, etc., which helps them maximize their chances of getting papers published. This seems to be another downside of the current peer review system, as well as of the larger publish-or-perish academic culture.
When I was involved in crypto, there were forums in which both published academics and unpublished hobbyists participated and took each other seriously.
This is news to me. Encouraging news.