Thanks for listing these examples! They indeed seem to provide counterexamples to the trend of increasing bureaucratization and credentialism, though I must note that they are all several decades old.
However, more importantly, my main point is that bureaucratization tends to have the same negative effect on science as on any other field of human endeavor. The number one tendency in every bureaucracy is that things should be done in such a way that everyone can cover his ass and avoid any personal responsibility no matter what happens. In contrast, productive work always requires personal responsibility: someone must accept the blame if things go wrong; otherwise there is no incentive to do things right, unless people are driven by sheer personal enthusiasm.
When Max Planck decided to promote Einstein’s work, he was putting his own reputation on the line: his career and prestige would have suffered greatly if it had turned out that he was swindled by a crackpot. But the modern peer-review system absolves everyone of responsibility—everyone involved has his little piece of bureaucratic duty, and no matter what happens, there is no personal responsibility at all.
Some supposedly “scientific” features of the present system are in fact the height of ass-covering perversity, like, for example, “double-blind” peer review—how on Earth can you be an expert capable of reviewing a novel research paper, yet unable to figure out who the authors are from the content of the paper? Even with simple “blind” peer review, it’s a complete farce, considering how small and tightly knit any bleeding-edge research community necessarily is: either your paper will be given for review to clueless, incompetent outsiders, or you can guess pretty reliably who the “anonymous” insider reviewers are. (And in any case, you know who is on the editorial board, and what nepotistic considerations are driving them!)
So, at the end of the day, I don’t see any advantage that the present heavily bureaucratized system might have over the old system based on honor and reputation. In my view, the present system functions well only insofar as, in many fields, the top people are still driven by enthusiasm and a sense of honor and are doing their best to keep their field sound. But this is despite the bureaucratizing tendencies, not thanks to them. In fields where the personal enthusiasm of prominent insiders hasn’t been strong enough to keep bureaucratically backed pseudoscience at bay, the mainstream work has long since degenerated into sheer nonsense.
Thanks for listing these examples! They indeed seem to provide counterexamples to the trend of increasing bureaucratization and credentialism, though I must note that they are all several decades old.
True. I thought about acknowledging this in my post, but decided it didn’t make much difference: (1) their age is partly explained by availability bias, since younger scientists are less well-known simply because they haven’t been around as long; and (2) even if I move the bureaucratic cutoff date forward from the early 20th century to the mid-1980s, I find your point of view less credible, since science was already much more oriented around credentials & peer review by then than it was c. 1905.
Still, my list of anecdotal examples is no stronger evidence than taw’s. Really, one would have to do a systematic survey of author affiliations in published papers or something to be sure of an actual trend, but I can’t be bothered.
However, more importantly, my main point is that bureaucratization tends to have the same negative effect on science as on any other field of human endeavor. The number one tendency in every bureaucracy is that things should be done in such a way that everyone can cover his ass and avoid any personal responsibility no matter what happens.
That’s endemic in bureaucracies (or at least all those I’ve had to deal with!), but I’d expect incentives for arse-covering in any system where people feel a need to protect their image, and that includes honour-and-rep social networks too.
In contrast, productive work always requires personal responsibility: someone must accept the blame if things go wrong; otherwise there is no incentive to do things right, unless people are driven by sheer personal enthusiasm.
This seems like a fairly impoverished view of motivation to me. What about professionalism: a simple wish to do one’s job well, whether one has particular enthusiasm for it or not? There must be people out there who dragged themselves through the PhD process and got worthwhile results through sheer force of will rather than special enthusiasm for their project; I’d be amazed if there weren’t professional scientists fitting that description too. There are probably other motivations as well that I can’t think of off the cuff.
But the modern peer-review system absolves everyone of responsibility—everyone involved has his little piece of bureaucratic duty, and no matter what happens, there is no personal responsibility at all.
Again, though, I don’t see why the lack of personal responsibility (which I think is a slight exaggeration) is unique to bureaucratic science. There’s no intrinsic reason why journals where, say, the editor glances over a paper themselves and informally shows it to a few friends before deciding whether or not to publish (instead of soliciting formal peer reviews from experts) would be better in this respect.
Some supposedly “scientific” features of the present system are in fact the height of ass-covering perversity, like, for example, “double-blind” peer review—how on Earth can you be an expert capable of reviewing a novel research paper, yet unable to figure out who the authors are from the content of the paper?
It’s certainly possible if the authors aren’t already established workers in the expert’s field. If I submitted an econometrics/ecology/history/statistics paper to an econometrics/ecology/history/statistics journal with real double-blind review, I’d bet a lot of money that the reviewer(s) couldn’t guess who I was! But yes, double-blind (and single-blind) review’s often trivial to get around.
So, at the end of the day, I don’t see any advantage that the present heavily bureaucratized system might have over the old system based on honor and reputation. In my view, the present system functions well only insofar as, in many fields, the top people are still driven by enthusiasm and a sense of honor and are doing their best to keep their field sound.
But surely this is inevitable in any system of science, as long as science is run by people? Regardless of the system used for accepting diamonds and rejecting turds, as long as people are doing the judging, the quality of what’s published hinges on the competence & motivations of whoever’s running the show, bureaucracy or not.
But this is despite the bureaucratizing tendencies, not thanks to them.
I suspect those tendencies make little difference either way, overall.
In fields where the personal enthusiasm of prominent insiders hasn’t been strong enough to keep bureaucratically backed pseudoscience at bay, the mainstream work has long since degenerated into sheer nonsense.
What I think’s happening here is that you see poor science that’s backed by parts of the establishment, and you’re inferring that because the establishment is bureaucratic, bureaucracy’s to blame for the poor science. But I doubt the chosen social structure is the root cause. I’d expect similar sections of rot in an Einstein-era honour-based system.
I don’t see why the lack of personal responsibility (which I think is a slight exaggeration) is unique to bureaucratic science. There’s no intrinsic reason why journals where, say, the editor glances over a paper themselves and informally shows it to a few friends before deciding whether or not to publish (instead of soliciting formal peer reviews from experts) would be better in this respect.
The contrast I’m pointing out is between a system where each decision and each claim puts the responsible person’s reputation on the line, and a system where decisions are made according to established bureaucratic rules that allow everyone involved to escape any personal responsibility no matter what happens (except if crude malfeasance like data forgery or plagiarism is proven).
Thus, for example, if a junk paper gets published in a journal, this should tarnish the reputation of both the authors and the editor. Yet, in the present bureaucratic system, the editor can comfortably hide behind the fact that the regular bureaucratic procedure was followed, and even the authors can claim that you can’t really blame them if their false claims sounded convincing enough to the reviewers (who are in turn anonymous and thus completely absolved of any responsibility). If the existing heavily bureaucratized modes of publishing make it difficult to publish criticism (as is often the case), this situation, coupled with the usual human tendencies, may easily lead to utter corruption behind an impeccable bureaucratic facade that makes it impossible to put the blame on anyone.
It’s certainly possible [that double-blind review works] if the authors aren’t already established workers in the expert’s field. If I submitted an econometrics/ecology/history/statistics paper to an econometrics/ecology/history/statistics journal with real double-blind review, I’d bet a lot of money that the reviewer(s) couldn’t guess who I was! But yes, double-blind (and single-blind) review’s often trivial to get around.
The key problem, however, is that blind review is ultimately another way of eliminating personal responsibility. For the reviewer, there is no incentive whatsoever to do a good job: the work is unpaid, uncredited, and without any repercussions no matter how badly it’s done. On the other hand, considering how tightly-knit specific research communities typically are, the supposed blindness is a farce more often than not.
What I think’s happening here is that you see poor science that’s backed by parts of the establishment, and you’re inferring that because the establishment is bureaucratic, bureaucracy’s to blame for the poor science. But I doubt the chosen social structure is the root cause. I’d expect similar sections of rot in an Einstein-era honour-based system.
Often it’s not about poor science being backed by the establishment for ideological reasons (though this also happens), but merely about the fact that a field can be run by a clique that produces junk science under a veneer of bureaucratic perfection, conscientiously going through all the bureaucratic motions despite the actual substance being worthless (or worse).
But, yes, all sorts of pseudoscience also flourished under the Einstein-era system of honor and reputation. Psychoanalysis is the prime example. The question is whether the subsequent bureaucratization has alleviated or exacerbated these problems. My opinion is that, at best, it hasn’t put any real barriers against pseudoscience, and arguably, it has made things worse by allowing pseudoscience to be given a veneer of respectability (and sources of funding) much more easily.
What do you think of peer review vs. crowd-sourcing?
I’m not sure what exactly you have in mind by “crowd-sourcing” in this context. Do you mean publishing online in a way that’s open for public comments and debate, whose content will come up whenever someone looks up the paper? If that’s what you mean, I do have a favorable view of this approach.
Yes, that’s what I meant by crowd-sourcing.
In fact, I’d say that the principal reason why such a system is not implemented in many areas is the sheer desire for ass-covering. Just imagine all the emperors in various degrees of nakedness who are presently hiding behind thick, impressive-sounding publication records whose validity would, however, be brought seriously into question by this practice.
The way things are now, a paper can be retracted or marked as invalid only if outright plagiarism or data faking is proven. Otherwise, even junk papers that have been debunked so convincingly that their authors were forced to admit it publicly still stand proudly in publication archives, both on paper and online, ready to fool any unsuspecting visitor who stumbles onto them.