My expectation is that the fourth alternative, or some variation thereof, is the dominant answer. This is less a reflection of the quality of the papers, and more a reflection of the limited bandwidth of scientists for reading them.
This problem has been discussed in the modern context because of the explosion in the number of publications and the administrative responsibilities of scientists (for example, teaching and grant writing). But it is also noticed that deep reading of papers is both time consuming and cognitively intensive, and taking the trouble to write up a correction still more so. I argue the bandwidth limit is fundamental, and that early 20th century scientists had to abide by it too.
Following on the argument that reading papers deeply enough to correct errors and publish those corrections is difficult, I posit that the ‘publish or perish’ mechanism is responsible for corrections being published at all. I expect that if the objective is to produce the best original work possible, it is more efficient to correct errors privately and then use the corrected version in your own work; it could even be argued that leaving the errors publicly uncorrected is advantageous for being first. I also expect that if the objective shifts to total number of publications, it becomes more efficient to publish corrections, because writing up a correction is less difficult than producing original work.
If my expectation is correct, then we should see very few corrections published leading up to World War II, and then an increasing number afterward as the professionalization of science progresses.
One good source for this kind of question would be histories of science and/or math. They do a pretty good job of disentangling what scientists thought and when, because they do the difficult work of going through notes, correspondence, and the published work. The downside is that they are usually written from the perspective of the subject (e.g. thermodynamics) rather than focusing on academia per se.