You need a lot of hindsight bias to say that it was clear from the get-go which paradigms were going to win over the last century.
Sure. And I think Kuhn’s main point as summarized by Scott really does deal a huge blow to the naive view that you can just compare successful predictions to missed predictions, etc.
But to think that you cannot do better than chance at generating successful new hypotheses is obviously wrong. There would be way too many hypotheses to consider, and not enough scientists to test them. From merely observing science’s success, we can conclude that there has to be some kind of skill (Yudkowsky’s take on this is here and here, among other places) that good scientists employ to do better than chance at picking what to work on. And IMO it’s a strange failure of curiosity to not want to get to the bottom of this when studying Kuhn or the history of science.
Most science happens within scientific paradigms. A good scientist looks where progress could be made within his scientific paradigm and seeks to move science forward within it.
Paradigm changes are qualitatively different, and betting on an emerging new paradigm requires different decision-making.
What Eliezer says about Phlogiston is wrong. Phlogiston did pay its rent and allowed chemistry to advance a lot from the alchemy that preceded it:
If this ash is reheated with charcoal, the phlogiston is restored (according to Stahl) and with it the mercury. (In our view the charcoal removes the oxygen, restoring the mercury.) In a complex series of experiments Stahl turned sulphuric acid into sulphur and back again, explaining the changes once again through the removal and return of phlogiston. By extension Stahl, an excellent experimental chemist, was able to explain what we now know as the redox reactions and the acid-base reactions with his phlogiston theory, based on experiment and empirical observation. Stahl’s phlogiston theory was thus the first empirically based ‘scientific’ explanation of a large part of the foundations of chemistry.
But to think that you cannot do better than chance at generating successful new hypotheses is obviously wrong.
It would be an uncharitable reading of Kuhn to interpret him in that way. He does speak of the performance of scientific theories in terms of different epistemic values, and already in SSR he speaks of a scientist having an initial hunch that a given idea is promising.
From merely observing science’s success, we can conclude that there has to be some kind of skill (Yudkowsky’s take on this is here and here, among other places) that good scientists employ to do better than chance at picking what to work on.
There is actually a whole part of philosophy of science that deals with this topic; it goes under the name of the preliminary evaluation of scientific theories, their pursuit-worthiness, endorsement, etc.
A good scientist looks where progress could be made within his scientific paradigm
his or her* :)
For an excellent recent historical and philosophical study of the Chemical Revolution I recommend Hasok Chang’s book “Is Water H2O?”, in which he argues that phlogistic chemistry was still worthy of pursuit at the time it was abandoned.
There are certainly many scientists who have hunches that their attempts at revolutionizing science are promising. Most of them, however, fail.
Right, which is why it’s important to distinguish between a mere hunch and a “warranted hunch”, the latter being based on certain indicators of promise (e.g. the idea has the potential to explain novel phenomena, or to explain them better than the currently dominant theory; the inquiry is based on a feasible methodology; etc.). These indicators of promise are in no way a guarantee that the idea will work out, but they allow us to distinguish between a sensible novel idea and junk science.
What’s feasible and what isn’t is hard to say beforehand. Take molecular biology: the mainstream initially considered its goal unfeasible, and it took a while until there was actually technology that made it feasible to know the shape of proteins and other biomolecules.
There’s an interview with Sydney Brenner, one of the fathers of molecular biology, in which he says that the paradigm likely wouldn’t have gotten support in the current academic climate.
Like I’ve mentioned, that’s why there are indices of theory promise (see e.g. this paper), which don’t guarantee anything, but still make some hypotheses more plausible to assess as promising than, say, research done within pseudo-medicine. These indices shouldn’t be confused with how the scientific community actually reacts to novel theories, since it is no news that scientists sometimes fail to employ the adequate criteria and react dogmatically (for some examples, see this case study from the history of earth sciences or this one from the history of medicine). So the fact that the scientific community fails to react in a warranted way to novel ideas doesn’t imply that it couldn’t do a better job at this. This is precisely why some grants are geared towards high-risk high-reward schemes, so that projects which are clearly risky and may simply flop still get funding.
The research in molecular biology was indeed quite tricky, but again, this in no way means that assessing it as not worthy of pursuit would have been a justified response at the time. Hence, it’s important to distinguish between the descriptive and the normative dimensions when we speak of the assessment of scientific research.
As for the interview with Sydney Brenner, thanks for linking to it. I disagree, though, with his assessment of the peer-review system, because he’s not making an overall comparison between two systems, where we’d have to assess both the positive and the negative effects of peer review and then compare those with the positive and negative effects of possible alternative approaches. This means evaluating, e.g.: how many crap papers are kept at bay this way, which without the peer-review system would simply get published; how much a lack of prestige or connections with the right people disadvantages one in publishing in a journal, vs. a blind peer-review procedure, which mitigates this problem at least to some extent; how many women or minorities had problems with publication bias, vs. under a blind peer-review procedure; etc.
Chiropractic was long considered to be pseudo-medicine because it rests on the perceptive ability of its practitioners. Yet, according to Cochrane, we now know that its interventions have effects that are comparable to our mainstream treatments for back pain.
The useless paradigm of domestic science had a lot of esteem in the 20th century, while chiropractic had none. Given that it took this long to settle the simple question of whether chiropractic intervention works for back pain, I think it’s very hard to say, for most alternative-medicine approaches that have seen a lot less research, what effects they have and what could be demonstrated if you funded them as a serious paradigm.
In medicine, most journals endorse the CONSORT guidelines, yet their peer-review processes don’t ensure that the clear quality standards of the CONSORT guidelines are followed in the majority of published papers.
Blinding peer review doesn’t help at all with encouraging paradigm-violating papers. Instead of succeeding at enforcing the quality standards they endorse on the papers they publish, mainstream journals do succeed at not publishing any papers that violate the mainstream paradigm.
Again: you are conflating the descriptive and the normative. You keep giving examples of how science went wrong. And that may well have been the case. What I am saying is that there are tools to mitigate these problems. In order to challenge my points, you’d have to show that chiropractic did not appear even worthy of pursuit *in view of the criteria I mentioned above* and yet should have been pursued (I am not familiar with this branch of science, btw, so I don’t have enough knowledge to say anything concerning its current status). But even if you could do this, this would be an extremely odd example, so you’d have to come up with a couple of them to make a normatively interesting point. Of course, I’d be happy to hear about that.
The confusion between the descriptive (how things are) and the normative (how they should be) also concerns your comments on peer review, where you are bringing up issues that are problematic in current medical practice, but I don’t see why we should consider them inherent to the peer-review procedure as such. Your points concern the presence of biases in science which make paradigmatic changes difficult, and that may indeed be a problem, but I don’t see how abandoning the peer-review procedure is going to solve it.
I agree that some notion of past fruitfulness and further promise is important. It’s, however, hard to judge fruitfulness from the outside, as a lot of the progress within a new paradigm might not be intelligible from within the old one.
If you had asked chiropractors in the 20th century whether they made theoretical progress, I would guess that you would have gotten an answer about how their theory progressed. If, however, you had asked any mainstream medical academic, you would likely have gotten the answer that they didn’t produce anything useful.
Standard peer review is a heavily standardized process that makes specific assumptions about the shape of knowledge.
The ontology of spatial relations is something that matters for science, but I can have that discussion on GitHub. GitHub does provide a way of “peer review”, but it’s very different from the way traditional scientific papers work.
When I look at that discussion, it’s also funny that the person I’m speaking with and I have both studied bioinformatics.
Bioinformatics as a field managed to share a lot of knowledge openly through ways besides scientific papers. It wouldn’t surprise me if the DSM one day gets replaced by a well-developed ontology created within a more bioinformatics-like paradigm.
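To make that concrete, here is a minimal sketch of what a bioinformatics-style ontology looks like, in the spirit of the Gene Ontology’s typed is_a / part_of edges; all IDs and labels below are invented for illustration, not taken from any real ontology:

```python
# A minimal sketch of a bioinformatics-style ontology: terms with
# stable IDs connected by typed, machine-readable edges (is_a, part_of),
# in the spirit of the Gene Ontology. All IDs and labels here are
# invented for illustration; they are not from any real ontology.
from collections import defaultdict

# (child_id, relation, parent_id) triples
EDGES = [
    ("EX:0003", "is_a",    "EX:0002"),  # "major depressive disorder" is_a "depressive disorder"
    ("EX:0002", "is_a",    "EX:0001"),  # "depressive disorder" is_a "mood disorder"
    ("EX:0004", "part_of", "EX:0003"),  # "depressive episode" part_of "major depressive disorder"
]

parents = defaultdict(list)
for child, rel, parent in EDGES:
    parents[child].append((rel, parent))

def ancestors(term_id):
    """Transitively collect every (relation, ancestor_id) pair of a term."""
    found, stack = [], [term_id]
    while stack:
        for rel, parent in parents[stack.pop()]:
            found.append((rel, parent))
            stack.append(parent)
    return found

print(ancestors("EX:0004"))
# -> [('part_of', 'EX:0003'), ('is_a', 'EX:0002'), ('is_a', 'EX:0001')]
```

The design point is that such a structure is machine-queryable and diffable under version control, which is exactly the kind of open, GitHub-style knowledge sharing described above.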
The database that comes out of the money from Zuckerberg will also likely be more scientifically valuable than any classical academic papers written about it.
The problem of disagreements that arise due to different paradigms or ‘schools of thought’, which you mention, is an important one, as it concerns the possibility of so-called rational disagreements in science. This paper (published here) makes an attempt at providing a normative framework for such situations, suggesting that if scientists have at least some indication that the claims of their opponents are the result of rational deliberation, they should epistemically tolerate their ideas, which means: they should treat them as potentially rational, their theory as potentially promising, and as a potential challenge to their own stance.
Of course, the main challenge for epistemic toleration is putting ourselves in the other’s shoes :) Like in the example you mention: if others are working on an approach that is completely different from mine, it won’t be easy for me to agree with everything they say, but that doesn’t mean I should equate them with junk scientists.
As for discussions via GitHub, that’s interesting, and we could probably discuss it in a separate thread on the topic of different forms of scientific interaction. I think that peer review can also be a useful form of dialogue, especially since a paper may end up going through different rounds of peer review (sometimes also across different journals, in case it gets rejected in the beginning). However, the preprint archives that we have nowadays are also valuable, since even if a paper keeps being rejected (let’s say unfairly, e.g. due to a dogmatic environment in the given discipline), others may still have access to it, cite it, and it may still have an impact.