Eliezer, I’ll try to address this new essay and your response to my other comment in one comment; I hope it won’t get too muddled.
What scientists learn from their mentors is a set of domain-specific, non-general practical skills, and they’re the right skills for the reasons I gave in my comment on your other essay: the skills you need to produce new science happen to be the skills settled science supplies. I think you’d agree that scientists, in their period of apprenticeship (or even formal education), learn a set of domain-specific skills. Whether they learn a set of additional general skills, which may or may not have the hidden structure of Bayesianism, is where we disagree. If I can show that such additional general skills are unnecessary, then I believe that would undermine the applicability of Bayesianism to science. My argument is simply that we can account for the fact that the institution of science tends to be the one to produce new science merely from contingent facts about it (science has all the scientists); we don’t need to postulate general rules or norms to explain its success.
It’s my further assertion that getting the right hypothesis is a product of institutional inertia. I’m not sure this is as contentious as you imply. It’s true that scientists don’t learn any of the general skills of reasoning you list, but they do go through a period of tutoring in which they are given explicit advice on what research to pursue and then serve as part of a research team under a senior researcher. Only after many years would they be allowed to freely set their own research agenda, and by this time, if their hypotheses hadn’t become highly constrained by their period of apprenticeship, they would be considered very bad scientists indeed. I don’t think someone like Roger Penrose makes a good counterexample. Penrose published a work of popular science outside his field of expertise and was not taken seriously by professional scientists in the relevant fields. I believe his speculations also harmed his standing in physics.
All of the constraints on hypothesis choice are, again, domain-specific, non-general practical skills, and that, I contend, is all we ever need. It’s science itself, the actual dirty physical details of experimentation and theoretical manipulation, that suggests its own extension, and practicing scientists are steeped in it and can pass their (practical, domain-specific) insights on to aspiring scientists. The whole process of science is a little like pulling a loose thread of material and having the whole thing unravel. A bunch of people working on mechanical problems in the 16th and 17th centuries stumbled on that thread, and their intellectual descendants have been the ones to keep tugging at it, because each bit of thread lets you pull out more thread, and so on. Theologians don’t have the thread; we do, and that’s the difference between us and them. We don’t need to also be more rational or better Bayesians. I believe that scientists and theologians both use their full range of psychological faculties all the time, unconstrained, in solving problems, and the only difference between them is the kind of problems they’re trying to solve. The kinds of problems they tackle are a matter of institutional heritage. This doesn’t mean I think theological problems are worthwhile; I just don’t think there are normative differences in reasoning or in the application of cognitive faculties between the two fields, nor do I believe there need be any to explain the success of one over the other.