Then why aren’t we doing better already?
The institutional advantages of the current scientific community are so apparent that it seems more feasible to reform it than to supplant it. It’s worth thinking about how we could achieve either.
The Hansonian answer would be “science is not about knowledge”, I guess. (I don’t think it’s that simple, anyway.)
I don’t understand your comment, could you explain? Science seems to generate more knowledge than LW-style rationality. Maybe you meant to say “LW-style rationality is not about knowledge”?
Science has a large scale academic infrastructure to draw on, wherein people can propose research that they want to get done, and those who argue sufficiently persuasively that their research is solid and productive receive money and resources to conduct it.
You could make a system that produces more knowledge than modern science just by diverting a substantial portion of the national budget to fund it, so that only the people proposing experiments too poorly designed to be useful would fail to get funding.
Besides which, improved rationality can’t simply replace entire bodies of domain-specific knowledge.
There are plenty of ways, though, in which mainstream science is inefficient at producing knowledge, such as improper use of statistics, publication bias, and biased interpretation of results. There are ways to do better, and most scientists (at least those I’ve spoken to about it) acknowledge this, but science is very significantly a social process which individual scientists have neither the power nor the social incentives to change.
“There are plenty of ways, though, in which mainstream science is inefficient at producing knowledge, such as improper use of statistics, publication bias, and biased interpretation of results. There are ways to do better, and most scientists (at least those I’ve spoken to about it) acknowledge this, but science is very significantly a social process which individual scientists have neither the power nor the social incentives to change.”
I am an academic. Can you suggest three concrete ways for me to improve my knowledge production, which will not leave me worse off?
Well...
Ways for science as an academic institution, or for you personally? For the latter, Luke has already done the work of creating a post on that. For the former, it’s more difficult, since a lot of productivity in science requires cooperation with the existing institution. At the least, I would suggest registering your experiments in a public database before conducting them, if any such database exists within your field, and using Bayesian statistical software to analyze your experiments (this will probably not help you at all in getting published, but if you do it in addition to regular significance testing, it should hopefully not inhibit it either).
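To make the Bayesian suggestion concrete, here is a minimal sketch of the kind of analysis such software performs, using a conjugate Beta-Binomial model. The counts and the uniform prior are invented for illustration; real analyses would use a prior informed by the field.

```python
# Sketch: a conjugate Beta-Binomial update you could run alongside
# ordinary significance testing. All numbers here are hypothetical.

def beta_binomial_posterior(successes, trials, prior_a=1.0, prior_b=1.0):
    """Update a Beta(prior_a, prior_b) prior with binomial data.

    Returns the posterior Beta parameters and the posterior mean.
    """
    post_a = prior_a + successes
    post_b = prior_b + (trials - successes)
    mean = post_a / (post_a + post_b)
    return post_a, post_b, mean

# E.g. 14 "hits" out of 20 trials, starting from a uniform Beta(1, 1) prior:
a, b, mean = beta_binomial_posterior(14, 20)
print(f"Posterior: Beta({a:.0f}, {b:.0f}), mean = {mean:.3f}")
```

The appeal of the conjugate form is that the whole posterior is available in closed form, so the update costs nothing extra beyond the significance test you were going to run anyway.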
My default answer for anything regarding Hanson is ‘signaling’. How to fix science is a good start.
Science isn’t just about getting things right or wrong; it’s an intricate signaling game. This is why most of what comes out of journals is wrong. Scientists are rewarded for publishing results, right or wrong, so they comb data for correlations which may or may not be relevant. (Statistically speaking, if you comb data 20 different ways, you’ll quite likely get at least one statistically significant correlation from sheer random chance.) Journals are rewarded for publishing sensational results, not confirmations or refutations (especially not refutations of things they published in the first place). The reward system is not set up for coming up with right answers, but for coming up with answers that are sensational and cannot be easily refuted. Being right does make a result hard to refute, which is why science is useful at all, but it’s not the only way things are made hard to refute.
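The parenthetical about combing data 20 ways can be checked directly: with 20 independent tests at a 0.05 threshold and no real effect anywhere, the chance of at least one "significant" hit is 1 - 0.95**20, about 0.64. A quick simulation (the setup is hypothetical, but the arithmetic is standard):

```python
# Sketch: why "combing data 20 ways" produces spurious hits.
# Under the null hypothesis a p-value is uniform on [0, 1], so with
# 20 independent tests at alpha = 0.05 we expect at least one false
# positive in roughly 1 - 0.95**20 ~ 64% of runs.
import random

random.seed(0)
ALPHA, TESTS, RUNS = 0.05, 20, 10_000

hits = 0
for _ in range(RUNS):
    p_values = [random.random() for _ in range(TESTS)]  # null p-values
    if any(p < ALPHA for p in p_values):
        hits += 1

print(f"Fraction of runs with >= 1 'significant' result: {hits / RUNS:.2f}")
```

So "at least one" is not guaranteed, but it happens in roughly two runs out of three, which is more than enough to keep journals fed.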
An ideal Bayesian unconstrained by signaling could completely outdo our current scientific system (as it could in all spheres of life). Even shifting our current system to be more Bayesian by abandoning the journal system and instituting pre-registration of scientific studies would be a huge upgrade. But science isn’t about knowledge; knowledge is just a very useful byproduct we get from it.
But Grognor (not, as this comment read earlier, army1987) said that “we mere mortals can do better with Bayes”, not that “an ideal Bayesian unconstrained by signaling could completely outdo our current scientific system”. Arguing, in response to cousin_it, that scientists are concerned with signaling makes the claim even stronger, and the question more compelling—“why aren’t we doing better already?”
I had taken “we” to mean 21st-century civilization in general rather than just Bayesians, and the question to mean “why is science doing so badly, if it could do much better just by using Bayes?”
I’m fairly confident that “we” refers to LW / Bayesians, especially given the response to your comment earlier. Unfortunately we’ve got a bunch of comments addressing a different question, and some providing reasons why we shouldn’t expect to be “doing better”, all of which strengthen cousin_it’s question, as Grognor claims we can. Though naturally Grognor’s intended meanings are up for grabs.
To which gold standard are you comparing us and Science to determine that we are not doing better?
There’s a problem here in that Bayesian reasoning has become quite common in the last 20 years in many sciences, so it isn’t clear who “we” should be in this sort of context.
Indeed. For anyone who has worked at all in oil & gas exploration, the LW treatment of Bayesian inference and decision theories as secret superpowers will seem perplexing. Oil companies have been basing billion dollar decisions on these methods for years, maybe decades.
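For a sense of what those billion-dollar applications look like, here is a toy version of the underlying update: a prior base rate for oil, a noisy seismic survey, and Bayes’ rule, followed by an expected-value comparison. Every probability and payoff here is invented for illustration.

```python
# Sketch of the kind of Bayesian update behind a drilling decision.
# All probabilities and payoffs are hypothetical.

def posterior_prob(prior, p_pos_given_oil, p_pos_given_dry):
    """P(oil | positive seismic survey) via Bayes' rule."""
    numerator = p_pos_given_oil * prior
    denominator = numerator + p_pos_given_dry * (1 - prior)
    return numerator / denominator

prior = 0.10  # base rate of oil among prospects like this one
p_oil = posterior_prob(prior, p_pos_given_oil=0.8, p_pos_given_dry=0.2)
print(f"P(oil | positive survey) = {p_oil:.3f}")

# Expected value of drilling, given hypothetical payoffs:
payoff_oil, cost_dry = 500e6, -50e6
expected_value = p_oil * payoff_oil + (1 - p_oil) * cost_dry
print(f"Expected value of drilling: ${expected_value:,.0f}")
```

Nothing exotic: the survey lifts a 10% prior to a posterior above the break-even point, and the drill/don’t-drill call falls out of the expected value. The sophistication in industry is in the likelihood models, not in Bayes’ rule itself.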
I am also confused about what exactly we are supposed to be doing. If we had the choice of simply becoming ideal Bayesian reasoners then we would do that, but we don’t have that option. “Debiasing” is really just “installing a new, imperfect heuristic as a patch for existing and even more imperfect hardware-based heuristics.”
I know a lot of scientists—I am a scientist—and I guess if we were capable of choosing to be Bayesian superintelligences we might be progressing a bit faster, but as it stands I think we’re doing okay with the cognitive resources at our disposal.
Not to say we shouldn’t try to be more rational. It’s just that you can’t actually decide to be Einstein.
I think ‘being a better Bayesian’ isn’t about deciding to be Einstein. I think it’s about being willing to believe things that aren’t ‘settled science’, where ‘settled science’ is the replicated and established knowledge of humanity as a whole. See Science Doesn’t Trust Your Rationality.
The true art is being able to do this without ending up a New Ager, or something. The virtue isn’t believing non-settled things. The virtue is being willing to go beyond what science currently believes, if that’s where the properly adjusted evidence actually points you. (I say ‘beyond’ because I mean to refer to scope. If science believes something, you had better believe it—but if science doesn’t have a strong opinion about something, you have no choice but to use your rationality.)