Indeed. For anyone who has worked at all in oil & gas exploration, the LW treatment of Bayesian inference and decision theory as secret superpowers will seem perplexing. Oil companies have been basing billion-dollar decisions on these methods for years, if not decades.
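For concreteness, here is a toy sketch of the kind of calculation meant here: update a base rate on a survey result with Bayes' rule, then drill only if the expected value is positive. All the numbers are invented for illustration, not real exploration data.

```python
def posterior(prior, sensitivity, false_positive):
    """Bayes' rule: P(oil | positive survey result)."""
    p_positive = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.10  # assumed base rate of oil for this class of prospect
post = posterior(prior, sensitivity=0.8, false_positive=0.2)

# Simple expected-value rule: drill iff EV of drilling is positive.
payoff, cost = 500e6, 80e6  # hypothetical dollar figures
ev_drill = post * payoff - cost
print(round(post, 3), ev_drill > 0)  # → 0.308 True
```

A positive survey lifts a 10% prior to about 31%, which is enough to make an $80M well worth drilling against a $500M payoff. Real prospect evaluation is far more elaborate, but the structure is the same.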
I am also confused about what exactly we are supposed to be doing. If we had the choice of simply becoming ideal Bayesian reasoners then we would do that, but we don’t have that option. “Debiasing” is really just “installing a new, imperfect heuristic as a patch for existing and even more imperfect hardware-based heuristics.”
I know a lot of scientists—I am a scientist—and I guess if we were capable of choosing to be Bayesian superintelligences we might be progressing a bit faster, but as it stands I think we’re doing okay with the cognitive resources at our disposal.
Not to say we shouldn’t try to be more rational. It’s just that you can’t actually decide to be Einstein.
I think ‘being a better Bayesian’ isn’t about deciding to be Einstein. I think it’s about being willing to believe things that aren’t ‘settled science’, where ‘settled science’ is the replicated and established knowledge of humanity as a whole. See Science Doesn’t Trust Your Rationality.
The true art is being able to do this without ending up a New Ager, or something. The virtue isn’t believing non-settled things. The virtue is being willing to go beyond what science currently believes, if that’s where the properly adjusted evidence actually points you. (I say ‘beyond’ because I mean to refer to scope. If science believes something, you had better believe it—but if science doesn’t have a strong opinion about something, you have no choice but to use your rationality.)