I think Yudkowsky’s analysis here isn’t putting enough weight on the social aspects. “Science”, as we know it, is a social process, in a way that Bayesian reasoning is not.
The point of science isn’t to convince yourself—it’s to convince an audience of skeptical experts.
A large group of people with different backgrounds, experiences, and so on aren’t going to agree on their priors. As a result, there won’t be any one probability for a given idea. Different readers will have different background knowledge, and that can make a given hypothesis seem more or less believable.
(This isn’t avoidable, even in principle. The Solomonoff prior of an idea is not uniquely defined, since encodings of ideas aren’t unique. You and the reviewers are not necessarily wrong in putting different priors on an idea even if you are both using a Solomonoff prior. The problem wouldn’t go away, even if you and the reviewers did have identical knowledge, which you don’t.)
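To spell that parenthetical out a bit (standard Solomonoff-induction material, stated loosely and not specific to this exchange): the prior is defined relative to a choice of universal machine $U$,

$$ M_U(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|}, $$

and the invariance theorem only says that for two universal machines $U_1$ and $U_2$ there is a constant $c$, depending on the machines but not on $x$, such that

$$ M_{U_1}(x) \;\ge\; 2^{-c}\, M_{U_2}(x). $$

That constant can be huge, so two people can each be using “a” Solomonoff prior and still legitimately assign quite different prior weights to the same hypothesis.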
Yudkowsky is right that this makes science much more cautious in updating than a pure Bayesian. But I think that’s desirable in practice. There is a lot of value to having a scientific community all use the same theoretical language and have the same set of canonical examples. It’s expensive (in both human time and money) to retrain a lot of people. Societies cannot change their minds as quickly or easily as the members can, so it makes sense to move more slowly if the previous theory is still useful.
The other issue is that the process should be difficult to subvert maliciously (or non-maliciously, through rationalization of erroneous beliefs). That results in a boatload of features that may be frustrating to those wanting to introduce unjustified, untestable propositions for fun and profit (or to justify erroneous beliefs).
Hrm. My impression is that science mostly isn’t organized to catch malicious fraud. It’s comparatively rare for outsiders to do a real audit of data or experimental method, particularly if the result isn’t super exciting. In compensation, the penalties for being caught falsifying data are ferocious—I believe it’s treated as an absolute career-ending move.
I agree that the process is pretty good at squelching over-enthusiastic rationalization. That’s an aspect I thought Yudkowsky captured quite well.
It is part of what makes it difficult to subvert: it is difficult to arrange a scheme with positive expected utility for falsifying data. At the same time, there is plenty of subtler falsification, such as discarding negative results. And when it comes to rationality: if you have a hypothesis X that is supported by arguments A, B, C, D and debunked by arguments E, F, G, H, you can count on rational, self-interested agents to put more effort into finding the first four than the last four, since the payoff for the former is bigger. (A real agent’s reasoning costs utility, and finding those arguments is expensive.)
Consider an issue like AI risk. If you can pick out the few reasons why AI would kill everyone, even very bad reasons that rely on oracular capabilities that are not implementable, you are set for life (you don’t even have to invent them; you can pick them out of fiction and simply collect and promote them together). If you can come up with a few equally good reasons why it would not, that’s a pure waste of your time as far as self-interest is concerned. Of course science does not trust you to put in equal effort when it is clearly irrational to put in equal effort, for anyone but the true angels (and even for the true angels it is rational to grab as much money as they can, as easily as they can, since it would be ill spent otherwise, and then donate it to charities and so on; so for the purpose of fact-finding you can’t trust even the selfless angels).
“It is part of what makes it difficult to subvert: it is difficult to arrange a scheme with positive expected utility for falsifying data.”
Given that one gets fame for “spectacular” discoveries, it is not difficult at all, especially in fields like biology, where there are frequently lots of confounding variables you can use for cover.
That has always been the problem with experimental science: sometimes you can’t really protect against falsification.
Actually, the thing is, given the list of known biases, one shouldn’t trust one’s own rationality, let alone the rationality of other people. (A rationalist who trusts his own rationality while knowing of those biases is just a new kind of irrationalist.) The other issue is that introducing novel hypotheses with ‘correct priors’ allows a cherry-picked selection of hypotheses, which leads to a new hypothesis being held with undue confidence it wouldn’t have if all possible hypotheses were considered. (I.e. if you want to push hypothesis A with undue confidence, you introduce hypotheses B, C, D, E, F… which would raise the probability of A, but not G, H, I, J… which would lower it.) A fully rational, even slightly selfish agent would do exactly that. It is not enough to converge once all hypotheses have been considered; one has to provide the best approximation at any given time. That pretty much makes most methods that sound great in abstract, unbounded theory entirely inapplicable.
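To make the cherry-picking point concrete, here is a toy sketch in Python (my own made-up numbers and hypothesis names, purely illustrative): the posterior you report for a favoured hypothesis A depends heavily on which rivals you admit into the comparison set.

```python
# Toy sketch: the reported posterior for hypothesis "A" depends on which
# rival hypotheses are admitted into the comparison.  All numbers are
# made up for illustration only.

def posterior(favoured, hypotheses, prior, likelihood):
    """Normalised posterior of `favoured` within the chosen hypothesis set."""
    weights = {h: prior[h] * likelihood[h] for h in hypotheses}
    return weights[favoured] / sum(weights.values())

# How well each hypothesis explains the observed data (hypothetical values).
likelihood = {"A": 0.8, "B": 0.1, "C": 0.1, "G": 0.7, "H": 0.6}
prior = {h: 0.2 for h in likelihood}  # flat prior over all five hypotheses

cherry_picked = ["A", "B", "C"]        # only weak rivals to A are considered
full_set = ["A", "B", "C", "G", "H"]   # strong rivals G and H included too

print(round(posterior("A", cherry_picked, prior, likelihood), 2))  # 0.8
print(round(posterior("A", full_set, prior, likelihood), 2))       # 0.35
```

Nothing here is anyone’s actual model; it just shows that “use the correct priors” underdetermines the answer when the set of hypotheses under consideration is itself chosen strategically.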
Also, BTW, science does trust your rationality and your ability to set up a probabilistic argument, but only when it makes sense for you to trust your own probabilistic argument: when you are actually doing bulletproof math with no gaps where errors can creep in.