Those are not different models. They are different interpretations of the utility of probability in different classes of applications.
Though I’m not sure how you would find out the frequency at which hypotheses turn out to be true the way you figure out the frequency at which a coin comes up heads. But that could just be my not being as familiar with thinking in terms of the Frequentist model.
You do it exactly the same as in your Bayesian example.
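Concretely, one way to measure the frequency at which hypotheses turn out to be true is calibration tracking: record predictions along with their stated probabilities, then tally how often predictions at each confidence level came true, exactly as you would tally heads across a series of coin flips. A minimal sketch (the prediction records here are invented purely for illustration):

```python
from collections import defaultdict

# Hypothetical records: (stated probability, whether the hypothesis turned out true)
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

# Group outcomes by stated confidence and compute the observed frequency of truth,
# just as one would compute the frequency of heads for a coin.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

for p in sorted(buckets):
    outcomes = buckets[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> observed {freq:.0%} over {len(outcomes)} hypotheses")
```

A well-calibrated reasoner's observed frequencies track the stated probabilities; the tallying itself is the same frequency-counting used for any repeatable event.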
I’m sorry, but this Bayesian vs Frequentist conflict is for the most part non-existent. If you use probability to model the outcome of an inherently random event, people have called that “frequentist.” If instead you model the event as deterministic, but your knowledge over the outcome as uncertain, then people have applied the label “bayesian.” It’s the same probability, just used differently.
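The point can be made concrete with a coin. A "frequentist" treatment reads probability as the long-run frequency of a random event; a "Bayesian" treatment treats the coin's bias as a fixed unknown and puts a distribution over our knowledge of it. Both run on the same probability calculus. A sketch, using invented flip data and the standard Beta–Bernoulli update for the Bayesian side:

```python
# Same data, two framings of the same probability calculus.
flips = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = heads; invented data

# "Frequentist" framing: probability as the observed frequency of a random event.
heads = sum(flips)
n = len(flips)
freq_estimate = heads / n

# "Bayesian" framing: the bias is deterministic but unknown; our uncertainty over it
# is a Beta distribution, updated by the same flips (uniform Beta(1, 1) prior).
alpha, beta = 1 + heads, 1 + (n - heads)
posterior_mean = alpha / (alpha + beta)

print(f"frequency estimate:     {freq_estimate:.2f}")
print(f"posterior mean of bias: {posterior_mean:.2f}")
```

As the number of flips grows, the posterior mean converges to the observed frequency; the two framings differ in what the probability is *about*, not in the arithmetic.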
It’s like how if you apply your knowledge of mechanics to bridge and road building, it’s called civil engineering, but if you apply it to buildings it is architecture. It’s the same mechanics either way, just applied differently.
One of the failings of the sequences is the amount of emphasis that is placed on “Frequentist” vs “Bayesian” interpretations. The conflict between the two exists mostly in Yudkowsky’s mind. Actual statisticians use probability to model events and knowledge of events simultaneously.
Regarding the other points, every single example you gave involves using empirical data that had not sufficiently propagated, which is exactly the sort of use I am in favor of. So I don’t know what it is that you disagree with.
I’m sorry, but this Bayesian vs Frequentist conflict is for the most part non-existent.
[…]
One of the failings of the sequences is the amount of emphasis that is placed on “Frequentist” vs “Bayesian” interpretations. The conflict between the two exists mostly in Yudkowsky’s mind. Actual statisticians use probability to model events and knowledge of events simultaneously.
I know a fellow who has a Ph.D. in statistics and works for the Department of Defense on cryptography. I think he largely agrees with your point: professional statisticians need to use both methods fluidly in order to do useful work. But he also doesn’t claim that they’re both secretly the same thing. He says that strong Bayesianism is useless in some cases that Frequentism gets right, and vice versa, though his sympathies lie more with the Frequentist position on pragmatic grounds (i.e. that methods that are easier to understand in a Frequentist framing tend to be more useful in a wider range of circumstances in his experience).
I think the debate is silly. It’s like debating which model of hyperbolic geometry is “right”. Different models highlight different intuitions about the formal system, and they make different aspects of the formal theorems more or less relevant to specific cases.
I think Eliezer’s claim is that as a matter of psychology, using a Bayesian model of probability lets you think about the results of probability theory as laws of thought, and from that you can derive some useful results about how one ought to think and what results from experimental psychology ought to capture one’s attention. He might also be claiming somewhere that Frequentism is in fact inconsistent and therefore is simply a wrong model to adopt, but honestly if he’s arguing that then I’m inclined to ignore him because people who know a lot more about Frequentism than he does don’t seem to agree.
But there is a debate, even if I think it’s silly and quite pointless.
And also, the axiomatic models are different, even if statisticians use both.
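For what it’s worth, the usual formal contrast is between the Kolmogorov axioms, which define probability as a measure over events, and the Cox/Jaynes approach, which derives the same rules as constraints on degrees of plausibility. Roughly:

```latex
% Kolmogorov: probability is a measure P on a sigma-algebra of events
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \quad \text{for disjoint } A_i

% Cox/Jaynes: plausibilities obey the product and sum rules,
% from which Bayes' theorem follows
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```

The resulting calculus is the same; the axiomatizations differ in what they take as primitive (events and measures vs. degrees of belief), which is part of why statisticians can move between them freely.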
Regarding the other points, every single example you gave involves using empirical data that had not sufficiently propagated, which is exactly the sort of use I am in favor of. So I don’t know what it is that you disagree with.
The concern about AI risk is also the result of an attempt to propagate implications of empirical data. It just goes farther than what I think you consider sensible, and I think you’re encouraging an unnecessary limitation on human reasoning power by calling such reasoning unjustified.
I agree: it should itch that there haven’t been empirical tests of several of the key ideas involved in AI risk, and I think there should be a visceral sense of making bullshit up attached to this speculation unless and until we can find ways to do those empirical tests.
But I think it’s the same kind of stupid to ignore these projections as it is to ignore that you already know how your New Year’s Resolution isn’t going to work. It’s not obviously as strong a stupidity, but the flavor is exactly the same.
If we could banish that taste from our minds, then even without better empiricism we would be vastly stronger.
I’m concerned that you’re underestimating the value of this strength, and viewing its pursuit as a memetic hazard.
I don’t think we have to choose between massively improving our ability to make correct clever arguments and massively improving the drive and cleverness with which we ask nature its opinion. I think we can have both, and I think that getting AI risk and things like it right requires both.
But just as measuring everything about yourself isn’t really a fully mature expression of empiricism, I’m concerned that the memes you’re spreading in the name of mature empiricism are retarding the art of finishing thinking.
I don’t think the two have to be opposed.
And I’m under the impression that you think otherwise.
Those are not different models. They are different interpretations of the utility of probability in different classes of applications.
You do it exactly the same as in your Bayesian example.
That’s what a model is in this case.
How sure are you of that?