Perhaps you’re using a Frequentist definition of “likelihood” whereas I’m using a Bayesian one?
There’s a difference? Probability is probability.
There very much is a difference.
Probability is a mathematical construct. Specifically, it’s a special kind of measure p on a measure space M such that p(M) = 1 and p obeys a set of axioms that we refer to as the axioms of probability (where an “event” is to be taken as any measurable subset of M).
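To make that concrete, here is a minimal sketch (my own toy example, not something from this discussion): a two-outcome space with p defined by summing weights, plus checks of the normalization, non-negativity, and additivity axioms.

```python
from itertools import combinations

# Toy model of the formal setup above (my own example, for illustration only):
# a finite sample space M, and a probability measure p defined on every subset
# of M by summing pointwise weights.
M = ("heads", "tails")
weight = {"heads": 0.5, "tails": 0.5}

def p(event):
    """Measure of an event: the summed weight of the outcomes it contains."""
    return sum(weight[outcome] for outcome in event)

def events(space):
    """Every subset of a finite space, i.e. every measurable event."""
    return [frozenset(c) for r in range(len(space) + 1) for c in combinations(space, r)]

# Checking the axioms for this particular p:
assert abs(p(M) - 1.0) < 1e-12                    # normalization: p(M) = 1
assert all(p(E) >= 0 for E in events(M))          # non-negativity
for A in events(M):                               # additivity over disjoint events
    for B in events(M):
        if not (A & B):
            assert abs(p(A | B) - (p(A) + p(B))) < 1e-12
```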
This is a bit like highlighting that Euclidean geometry is a mathematical construct based on following thus-and-such axioms for relating thus-and-such undefined terms. Of course, in normal ways of thinking we point at lines and dots and so on, pretend those are the things that the undefined terms refer to, and proceed to show pictures of what the axioms imply. Formally, mathematicians refer to this as building a model of an axiomatic system. (Another example of this is elliptic geometry, which is a type of non-Euclidean geometry, which you can model as doing geometry on a sphere.)
The Frequentist and Bayesian models of probability theory are relevantly different. They both think of M as the space of possible results (usually called the “sample space” but not always) and a measurable subset E ⊆ M as an “event”. But they use different models of p:
Frequentists suggest that were you to look at how often all of the events in M occur, the one we’re looking at (i.e., E) would occur at a certain frequency, and that’s how we should interpret p(E). E.g., if M is the set of results from flipping a fair coin and E is “heads”, then it is a property of the setup that p(E) = 0.5. A different way of saying this is that Frequentists model p as describing a property of that which they are observing—i.e., that probability is a property of the world.
Bayesians, on the other hand, model p as describing their current state of confidence about the true state of the observed phenomenon. In other words, Bayesians model p as being a property of mental models, not of the world. So if M is again the results from flipping a fair coin and E is “heads”, then to a Bayesian the statement p(E) = 0.5 is equivalent to saying “I equally expect getting a heads to not getting a heads from this coin flip.” To a Bayesian, it doesn’t make sense to ask what the “true” probability is that their subjective probability is estimating; the very question violates the model of p by trying to sneak in a Frequentist presumption.
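If it helps to see the two readings side by side, here’s a toy sketch (the simulation and the Beta(1, 1) prior are assumptions I’m making for the example, not anything claimed above):

```python
import random

random.seed(0)

# Frequentist reading: p(heads) = 0.5 is the limiting frequency of heads across
# repeated flips of this setup, i.e. a property of the coin and the world.
flips = [random.random() < 0.5 for _ in range(100_000)]
print(f"long-run frequency of heads: {sum(flips) / len(flips):.3f}")  # ~0.5

# Bayesian reading: p(heads) = 0.5 summarizes my current confidence about the
# next flip. With a uniform Beta(1, 1) prior on the coin's bias (my choice for
# this sketch), the predictive probability starts at 0.5 and moves as evidence
# comes in.
alpha, beta = 1.0, 1.0          # pseudo-counts for heads and tails
for saw_heads in flips[:10]:    # update on the first ten observed flips
    if saw_heads:
        alpha += 1
    else:
        beta += 1
print(f"predictive p(heads) after 10 flips: {alpha / (alpha + beta):.3f}")
```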
Now let’s suppose that M is a hypothesis space, including some sector for hypotheses that haven’t yet been considered. When we say that a given hypothesis H is “likely”, we’re working within a partial model, but we haven’t yet said what “likely” means. The formalism is easy: we require that H ⊆ M is measurable, and the statement that “it’s likely” means that p(H) is larger than the measure of most other measurable subsets of M (and often we mean something stronger, like p(H) > 0.5). But we haven’t yet specified in our model what p(H) means. This is where the difference between Frequentism and Bayesianism matters. A Frequentist would say that the probability is a property of the hypothesis space, and noticing H doesn’t change that. (I’m honestly not sure how a Frequentist thinks about iterating over a hypothesis space to suggest that H in fact would occur at a frequency of p(H) in the limit—maybe by considering the frequency in counterfactual worlds?) A Bayesian, by contrast, will say that p(H) is their current confidence that H is the right hypothesis.
What I’m suggesting, in essence, is that figuring out which hypothesis H ⊆ M is worth testing is equivalent to moving from p to p’ in the space of probability measures on M in a way that causes p’(H) > p(H). This is coming from using a Bayesian model of what p is.
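As an entirely made-up sketch of that move from p to p’: take M to be three candidate hypotheses about a coin’s bias, start with a uniform prior p, and condition on a handful of observations; the resulting measure p’ puts more mass on the hypothesis that has become worth testing.

```python
# Hypothesis space and observations are invented purely for illustration.
hypotheses = {"fair": 0.5, "heads-biased": 0.8, "tails-biased": 0.2}
p = {h: 1 / 3 for h in hypotheses}              # prior measure over M

observations = ["H", "H", "T", "H", "H"]        # assumed evidence for the sketch

def likelihood(h, data):
    """Probability of the observed flips if hypothesis h were true."""
    bias = hypotheses[h]
    result = 1.0
    for flip in data:
        result *= bias if flip == "H" else (1 - bias)
    return result

# Conditioning on the observations moves us from p to a new measure p'.
unnormalized = {h: p[h] * likelihood(h, observations) for h in hypotheses}
total = sum(unnormalized.values())
p_prime = {h: mass / total for h, mass in unnormalized.items()}

for h in hypotheses:
    print(f"{h:13s}  p = {p[h]:.3f}   p' = {p_prime[h]:.3f}")
# "heads-biased" ends up with p'(H) > p(H): it is now the hypothesis worth testing.
```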
Of course, if you’re using a Frequentist model of p, then “most likely hypothesis” actually refers to a property of the hypothesis space—though I’m not sure how you would find out the frequency at which hypotheses turn out to be true the way you figure out the frequency at which a coin comes up heads. But that could just be my not being as familiar with thinking in terms of the Frequentist model.
I’ll briefly note that although I find the Bayesian model more coherent with my sense of how the world works on a day-by-day basis, I think the Frequentist model makes more sense when thinking about quantum physics. The type of randomness we find there isn’t just about confidence, but is in fact a property of the quantum phenomena in question. In this case a well-calibrated Bayesian has to give a lot of probability mass to the hypothesis that there is a “true probability” in some quantum phenomena, which makes sense if we switch the model of p to be Frequentist.
But in short:
Yes, there’s a difference.
And things like “probability” and “belief” and “evidence” mean different things depending on what model you use.
What I’m saying is that this shouldn’t change your actual beliefs—it will flush out some stale caching, or at best identify an inconsistent belief, including empirical data that you haven’t fully updated on. But it does not, by itself, constitute evidence.
Yep, we disagree.
I think the disagreement is on two fronts. One is based on using different models of probability, which is basically not an interesting disagreement. (Arguing over which definition to use isn’t going to make either of us smarter.) But I think the other is substantive. I’ll focus on that.
In short, I think you underestimate the power of noticing implications of known facts. I think that if you look at a few common or well-known examples of incomplete deduction, it becomes pretty clear that figuring out how to finish thinking would be intensely powerful:
Many people make resolutions to exercise, be nicer, eat more vegetables, etc. And while making those resolutions, they often really think they mean it this time. And yet, there’s often a voice of doubt in the back of the mind, as though saying “Come on. You know this won’t work.” But people still quite often spend a bunch of time and money trying to follow through on their new resolution—often failing for reasons that they kind of already knew would happen (and yet often feeling guilty for not sticking to their plan!).
Religious or ideological deconversion often comes from letting in facts that are already known. E.g., I used to believe that the results of parapsychological research suggested some really important things about how to survive after physical death. I knew all the pieces of info that finally changed my mind months before my mind actually changed. I had even done experiments to test my hypotheses and it still took months. I’m under the impression that this is normal.
Most people reading this already know that if they put a ton of work into emptying their email inbox, they’ll feel good for a little while, and then it’ll fill up again, complete with the sense of guilt for not keeping up with it. And yet, somehow, it always feels like the right thing to do to go on an inbox-emptying flurry, and then get around to addressing the root cause “later” or maybe try things that will fail after a month or two. This is an agonizingly predictable cycle. (Of course, this isn’t how it goes for everyone, but it’s common enough that well over half the people who attend CFAR workshops seem to relate to it.)
Most of Einstein’s work in raising special relativity to consideration consisted of saying “Let’s take the Michelson-Morley result at face value and see where it goes.” Note that he is now considered the archetypal example of a brilliant person primarily for his ability to highlight worthy hypotheses via running with the implications of what is already known or supposed.
Ignaz Semmelweis found that hand-washing dramatically reduced mortality in important cases in hospitals. He was ignored, criticized, and committed to an insane asylum where guards beat him to death. At a cultural level, the fact that whether Semmelweis was right was (a) testable and (b) independent of opinion failed to propagate until after Louis Pasteur gave the medical community justification to believe that hand-washing could matter. This is a horrendous embarrassment, and thousands of people died unnecessarily because of a cultural inability to finish thinking. (Note that this also honors the need for empiricism—but the point here is that the ability to finish thinking was a prerequisite for empiricism mattering in this case.)
I could keep going. Hopefully you could too.
But my point is this:
Please note that there’s a baby in that bathwater you’re condemning as dirty.
Those are not different models. They are different interpretations of the utility of probability in different classes of applications.
though I’m not sure how you would find out the frequency at which hypotheses turn out to be true the way you figure out the frequency at which a coin comes up heads. But that could just be my not being as familiar with thinking in terms of the Frequentist model
You do it exactly the same as in your Bayesian example.
I’m sorry, but this Bayesian vs Frequentist conflict is for the most part non-existent. If you use probability to model the outcome of an inherently random event, people have called that “frequentist.” If instead you model the event as deterministic, but your knowledge over the outcome as uncertain, then people have applied the label “bayesian.” It’s the same probability, just used differently.
It’s like how if you apply your knowledge of mechanics to bridge and road building, it’s called civil engineering, but if you apply it to buildings it is architecture. It’s still mechanical engineering either way, just applied differently.
One of the failings of the sequences is the amount of emphasis that is placed on “Frequentist” vs “Bayesian” interpretations. The conflict between the two exists mostly in Yudkowsky’s mind. Actual statisticians use probability to model events and knowledge of events simultaneously.
Regarding the other points, every single example you gave involves using empirical data that had not sufficiently propagated, which is exactly the sort of use I am in favor of. So I don’t know what it is that you disagree with.
I’m sorry, but this Bayesian vs Frequentist conflict is for the most part non-existent.
[…]
One of the failings of the sequences is the amount of emphasis that is placed on “Frequentist” vs “Bayesian” interpretations. The conflict between the two exists mostly in Yudkowsky’s mind. Actual statisticians use probability to model events and knowledge of events simultaneously.
I know a fellow who has a Ph.D. in statistics and works for the Department of Defense on cryptography. I think he largely agrees with your point: professional statisticians need to use both methods fluidly in order to do useful work. But he also doesn’t claim that they’re both secretly the same thing. He says that strong Bayesianism is useless in some cases that Frequentism gets right, and vice versa, though his sympathies lie more with the Frequentist position on pragmatic grounds (i.e. that methods that are easier to understand in a Frequentist framing tend to be more useful in a wider range of circumstances in his experience).
I think the debate is silly. It’s like debating which model of hyperbolic geometry is “right”. Different models highlight different intuitions about the formal system, and they make different aspects of the formal theorems more or less relevant to specific cases.
I think Eliezer’s claim is that as a matter of psychology, using a Bayesian model of probability lets you think about the results of probability theory as laws of thought, and from that you can derive some useful results about how one ought to think and what results from experimental psychology ought to capture one’s attention. He might also be claiming somewhere that Frequentism is in fact inconsistent and therefore is simply a wrong model to adopt, but honestly if he’s arguing that then I’m inclined to ignore him because people who know a lot more about Frequentism than he does don’t seem to agree.
But there is a debate, even if I think it’s silly and quite pointless.
And also, the axiomatic models are different, even if statisticians use both.
Regarding the other points, every single example you gave involves using empirical data that had not sufficiently propagated, which is exactly the sort of use I am in favor of. So I don’t know what it is that you disagree with.
The concern about AI risk is also the result of an attempt to propagate implications of empirical data. It just goes farther than what I think you consider sensible, and I think you’re encouraging an unnecessary limitation on human reasoning power by calling such reasoning unjustified.
I agree, it should itch that there haven’t been empirical tests of several of the key ideas involved in AI risk, and I think there should be a visceral sense of making bullshit up attached to this speculation unless and until we can find ways to do those empirical tests.
But I think it’s the same kind of stupid to ignore these projections as it is to ignore that you already know how your New Year’s Resolution isn’t going to work. It’s not obviously as strong a stupidity, but the flavor is exactly the same.
If we could banish that taste from our minds, then even without better empiricism we would be vastly stronger.
I’m concerned that you’re underestimating the value of this strength, and viewing its pursuit as a memetic hazard.
I don’t think we have to choose between massively improving our ability to make correct clever arguments and massively improving the drive and cleverness with which we ask nature its opinion. I think we can have both, and I think that getting AI risk and things like it right requires both.
But just as measuring everything about yourself isn’t really a fully mature expression of empiricism, I’m concerned that the memes you’re spreading in the name of mature empiricism are retarding the art of finishing thinking.
I don’t think that they have to oppose.
And I’m under the impression that you think otherwise.