I’m not trying to be mean here, but this post is completely wrong at all levels. No, Bayesian probability is not just for things that are space-like separated from you. None of the theorems from which it is derived even refer to time.
So, you know the things in your past, so there is no need for probability there.
This simply is not true. There would be no need for detectives or historical researchers if it were true.
If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn’t, and say that the part you observed is in your past, while the part you didn’t observe is space-like separated from you.
You can say it, but it’s not even approximately true. If someone flips a coin in front of me but covers it up just before it hits the table, I observe that a coin flip has occurred, but not whether it was heads or tails, and that second event is definitely within my past light-cone.
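To make the point concrete, here is a minimal sketch (my own illustration, not part of the original exchange). The flip is strictly in my past, yet my state of knowledge about it is exactly the kind of uncertainty Bayesian probability quantifies:

```python
import random

# The flip already happened: its outcome lies in my past light-cone,
# but it is hidden from me under the cup.
def covered_flip():
    return random.choice(["heads", "tails"])

# All I know is that a fair coin was flipped, so my credence is
# P(heads | a fair coin was flipped) = 0.5, regardless of *when*
# the flip occurred. A quick frequency check of that conditional:
n = 100_000
flips = [covered_flip() for _ in range(n)]
print(sum(f == "heads" for f in flips) / n)  # ≈ 0.5
```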
You may have cached that you should use Bayesian probability to deal with things you are uncertain about.
No, I cached nothing. I first spent a considerable amount of time understanding Cox’s Theorem in detail, which derives probability theory as the uniquely determined extension of classical propositional logic to a logic that handles uncertainty. There is some controversy about some of its assumptions, so I later proved and published my own theorem that arrives at the same conclusion (and more) using purely logical assumptions/requirements, all of the form, “our extended logic should retain this existing property of classical propositional logic.”
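For reference, what such derivations pin down are the ordinary laws of probability (a standard statement of the result, not a quotation of either theorem), and note that time appears nowhere in them:

$$\Pr(A \mid C) + \Pr(\lnot A \mid C) = 1, \qquad \Pr(A \wedge B \mid C) = \Pr(A \mid B \wedge C)\,\Pr(B \mid C).$$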
The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!
1) It’s not clear this is really true. It seems to me that any situation that is affected by an agent’s beliefs can be handled within Bayesian probability theory by modeling the agent (see the sketch after point 2).
2) So what?
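To illustrate point 1, here is a minimal toy sketch (the setup and numbers are my own hypothetical illustration, not anything from the post): the agent’s belief becomes just another latent variable, and an outside reasoner updates on it in the ordinary Bayesian way.

```python
import random

# Toy self-referential setup: the agent's confidence b in success
# actually raises the chance of success (hypothetical numbers).
def p_success(b):
    return 0.3 + 0.5 * b

# An outside Bayesian reasoner puts a Uniform(0, 1) prior on the
# agent's belief b and computes E[b | success] by integrating the
# prior against the likelihood (Monte Carlo over the prior here).
def posterior_mean_belief_given_success(samples=100_000):
    num = den = 0.0
    for _ in range(samples):
        b = random.random()      # prior draw of the agent's belief
        den += p_success(b)      # accumulates P(success)
        num += b * p_success(b)  # accumulates E[b * 1{success}]
    return num / den

print(posterior_mean_belief_given_success())  # ≈ 0.576 > 0.5
```

Observing success is evidence about the agent’s belief, and the belief’s causal effect on the outcome is handled like any other dependency in the model.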
Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future!
This is a complete non sequitur. Even if I grant your premise, most things in my future are unaffected by my beliefs. The date on which the Sun will expand and engulf the Earth is in no way affected by any of my beliefs. Whether you will get lucky with that woman at the bar next Friday is in no way affected by any of my beliefs. And so on.
I think you are correct that I cannot cleanly separate the things that are in my past that I know and the things that are in my past that I do not know. For example, suppose a probability is chosen uniformly at random from the unit interval, a coin with that probability of heads is then flipped a large number of times, and I see some of the results. I do not know the true probability, but the coin flips that I see really should come after the thing that determines the probability in my Bayes’ net.
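Here is a minimal sketch of that Bayes net (my own illustration of the setup just described, with an arbitrary choice of 20 flips). The latent probability sits upstream of the observed flips in the network, even though both are in my past when I reason about them:

```python
import random

random.seed(0)

# Upstream node: the thing that determines the probability.
p_true = random.random()  # p ~ Uniform(0, 1)

# Downstream nodes: flips that are conditionally independent given p.
flips = [random.random() < p_true for _ in range(20)]
heads = sum(flips)

# With a Uniform(0, 1) prior, the posterior over p is
# Beta(1 + heads, 1 + tails); its mean is Laplace's rule of succession.
posterior_mean = (heads + 1) / (len(flips) + 2)
print(f"true p = {p_true:.3f}, posterior mean = {posterior_mean:.3f}")
```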
[META] As a general heuristic, when you encounter a post from someone otherwise reputable that seems completely nonsensical to you, it may be worth attempting to find some reframing of it that causes it to make sense—or at the very least, make more sense than before—instead of addressing your remarks to the current (nonsensical-seeming) interpretation. The probability that the writer of the post in question managed to completely lose their mind while writing said post is significantly lower than both the probability that you have misinterpreted what they are saying, and the probability that they are saying something non-obvious which requires interpretive effort to be understood. To maximize your chances of getting something useful out of the post, therefore, it is advisable to condition on the possibility that the post is not saying something trivially incorrect, and see where that leads you. This tends to be how mutual understanding is built, and is a good model for how charitable communication works. Your comment, to say the least, was neither.
This is the first thing I’ve read from Scott Garrabrant, so “otherwise reputable” doesn’t apply here. And I have frequently seen things written on LessWrong that display pretty significant misunderstandings of the philosophical basis of Bayesian probability, so that gives me a high prior for encountering more of them.