*I keep seeing probability referred to as an estimation of how certain you are in a belief. And while I guess it could be argued that you should be certain of a belief relative to the number of possible worlds left or whatever, that doesn’t necessarily follow. Does the above explanation differ from how other people use probability?
I believe you’ve defined an equivalent if unusual form (or rather, your definition can be extended to an equivalent form). You need the notion of the measure of a set (because naively it can be confusing whether one infinite set is bigger than another), and measure is basically equivalent to probability; “how certain you are of a belief” is equivalent to “the measure of the worlds in which this belief is true, relative to that of the worlds that you might be in now”.
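To make the "measure of worlds" reading concrete, here is a minimal sketch; the worlds and their weights are made-up assumptions, not anything from the discussion above. Certainty in a belief is the measure of worlds where it holds, relative to the measure of all the worlds you might be in:

```python
# Hypothetical possible worlds, each with a non-negative weight (its measure).
worlds = {
    "rainy_warm": 0.2,
    "rainy_cold": 0.3,
    "dry_warm": 0.4,
    "dry_cold": 0.1,
}

def probability(belief):
    """Measure of the worlds satisfying `belief`, normalized by total measure."""
    total = sum(worlds.values())
    return sum(w for name, w in worlds.items() if belief(name)) / total

# "It is rainy" is true in the two rainy worlds, with total measure 0.5.
p_rainy = probability(lambda name: name.startswith("rainy"))
```

With finitely many worlds this is just a weighted count; the measure-theoretic machinery only earns its keep once the set of worlds is infinite.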
*Also, if probability is defined as an arbitrary estimation of how sure you are, why should those estimations follow the laws of probability? I’ve heard the Dutch book argument, so I get why there might be practical reasons for obeying them, but unless you accept a pragmatist epistemology, that doesn’t provide reasons why your beliefs are more likely to be true if you follow them. (I’ve also heard of Cox’s rules, but I haven’t been able to find a copy. And if I understand right, they say that Bayes’ theorem follows from Boolean logic, which is similar to what I’ve said above, yes?)
The only laws of probability measure I know are that the measure of the whole set is 1, and the measure of a union of disjoint subsets is the sum of their measures. I’m finding it hard to imagine how I could hold beliefs that wouldn’t conform to them. I mean, I guess it’s conceivable that I could believe that A has probability 0.1, and B has probability 0.1, and A OR B has probability 0.3, but that just seems crazy. I guess what convinces me is Dutch-booking myself; isn’t a Dutch book argument precisely an argument that another set of probabilities would be more likely to be true?
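The exact beliefs described above (P(A) = 0.1, P(B) = 0.1, P(A or B) = 0.3, with A and B mutually exclusive) can in fact be Dutch-booked. This is only an illustrative sketch, with the betting setup chosen for this example: a bookie who trades at the agent's own prices profits in every possible world.

```python
# The agent's (incoherent) prices for $1 bets on each event.
p_A, p_B, p_AorB = 0.10, 0.10, 0.30

def bookie_profit(a_happens, b_happens):
    # The bookie sells the agent a $1 bet on "A or B" for p_AorB,
    # and buys from the agent $1 bets on A and on B for p_A and p_B.
    profit = p_AorB - p_A - p_B        # money changing hands up front
    if a_happens or b_happens:
        profit -= 1.0                  # bookie pays out the "A or B" bet
    if a_happens:
        profit += 1.0                  # bookie collects on the A bet
    if b_happens:
        profit += 1.0                  # bookie collects on the B bet
    return profit

# A and B are mutually exclusive, so these are the possible worlds.
outcomes = [(True, False), (False, True), (False, False)]
profits = [bookie_profit(a, b) for a, b in outcomes]
```

In every world the payouts cancel and the bookie keeps the 0.1 difference between the agent's price for "A or B" and the sum of the prices for A and B; the guaranteed loss is exactly the additivity violation.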
*Another question: above I used propositional logic, which is okay, but it’s not exactly the crème de la crème of logics. I understand that fuzzy logics work better for a lot of things, and I’m familiar with predicate logics as well, but I’m not sure how any of them interact with probability or the use of it, although I know that technically probability doesn’t have to be binary (sets just need to be exhaustive and mutually exclusive for the Kolmogorov axioms to work, right?). I don’t know, maybe it’s just something that I haven’t learned yet and the answer really is out there?
I’m not aware of any flaws with propositional logic. If you reach a problem you can’t solve with it then by all means extend to something fancier.
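On the "probability doesn’t have to be binary" point above: right, the Kolmogorov axioms only need an exhaustive, mutually exclusive set of outcomes, of any size. A quick sketch with a six-outcome sample space (the die and its weights are illustrative assumptions) checks all three axioms directly:

```python
# An exhaustive, mutually exclusive set of six outcomes with their probabilities.
P = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}

def prob(event):
    # An event is a set of outcomes; its probability is the sum
    # of its (disjoint) outcomes' probabilities.
    return sum(P[o] for o in event)

# Axiom 1: non-negativity.
nonneg_ok = all(p >= 0 for p in P.values())
# Axiom 2: the measure of the whole set is 1.
whole_set = prob(P.keys())
# Axiom 3: additivity over disjoint events.
evens, low_odds = {2, 4, 6}, {1, 3}
additive_ok = abs(prob(evens | low_odds) - (prob(evens) + prob(low_odds))) < 1e-9
```

Nothing about the axioms cares that there are more than two outcomes, only that the outcomes partition the possibilities.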
Those are the only questions that are coming to mind right now (if I think of any more, I can probably ask them in comments). So anyone? Am I doing something wrong? Or do I feel more confused than I really am?
I think you’re trying to be too formal too fast (or else your title isn’t what you’re really interested in). Try getting a solid practical handle on Bayes in finite contexts before worrying about extending it to infinite possibilities and the real world.
I believe you’ve defined an equivalent if unusual form (or rather, your definition can be extended to an equivalent form).
Yeah, that’s what MrMind said too. Thanks!
The only laws of probability measure I know are that the measure of the whole set is 1, and the measure of a union of disjoint subsets is the sum of their measures. I’m finding it hard to imagine how I could hold beliefs that wouldn’t conform to them. I mean, I guess it’s conceivable that I could believe that A has probability 0.1, and B has probability 0.1, and A OR B has probability 0.3, but that just seems crazy.
Yeah, and I fully grasp the “measure of the whole set is 1” thing. (After all, if you’re 100% certain something is true, then that’s the only thing you think is possible). The additivity axiom is harder for me to grasp, though. It seems like it should be true intuitively, but grasping the formal version has been more difficult. *Thinking and Deciding* tries to derive it from having different bets depending on how things are worded (for example, on whether a coin comes up heads or tails versus whether the sun is up and the coin comes up heads or tails), which I grasp intellectually, but I’m having a hard time grokking it intuitively.
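One way to make the wording argument concrete (the worlds and bets here are illustrative assumptions, not the book's exact setup): a bet on "heads" and a bet on "(sun up and heads) or (sun down and heads)" pay out identically in every possible world, so they are really the same bet and must get the same price; additivity is what forces the rewording's two disjoint pieces to sum to the original probability.

```python
# All possible worlds: every combination of coin result and sun state.
worlds = [(coin, sun) for coin in ("heads", "tails") for sun in ("up", "down")]

def payoff(event, world):
    # A $1 bet on `event` pays 1 if the event holds in `world`, else 0.
    return 1.0 if event(world) else 0.0

heads = lambda w: w[0] == "heads"
heads_reworded = lambda w: ((w[1] == "up" and w[0] == "heads")
                            or (w[1] == "down" and w[0] == "heads"))

# The two differently worded bets pay the same in every world.
same_bet = all(payoff(heads, w) == payoff(heads_reworded, w) for w in worlds)
```

If you priced the reworded version differently from the plain one, you'd be paying different amounts for identical payoff streams, which is the opening a Dutch book exploits.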
I think you’re trying to be too formal too fast (or else your title isn’t what you’re really interested in). Try getting a solid practical handle on Bayes in finite contexts before worrying about extending it to infinite possibilities and the real world.
I do have a subjective feeling of success when I use Bayes (or Bayes-derived heuristics, more commonly) in my everyday life, but I really want to be sure I understand the nitty-gritty of it. Even if most of my use of it is just in justifying heuristics, I still want to be sure that I can formulate and apply them properly, you know?