How do I stop my brain from going: “I believe P and I believe something that implies not P → principle of explosion → all statements are true!” and instead go “I believe P and I believe something that implies not P → one of my beliefs is incorrect”? It doesn’t happen too often, but it’d be nice to have an actual formal refutation for when it does.
Do you actually do this—“Oh, not P! I must be the pope.”—or do you just notice this—“Not P, so everything’s true. Where do I go from here?”.
If you want to know why you shouldn’t do this, it’s because you never really learn not P; you just learn evidence against P, which you should update on with Bayes’ rule. If you want to understand this process more intuitively (and you’ve already read the sequences and are still confused), I would recommend this short tutorial, or studying belief propagation in Bayesian networks; I don’t know a great source for the intuitions behind the latter, but units 3 and 4 of the online Stanford AI class might help.
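To make that concrete, here is a minimal sketch of a single Bayes update, with made-up numbers for the prior and likelihoods; the point is just that evidence against P lowers P(P) rather than setting it to 0:

```python
# Toy Bayes update: evidence against P lowers P(P) but never forces it to 0,
# so nothing like "learning not P with certainty" ever happens.
# The prior and likelihoods below are made-up numbers for illustration.

def bayes_update(prior_p, likelihood_given_p, likelihood_given_not_p):
    """Return P(P | evidence) via Bayes' rule."""
    joint_p = prior_p * likelihood_given_p
    joint_not_p = (1 - prior_p) * likelihood_given_not_p
    return joint_p / (joint_p + joint_not_p)

posterior = bayes_update(
    prior_p=0.95,                 # started out quite confident in P
    likelihood_given_p=0.05,      # the evidence would be surprising if P were true
    likelihood_given_not_p=0.80,  # and unsurprising if P were false
)
print(round(posterior, 2))        # 0.54: P takes a big hit, but nothing explodes
```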
I’ve actually done that class and gotten really good grades.
Looking at it, it seems I have automatic generation of nodes for new statements, and the creation of a new node does not check for an already existing node for its inversion (roughly the bookkeeping sketched below).
To complicate matters further, I don’t go “I’m the pope” or “all statements are true”; I go “NOT Bayes’ theorem, NOT induction, and NOT Occam’s razor!”
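If it helps, here is a toy sketch of that missing check; the class and the string-based negation are made up purely for illustration, not a claim about how anyone’s head actually works:

```python
# Toy belief store illustrating the missing check described above: adding a
# statement should notice an existing node for its negation instead of
# silently creating an independent, contradictory node.

class BeliefStore:
    def __init__(self):
        self.nodes = {}  # statement -> probability assigned to it

    def add(self, statement, probability):
        negation = self._negate(statement)
        if negation in self.nodes:
            # Don't create a second node: "not P" is just 1 - P(P).
            # (Overwriting is crude; a real fix would Bayes-update instead.)
            self.nodes[negation] = 1 - probability
        else:
            self.nodes[statement] = probability

    @staticmethod
    def _negate(statement):
        return statement[4:] if statement.startswith("not ") else "not " + statement

store = BeliefStore()
store.add("P", 0.9)
store.add("not P", 0.8)   # caught: the existing P node gets adjusted instead
print(store.nodes)        # roughly {'P': 0.2}; no duplicate "not P" node appears
```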
Well, one mathematically right thing to do is to make a new node, descending from both of the other nodes, representing E = (P and not P), and then observe not E (a toy version of this is sketched below).
Did you read the first tutorial? Do you find the process of belief-updating on causal nets intuitive, or do you just understand the math? How hard would it be for you to explain why it works in the language of the first tutorial?
Strictly speaking, causal networks only apply to situations where the number of variables does not change, but the intuitions carry over.
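Here is a toy version of that construction, with made-up priors and brute-force enumeration standing in for real belief propagation: two parent nodes for the conflicting beliefs, a child E = (P and not P), and conditioning on E being false:

```python
# Toy version of the E = (P and not P) construction: enumerate the worlds of a
# three-node net and condition on not E. Priors are made-up numbers.
from itertools import product

prior = {"P": 0.9, "notP": 0.8}   # the two conflicting beliefs, held separately

def weight(assignment):
    """Joint probability of one truth assignment to the parent nodes."""
    w = 1.0
    for name, value in assignment.items():
        w *= prior[name] if value else 1 - prior[name]
    return w

posterior = {"P": 0.0, "notP": 0.0}
total = 0.0
for p_val, notp_val in product([True, False], repeat=2):
    if p_val and notp_val:   # E would be true in this world; the law of
        continue             # noncontradiction says to throw it out
    w = weight({"P": p_val, "notP": notp_val})
    total += w
    posterior["P"] += w * p_val
    posterior["notP"] += w * notp_val

posterior = {k: v / total for k, v in posterior.items()}
print(posterior)  # ~{'P': 0.64, 'notP': 0.29}: the weaker belief takes most of the hit
```

Observing not E doesn’t pick a winner for you; it just redistributes the probability mass over the worlds where the contradiction never happens, which is exactly the “figure out which belief is wrong” behavior being asked for.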
That’s what I try to do; the problem is I end up observing E to be true. And E leads to an “everything” node.
I’m not sure how well I understand the math, but I feel like I probably do...
You don’t observe E to be true, you infer it to be (very likely) true by propagating from P and from not P. You observe it to be false using the law of noncontradiction.
Parsimony suggests that if you think you understand the math, it’s because you understand it. Understanding Bayesianism seems easier than fixing a badly-understood flaw in your brain’s implementation of it.
How can I get this law of noncontradiction? It seems like a useful thing to have.
The reason is that you don’t believe anything with logical conviction: if your “axioms” imply absurdity, you discard the “axioms” as untrustworthy, thus refuting the arguments for their usefulness (arguments that always precede any beliefs, if you look for them). Why do I believe this? My brain tells me so, and its reasoning is potentially suspect.
I think I’ve found the problem: I don’t have any good intuitive notion of absurdity. The only clear association I have with it is under “absurdity heuristic” as “a thing to ignore”.
That is: It’s not self-evident to me that what it implies IS absurd. After all, it was implied by a chain of logic I grok and can find no flaw in.
I used “absurdity” in the technical math sense.
To the (mostly social) extent that concepts were useful to your ancestors, one is going to lead to better decisions than the other, and so you should expect to have evolved the latter intuition. (You trust two friends, and then one of them tells you the other is lying- you feel some consternation of the first kind, but then you start trying to figure out which one is trustworthy.)
It seems a lot of intuitions all humans are supposed to have were overwritten by noise at some point...