Do you actually do this—“Oh, not P! I must be the pope.”—or do you just notice this—“Not P, so everything’s true. Where do I go from here?”
The reason you shouldn’t do this is that you never really learn not P; you just learn evidence against P, which you should update on with Bayes’ rule. If you want to understand this process more intuitively (and you’ve already read the Sequences and are still confused), I would recommend this short tutorial or studying belief propagation in Bayesian networks. I don’t know a great source for the intuitions behind the latter, but units 3 and 4 of the online Stanford AI class might help.
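To make that concrete, here is a minimal sketch in Python (all the numbers are made up for illustration): evidence against P shifts your probability down by a likelihood ratio, but it never drives it to zero, so you never actually hold “not P” with certainty.

```python
# Minimal sketch: updating belief in P on evidence against P,
# instead of jumping straight to "not P" (numbers are illustrative).

def bayes_update(prior, likelihood_if_p, likelihood_if_not_p):
    """Return P(P | evidence) via Bayes' rule."""
    joint_p = prior * likelihood_if_p
    joint_not_p = (1 - prior) * likelihood_if_not_p
    return joint_p / (joint_p + joint_not_p)

belief = 0.9                             # fairly confident that P
belief = bayes_update(belief, 0.2, 0.8)  # evidence 4:1 against P
print(f"{belief:.2f}")                   # 0.69: weakened, not refuted
```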
I’ve actually done that class and gotten really good grades.
Looking at it, it seems I have automatic generation of nodes for new statements, and the creation of a new node does not check for an already existing node for its negation.
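For illustration, here is a hypothetical sketch of the missing check (the string encoding of statements, with a “not ” prefix, is purely an assumption): canonicalize each new statement so that “not P” resolves to the existing node for P instead of spawning an independent node.

```python
# Hypothetical sketch: node creation that checks whether a statement
# is the negation of an existing node (string encoding is illustrative).

class BeliefNet:
    def __init__(self):
        self.nodes = {}  # canonical statement -> probability it is true

    def _canonical(self, statement):
        """Strip a 'not ' prefix so P and not P share one node."""
        if statement.startswith("not "):
            return statement[4:], True
        return statement, False

    def get_or_create(self, statement, prior=0.5):
        stmt, negated = self._canonical(statement)
        if stmt not in self.nodes:
            self.nodes[stmt] = prior
        p = self.nodes[stmt]
        return 1 - p if negated else p

net = BeliefNet()
net.get_or_create("P", prior=0.9)
print(round(net.get_or_create("not P"), 2))  # 0.1, same node as P
```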
To complicate matters further, I don’t go “I’m the pope” or “all statements are true”; I go “NOT Bayes’ theorem, NOT induction, and NOT Occam’s razor!”
Well, one mathematically right thing to do is to make a new node, descending from both of the other nodes, representing E = (P and not P), and then observe not E.
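A brute-force sketch of that, by enumeration (the priors for the two independent nodes are illustrative assumptions): forward propagation infers that E is likely, and clamping E to false then rules out the contradictory worlds and weakens both parents.

```python
from itertools import product

# Two (buggy, independent) nodes: A = "P", B = "not P".
# E = (A and B) is the contradiction node. Priors are illustrative.
priors = {"A": 0.6, "B": 0.7}

def prob(world):
    """Joint probability of an assignment, treating A and B as independent."""
    p = 1.0
    for var, val in world.items():
        p *= priors[var] if val else 1 - priors[var]
    return p

worlds = [dict(zip("AB", vals)) for vals in product([True, False], repeat=2)]

# Infer P(E) by propagating forward from the parents.
p_e = sum(prob(w) for w in worlds if w["A"] and w["B"])
print(f"inferred P(E) = {p_e:.2f}")  # 0.42: likely, but inferred, not observed

# Observe not E (law of noncontradiction): drop worlds where A and B both hold.
consistent = [w for w in worlds if not (w["A"] and w["B"])]
z = sum(prob(w) for w in consistent)
p_a = sum(prob(w) for w in consistent if w["A"]) / z
print(f"posterior P(P) = {p_a:.2f}")  # 0.31: belief in P drops, no explosion
```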
Did you read the first tutorial? Do you find the process of belief-updating on causal nets intuitive, or do you just understand the math? How hard would it be for you to explain why it works in the language of the first tutorial?
Strictly speaking, causal networks only apply to situations where the number of variables does not change, but the intuitions carry over.
That’s what I try to do; the problem is I end up observing E to be true, and E leads to an “everything” node.
I’m not sure how well I understand the math, but I feel like I probably do...
You don’t observe E to be true; you infer it to be (very likely) true by propagating from P and from not P. You observe it to be false using the law of noncontradiction.
Parsimony suggests that if you think you understand the math, it’s because you understand it. Understanding Bayesianism seems easier than fixing a badly-understood flaw in your brain’s implementation of it.
How can I get this law of noncontradiction? It seems like a useful thing to have.