That’s because it is. Yes, power rankings, the way I described them working, are isomorphic to this:
A Bayesian agent has two beliefs, X and Y. If it discovers that X and Y are evidence against each other (Pr(X | Y) < Pr(X) & Pr(Y | X) < Pr(Y)), which belief will be updated more?
which is isomorphic to
How much evidence for X and how much for Y?
but those questions don’t cause most human brains to give good answers.
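To make the first of those questions concrete, here is a minimal sketch in Python (mine, not from the thread) of one special case: X and Y start out independent, with placeholder priors of 0.9 and 0.6, and the agent then learns that they cannot both be true, which makes each one evidence against the other.

```python
# Toy version of the question above: two beliefs X and Y, initially
# independent, and the agent then learns they cannot both be true.
# Which belief gives more ground?  (Priors are arbitrary illustrations.)

p_x, p_y = 0.9, 0.6  # prior credences; X has more evidence behind it

# Joint distribution over (X, Y) under prior independence.
joint = {
    (True, True):   p_x * p_y,
    (True, False):  p_x * (1 - p_y),
    (False, True):  (1 - p_x) * p_y,
    (False, False): (1 - p_x) * (1 - p_y),
}

# Condition on "not (X and Y)".
z = sum(p for (x, y), p in joint.items() if not (x and y))
post_x = sum(p for (x, y), p in joint.items() if not (x and y) and x) / z
post_y = sum(p for (x, y), p in joint.items() if not (x and y) and y) / z

print(post_x, p_x - post_x)  # ~0.783, a drop of ~0.117
print(post_y, p_y - post_y)  # ~0.130, a drop of ~0.470
```

Under these assumptions the answer tracks the second question: Y, the belief with less evidence behind it, absorbs most of the update.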
I think that thinking in terms of probability is going to be more conducive to careful thinking than thinking in terms of power. We’ve got a lot of emotional connections and alternative definitions attached to “power” that we don’t really want interfering with our reasoning when what we’re actually talking about is probability.
I kinda disagree here. If you show me an exact Bayesian network, I can read off it the degree to which evidence for one proposition is evidence against another. If you don’t give an exact interpretation in probability theory, then isn’t talking about “probability” instead of “power” just pretending to precision? Jumping to “probability” is something that has to be earned, and to me it’s not yet obvious that for all Bayesian graphs, if P(A) > P(B) > 0.5, then learning the truth of a descendant node which proves !(A & B) will cause B to decrease in probability more than A.
and to me it’s not yet obvious that for all Bayesian graphs, if P(A) > P(B) > 0.5, then learning the truth of a descendant node which proves !(A & B) will cause B to decrease in probability more than A.
Consider learning “not A,” for example.
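A minimal sketch of why this is a live worry (again with made-up priors and an independence assumption, nothing from the thread): conditioning on exactly !(A & B) does hit the less probable belief B harder, but a descendant node that proves !A, and therefore also proves !(A & B), sends A to zero while leaving B untouched.

```python
# Two different ways a descendant node can prove "not (A and B)",
# with different outcomes.  Priors and the independence assumption
# are illustrative only.

p_a, p_b = 0.9, 0.7  # P(A) > P(B) > 0.5

joint = {
    (True, True):   p_a * p_b,
    (True, False):  p_a * (1 - p_b),
    (False, True):  (1 - p_a) * p_b,
    (False, False): (1 - p_a) * (1 - p_b),
}

def condition(evidence):
    """Return (P(A|E), P(B|E)) after conditioning the joint on `evidence`."""
    z = sum(p for (a, b), p in joint.items() if evidence(a, b))
    pa = sum(p for (a, b), p in joint.items() if evidence(a, b) and a) / z
    pb = sum(p for (a, b), p in joint.items() if evidence(a, b) and b) / z
    return pa, pb

# Evidence that is exactly "not (A and B)": B drops more, and A stays ahead.
print(condition(lambda a, b: not (a and b)))  # ~(0.730, 0.189)

# Evidence that proves "not A" (and so also "not (A and B)"):
# A falls all the way to 0, a bigger drop than B's, which doesn't move at all.
print(condition(lambda a, b: not a))          # ~(0.0, 0.7)
```

So whether the more probable belief takes the smaller hit depends on exactly what the descendant node rules out, which is the point of the “not A” example.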
The tradeoff here seems to be reducing the possibility of triggering biases versus reducing the possibility that you’re fooling yourself into thinking that your thought is more precise than it really is. I would go with the first; if I felt that I was being insufficiently precise in a certain situation, I could use a couple of checks, such as seeing whether my reasoning managed to distinguish fiction from reality effectively.
On a more concrete note, I read this:
If these two beliefs were brought into conflict (say, Michael Vassar presented me with a perpetual motion machine blueprint) physics would win, because it’s more powerful.
as judging that if he estimated P(A) > P(B), then P(A) would remain greater than P(B) given !(A & B), not as saying that !(A & B) was stronger evidence against B than against A.