Is it sane for Bob and Daisy to be in such a positive or negative feedback loop? How is this resolved?
It is not sane.
If you use a belief (say A) to change the value of another belief (say B), then depending on how many times you use A, you arrive at different values. That is, if you use A or A,A,A,A as evidence, you get different results.
It would be as if:
P(B|A) <> P(B|A,A,A,A)
But the logic underlying Bayesian reasoning is classical, in which conjunction is idempotent, so that A,A,A,A <-> A, and thus, by Jaynes' requirement IIIc (see page 19 of The Logic of Science):
P(B|A) = P(B|A,A,A,A)
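To make that concrete, here is a minimal numeric sketch (toy numbers of my own choosing, not anything from the original argument): conditioning on A and on the logically equivalent A,A,A,A gives the same posterior, whereas feeding A back in as if it were fresh, independent evidence drifts the value further on every pass.

    # Toy Bayes update; the numbers are illustrative only.
    def update(prior, p_A_given_B, p_A_given_notB):
        """Posterior P(B | evidence) for one application of Bayes' rule."""
        num = p_A_given_B * prior
        return num / (num + p_A_given_notB * (1 - prior))

    p_B = 0.3                  # prior P(B)
    likelihoods = (0.8, 0.4)   # P(A|B), P(A|not B)

    # Correct: A,A,A,A is the same proposition as A, so P(A,A,A,A|B) = P(A|B)
    # and a single update is all there is.
    once = update(p_B, *likelihoods)

    # Incorrect: treating each repetition of A as new, independent evidence.
    p = p_B
    for _ in range(4):
        p = update(p, *likelihoods)

    print(once, p)   # 0.4615... vs 0.8727...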
But in this case we do not have a constant A but an A dependent on someone updating on A.
With notation A[P] for “A believed by P” e.g.
B = Bright exists
B[Bob] = Bob believes Bright exists
B[Bob][Bob] = Bob believes that Bob believes that Bright exists
we can represent the expansion of updating on the belief B = Bright exists as
P(B|X(Bob)) with
X(Bob), where X(Person) = B0(Person) & X(P)[Person] for all P != Person
that is, a belief in a) an a priori B0 of the person and b) a belief that the other persons hold this kind of belief. Solving this requires a fixed point. Using the “belief distribution” approximation (A&B)[P] ~= A[P]&B[P], this can be approximated for Bob with the following expansion (a rough code sketch of the unrolling follows the listing):
B0(Bob)
B0(Bob) & B0(Alice)[Bob]
B0(Bob) & B0(Alice)[Bob] & B0(Bob)[Alice][Bob]
B0(Bob) & B0(Alice)[Bob] & B0(Bob)[Alice][Bob] & B0(Bob)[Nob][Alice][Bob]
and so on...
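Here is a rough sketch (my own reconstruction, shown for the two-person case with only Bob and Alice; with a third person such as Nob every level branches further) of how the recursion X(Person) = B0(Person) & X(P)[Person] for P != Person unrolls into the conjuncts listed above:

    # Unroll X(person) = B0(person) & AND over q != person of X(q)[person]
    # to a finite depth, under the "belief distribution" approximation.
    def expand(person, persons, depth, suffix=""):
        """Return the conjuncts of X(person), unrolled `depth` levels deep."""
        terms = [f"B0({person}){suffix}"]
        if depth > 0:
            for other in persons:
                if other != person:
                    # X(other), as believed by `person`: append the [person] suffix.
                    terms += expand(other, persons, depth - 1, f"[{person}]{suffix}")
        return terms

    for term in expand("Bob", ["Bob", "Alice"], depth=3):
        print(term)
    # B0(Bob)
    # B0(Alice)[Bob]
    # B0(Bob)[Alice][Bob]
    # B0(Alice)[Bob][Alice][Bob]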
And this could be further approximated as just B0(Bob) by plausibly assuming that other persons have non-stricter priors than yourself (P(X0[Q]) <= P(X0[Yourself])). With these two approximations we are back where we began. I haven't yet worked out how to get a better bound, but it seems plausible that X doesn't diverge and probably converges given suitable B0s.
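To see why convergence is at least plausible, here is a toy model of my own (an assumption, not something derived from the above): treat each additional nesting level as one more, roughly independent conjunct whose probability approaches 1 with depth. The partial products are then monotone decreasing and bounded below by 0, so they converge, and they can converge to something strictly positive.

    # Toy convergence check; the per-level probabilities are assumptions of mine.
    def partial_products(level_prob, depth):
        """Probabilities of the conjunction truncated at depths 1..depth."""
        p, out = 1.0, []
        for k in range(1, depth + 1):
            p *= level_prob(k)
            out.append(p)
        return out

    # Example: the k-th nesting level contributes probability 1 - 0.5**k.
    probs = partial_products(lambda k: 1 - 0.5 ** k, depth=30)
    print(probs[0], probs[4], probs[-1])   # 0.5, ~0.298, ~0.2888 (converging)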
It is of course correct to state that one can only update once toward
P(B|X) for X=X(P) for all P in Person
but this implies infinitely many expansions in X (the loop which, I think, is what “irrational” refers to).
In order to respond… better, in order to understand what you wrote above, I need you to clarify some notation.
What is B[a][b]? Is it “b believes that a believes B” or “a believes that b believes B”?
What is B0(a)? Is it the same as B[a]?
What is X0(a)? Is it the same as X[a], so that X is a relational variable?
Is X(a) different from X[a]?
Sorry for the confusion, but I couldn’t recast your argument in any formal language whatsoever.
Sorry too. That was the risk I took by inventing a notation on the spot. The original comment used “believes X” excessively, which I basically replaced. I’m not aware of/trained in a notation for compactly writing nested probability assertions.
I’m still trying to resolve the expansion and nesting issues I totally glossed over.
[] is like a parameterized suffix. It could be bracketed as X[a][b] = (X[a])[b] if that is clearer. I just borrowed this from programming languages.
Note: There seems to be a theory of beliefs that might be applicable but which uses a different notation (it looks like X[a] == K_a X): http://en.wikipedia.org/wiki/Epistemic_modal_logic
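If it helps, here is a small reading aid (my own helper, just restating the bracket convention above and the K_a correspondence): the outermost suffix is the outermost believer.

    import re

    # Turn e.g. "B[Bob][Alice]" into an English reading and K-notation,
    # using the convention X[a][b] = (X[a])[b].
    def read(expr):
        m = re.fullmatch(r"(\w+)((?:\[\w+\])*)", expr)
        base, suffixes = m.group(1), re.findall(r"\[(\w+)\]", m.group(2))
        english = base
        for who in suffixes:                      # inner suffix first, outer last
            english = f"{who} believes that ({english})"
        modal = "".join(f"K_{who} " for who in reversed(suffixes)) + base
        return english, modal

    print(read("B[Bob][Alice]"))
    # ('Alice believes that (Bob believes that (B))', 'K_Alice K_Bob B')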
So what does B[a] mean? B[a] means we are reasoning about the probability assignment P_a(B) of the actor a, and we ask for variants of P(P_a(B) = p).
First: I glossed over a lot of the required P(...), assuming (in my eagerness to address the issue) that it’d be clear from context. In general, instead of writing e.g.
P((A & B)[p]) ~= P(A[p] & B[p])
I just wrote
(A & B)[P] ~= A[P] & B[P]
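As an aside on why this is only an approximation (a toy reading of my own, taking “P believes X” as “P’s credence in X clears some threshold”): an agent can clear the threshold for A and for B separately while the conjunction A & B falls just short.

    # Toy illustration; threshold and credences are made up but coherent
    # (P(A & B) must lie between P(A) + P(B) - 1 = 0.8 and min(P(A), P(B)) = 0.9).
    threshold = 0.85
    credence = {"A": 0.9, "B": 0.9, "A&B": 0.82}

    believes = {prop: c >= threshold for prop, c in credence.items()}
    print(believes["A"] and believes["B"])   # True : A[p] & B[p] holds
    print(believes["A&B"])                   # False: (A & B)[p] fails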
No. The 0 was meant to indicate an a priori (which was hidden in the fragment “a) an a priori B0 of the person”). Instead of writing the probability that Bob’s prior probability of B is b (needed in the original post) as
P_{Bob}(B) = b
I just wrote
B0(Bob)
That is, informally, I represented my belief in another actor’s prior p for some fact F as a fact in itself (calling it F0), instead of representing all beliefs of that actor relative to it (P(F)=p).
This allowed me to simplify the never-written-out long form of P(B|X(Bob)). I’m still working on this.
Yes. For all prior belief expressions X0, it is plausible to approximate other persons’ prior probabilities as less strict than your own.
Yes. X(a) is the X of person a. This is mostly relevant for the priors.
What I now see, after trying to clean up all the issues I glossed over, is that this possibly doesn’t make sense, at least not in this incomplete form. Please stay tuned.
I will!
The main problem (not in your post, but in the general discussion) seems to me to be that there’s no way to talk about probabilities and beliefs clearly and dependently: after all, a belief is the assignment of a probability, but such assignments cannot be directly targeted in the base logic.
Phew. And I feared that I came across as writing totally unintelligible stuff. I promise that I will put some more effort into the notation.
I am not certain that it’s the same A. Suppose I say to you: here’s a book that proves that P=NP. You go and read it, and it’s full of math, and you can’t fully process it. Later, you come back and read it again, and this time you actually are able to fully comprehend it. Even later you come back again, and not only comprehend it, but are able to prove some new facts, using no external sources, just your mind. Those are not all the same “A”. So, you may have some evidence for/against a sorcerer, but not be able to accurately estimate the probability. After some reflection, you derive new facts, and then update again. Upon further reflection, you derive more facts, and update. Why should this process stop?
I think we are talking about different things.
I proved only that Bob cannot update his belief in Bright on the sole evidence “Bob believes in Bright”. This is a perfectly defined cognitive state, totally accessible to Bob, and unique. Therefore Bob cannot update on it.
On the other hand, if from a belief Bob gathers new evidence, then this is clearly another cognitive state, quite different from the previous one, and so there’s no trouble in assigning different probabilities (provided that “Bob believes in Bright” doesn’t mean that he assigns probability 1 to Bright).
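A minimal numeric restatement of that point (toy numbers of my own): evidence an agent already holds with certainty, such as his own current belief state, cannot move his probability, while the very same evidence can move an outsider like Daisy who was not sure of it.

    # Toy check; all numbers are illustrative.
    def condition(p_B, p_E_given_B, p_E_given_notB):
        """P(B | E) via Bayes' rule."""
        num = p_E_given_B * p_B
        return num / (num + p_E_given_notB * (1 - p_B))

    # Bob: E = "Bob believes in Bright" is his own, fully accessible state,
    # so for him P(E | B) = P(E | not B) = 1 and conditioning is a no-op.
    print(condition(0.6, 1.0, 1.0))   # 0.6 -- unchanged

    # Daisy: she wasn't sure Bob believes, and takes his belief to be likelier
    # if Bright exists, so for her E genuinely shifts P(B).
    print(condition(0.3, 0.9, 0.5))   # 0.435...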