A nice way of thinking about it is that the robot can do unlimited probabilistic logic, but it only takes finite time because it’s only working from a finite pool of proven theorems. When doing the probabilistic logic, the statements (e.g. A, B) are treated as atomic. So you can have effective inconsistencies, in that you can have an atom that says A, and an atom that says B, and an atom that effectively says ‘AB’, and unluckily end up with P(‘AB’)>P(A)P(B). But you can’t know you have inconsistencies in any way that would lead to mathematical problems. Once you prove that P(‘AB’) = P(AB), where removing the quotes means breaking up the atom into an AND statement, then you can do probabilistic logic on it, and the maximum entropy distribution will no longer be effectively inconsistent.
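To make the picture concrete, here is a minimal sketch of the maximum-entropy story (illustrative only: the numbers P(A) = 0.5, P(B) = 0.5, P(‘AB’) = 0.4 and the scipy-based solver are assumptions, not anything from the discussion). The three atoms are A, B, and the opaque atom ‘AB’; "proven theorems" enter as linear constraints on the distribution over truth assignments, and proving ‘AB’ ↔ (A ∧ B) enters as a restriction of the support.

import itertools

import numpy as np
from scipy.optimize import minimize


def maxent(n_atoms, constraints, support=None):
    """Maximum-entropy distribution over truth assignments to n_atoms.

    constraints: (indicator, target) pairs, where indicator maps a truth
    assignment (a tuple of bools) to True/False and target is the required
    probability of that event. support: optional predicate keeping only
    some assignments (used here to encode a proven biconditional).
    """
    worlds = [w for w in itertools.product([False, True], repeat=n_atoms)
              if support is None or support(w)]
    n = len(worlds)

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return float(np.sum(p * np.log(p)))

    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
    for indicator, target in constraints:
        mask = np.array([float(indicator(w)) for w in worlds])
        cons.append({"type": "eq",
                     "fun": lambda p, m=mask, t=target: m @ p - t})

    result = minimize(neg_entropy, np.full(n, 1.0 / n),
                      bounds=[(0.0, 1.0)] * n, constraints=cons,
                      method="SLSQP")
    return worlds, result.x


def prob(worlds, p, event):
    return sum(pi for w, pi in zip(worlds, p) if event(w))


# Atoms: index 0 = A, index 1 = B, index 2 = the opaque atom 'AB'.
theorems = [(lambda w: w[0], 0.5),    # "proven": P(A) = 0.5
            (lambda w: w[1], 0.5),    # "proven": P(B) = 0.5
            (lambda w: w[2], 0.4)]    # "proven": P('AB') = 0.4

# Before proving 'AB' <-> (A and B): maxent treats the three atoms as
# independent, so the conjunction A-and-B gets 0.25 while the atom that
# "effectively says" the same thing gets 0.4 -- an effective inconsistency.
worlds, p = maxent(3, theorems)
print(prob(worlds, p, lambda w: w[0] and w[1]))   # ~0.25
print(prob(worlds, p, lambda w: w[2]))            # ~0.40

# After the proof: keep only worlds where 'AB' agrees with (A and B).
worlds, p = maxent(3, theorems,
                   support=lambda w: w[2] == (w[0] and w[1]))
print(prob(worlds, p, lambda w: w[0] and w[1]))   # ~0.40, consistent now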
Oh, I see. Do you know whether you can get different answers by atomizing the statements differently? For instance, will the same information always give the same resulting probabilities if the atoms are A and B as it would if the atoms are A and A-xor-B?
P(‘AB’)>P(A)P(B)
Not a problem if A and B are correlated. I assume you mean P(‘AB’)>min(P(A), P(B))?
Ah, right. Or even P(‘AB’)>P(A).
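As a sanity check on the correction (standard probability, illustrative numbers assumed): P(AB) > P(A)P(B) holds in a perfectly consistent joint distribution whenever A and B are positively correlated, whereas P(AB) > P(A) is impossible for a genuine conjunction, since every AB-world is an A-world.

# A consistent joint distribution over (A, B) with assumed numbers.
p = {(True, True): 0.4, (True, False): 0.1,
     (False, True): 0.1, (False, False): 0.4}
P_A  = sum(v for (a, b), v in p.items() if a)   # 0.5
P_B  = sum(v for (a, b), v in p.items() if b)   # 0.5
P_AB = p[(True, True)]                          # 0.4
assert P_AB > P_A * P_B          # positive correlation: fine
assert P_AB <= min(P_A, P_B)     # the bound a real conjunction must obey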
You can’t get different probabilities by atomizing things differently; all the atoms “already exist.” But if you prove different theorems, or theorems about different things, then you can get different probabilities.
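Here is a toy check of the invariance claim in the simplest case (one assumed piece of information, P(A) = 0.7; this illustrates only the special case where maxent factorizes, it is not a proof of the general claim). With just a marginal constraint on A, the maximum-entropy distribution makes the atoms independent, so both atomizations can be written in closed form and compared world by world over the underlying (A, B) assignments.

import itertools

P_A = 0.7  # the single assumed piece of information, same in both cases

# Atomization 1: atoms A and B. Maxent: independent, with P(B) = 1/2.
dist1 = {(a, b): (P_A if a else 1 - P_A) * 0.5
         for a, b in itertools.product([False, True], repeat=2)}

# Atomization 2: atoms A and X = A xor B. Maxent: independent, P(X) = 1/2.
# Translate each (A, X) world back to an (A, B) world via B = A xor X.
dist2 = {(a, a != x): (P_A if a else 1 - P_A) * 0.5
         for a, x in itertools.product([False, True], repeat=2)}

assert all(abs(dist1[w] - dist2[w]) < 1e-12 for w in dist1)
print("Both atomizations assign the same probability to every event.")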