This is cool, but seems underspecified. Could you write a program that carries out this reasoning with respect to a fairly broad class of problems?
If you have an inconsistent set of probability assignments, the result you get when you resolve the inconsistencies depends on the order in which you resolve them. For example, given P(A)=1/2, P(B)=1/2, P(AB)=1/2, and P(A¬B)=1/2, you could resolve this by deriving P(A¬B)=0 from the first three, or by deriving P(AB)=0 from the first two and the last one. Both of these answers seem obviously wrong. Does your method have a consistent way of resolving that sort of problem?
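A minimal sketch of the two resolutions, just to make the arithmetic explicit (both lean on the identity P(A) = P(AB) + P(A¬B); the variable names are purely illustrative):

```python
# Inconsistent assignments: P(A) = P(B) = P(AB) = P(A¬B) = 1/2.
# They violate P(A) = P(AB) + P(A¬B), so something has to give.
P_A, P_B, P_AB, P_AnotB = 0.5, 0.5, 0.5, 0.5

# Resolution 1: keep P(A), P(B), P(AB) and recompute P(A¬B).
resolved_AnotB = P_A - P_AB      # = 0.0
# Resolution 2: keep P(A), P(B), P(A¬B) and recompute P(AB).
resolved_AB = P_A - P_AnotB      # = 0.0

# Each resolution forces the other probability to 0, so the answer
# depends entirely on which assignment you choose to drop.
print(resolved_AnotB, resolved_AB)
```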
A nice way of thinking about it is that the robot can do unlimited probabilistic logic, but it only takes finite time because it’s only working from a finite pool of proven theorems. When doing the probabilistic logic, the statements (e.g. A, B) are treated as atomic. So you can have effective inconsistencies, in that you can have an atom that says A, and an atom that says B, and an atom that effectively says ‘AB’, and unluckily end up with P(‘AB’)>P(A)P(B). But you can’t know you have inconsistencies in any way that would lead to mathematical problems. Once you prove that P(‘AB’) = P(AB), where removing the quotes means breaking up the atom into an AND statement, then you can do probabilistic logic on it, and the maximum entropy distribution will no longer be effectively inconsistent.
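A toy sketch of how I picture that working, assuming the robot maximizes entropy over truth assignments to its atoms subject to the probabilities it has already established. The max_entropy helper, the third atom standing in for ‘AB’, and the number 0.4 are all illustrative assumptions, not the actual algorithm:

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

# Three atoms: A, B, and 'AB' (the third is treated as opaque).
worlds = list(product([0, 1], repeat=3))  # truth assignments (A, B, 'AB')

def max_entropy(constraints):
    """Max-entropy distribution over `worlds`, subject to sum(p)=1 and
    each (indicator_fn, target) constraint meaning E[indicator] = target."""
    n = len(worlds)
    cons = [{'type': 'eq', 'fun': lambda p: p.sum() - 1.0}]
    for fn, target in constraints:
        mask = np.array([float(fn(w)) for w in worlds])
        cons.append({'type': 'eq', 'fun': lambda p, m=mask, t=target: m @ p - t})
    neg_entropy = lambda p: np.sum(p * np.log(np.clip(p, 1e-12, 1.0)))
    res = minimize(neg_entropy, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

def prob(p, event):
    return sum(pi for pi, w in zip(p, worlds) if event(w))

# Established so far: P(A) = 1/2, P(B) = 1/2, P('AB') = 0.4.
base = [(lambda w: w[0], 0.5), (lambda w: w[1], 0.5), (lambda w: w[2], 0.4)]

p = max_entropy(base)
print(prob(p, lambda w: w[0] and w[1]))   # ~0.25 -- 'AB' and A-and-B come apart

# After proving 'AB' <-> (A and B): no mass on worlds where they disagree.
theorem = [(lambda w: w[2] != (w[0] and w[1]), 0.0)]
p = max_entropy(base + theorem)
print(prob(p, lambda w: w[0] and w[1]))   # ~0.4 -- now pinned to P('AB')
```

Nothing is mathematically broken before the theorem; the joint over atoms is perfectly coherent, and only breaking ‘AB’ open makes the extra constraint bite.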
Oh, I see. Do you know whether you can get different answers by atomizing the statements differently? For instance, will the same information always give the same resulting probabilities if the atoms are A and B as it would if the atoms are A and A-xor-B?
P(‘AB’)>P(A)P(B)
Not a problem if A and B are correlated. I assume you mean P(‘AB’)>min(P(A), P(B))?
Ah, right. Or even P(‘AB’)>P(A).
You can’t get different probabilities by atomizing things differently; all the atoms “already exist.” But if you prove different theorems, or theorems about different things, then you can get different probabilities.
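A quick numerical check of that, under the same illustrative max-entropy picture as above: express the same two facts once with atoms (A, B) and once with atoms (A, X) where X = A-xor-B, and compare. The numbers 0.6 and 0.3 are arbitrary:

```python
import numpy as np

def entropy(ps):
    ps = np.array(ps)
    return -np.sum(ps * np.log(ps + 1e-12))

# Atomization 1: atoms (A, B).  Information: P(A) = 0.6, P(B) = 0.3.
# One free parameter t = P(AB); the rest of the joint is determined.
def joint_ab(t):
    return [t, 0.6 - t, 0.3 - t, 0.1 + t]            # AB, A¬B, ¬AB, ¬A¬B

ts = np.linspace(1e-6, 0.3 - 1e-6, 10001)
best_t = max(ts, key=lambda t: entropy(joint_ab(t)))
print(best_t)                                         # P(AB) ≈ 0.18

# Atomization 2: atoms (A, X) with X = A-xor-B.  The same information,
# re-expressed: P(A) = 0.6, P(A xor X) = 0.3.  Free parameter s = P(AX).
def joint_ax(s):
    return [s, 0.6 - s, s - 0.3, 0.7 - s]             # AX, A¬X, ¬AX, ¬A¬X

ss = np.linspace(0.3 + 1e-6, 0.6 - 1e-6, 10001)
best_s = max(ss, key=lambda s: entropy(joint_ax(s)))
print(0.6 - best_s)                                   # P(AB) = P(A¬X) ≈ 0.18 again
```

Same probabilities either way; what changes the answer is proving new theorems that tie the atoms together, not the choice of atoms.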