A complex proposition P (long minimum message length, MML) can have a complex negation (also with long MML), and you’d have no reason to assume you’d be presented with P rather than non-P. The positive proposition P is unlikely if its MML is long, but the proposition non-P, despite its long MML, is then likely to be true.
If you have no reason to believe you’re more likely to be presented with P than with non-P, then my understanding is that they cancel each other out.
But now I’m not so sure anymore.
edit: I’m now pretty sure again my initial understanding was correct and that the counterarguments are merely cached thoughts.
I think often “complicated proposition” is used to mean “large conjunction” e.g. A&B&C&D&...
In this case its negation would be a large disjunction, and large disjunctions, while in a sense complex (it may take a lot of information to specify one) usually have prior probabilities close to 1, so in this case complicated statements definitely don’t get probability 0.5 as a prior. “Christianity is completely correct” versus “Christianity is incorrect” is one example of this.
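(A minimal illustration of that contrast, hypothetically assuming n independent conjuncts, each with prior 0.5.)

```python
# Prior of a large conjunction vs. its negation (the large disjunction),
# assuming n independent conjuncts, each with prior 0.5.
n = 10
p_conj = 0.5 ** n       # ~0.001: the conjunction is a priori unlikely
p_disj = 1 - p_conj     # ~0.999: its negation is a priori near-certain
print(p_conj, p_disj)
```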
On the other hand, if by ‘complicated proposition’ you just mean something whose truth depends on lots of factors you don’t understand well, and which is not itself necessarily a large conjunction or otherwise carrying burdensome details, then you may be right about probability 0.5. “Increasing government spending will help the economy” versus “increasing government spending will harm the economy” seems like an example of this.
My claim is slightly stronger than that. My claim is that the correct prior probability of any arbitrary proposition of which we know nothing is 0.5. I’m not restricting my claim to propositions which we know are complex and depend on many factors which are difficult to gauge (as with your economy example).
I think I mostly agree. It just seemed like the discussion up to that point had mostly been about complex claims, and so I confined myself to them.
However, I think I cannot fully agree about any claim of which we know nothing. For instance, I might know nothing about A, nothing about B|A, and nothing about A&B, but for me to simultaneously hold P(A) = 0.5, P(B|A) = 0.5 and P(A&B) = 0.5 would be inconsistent.
“B|A” is not a proposition like the others, despite appearing as an input in the P() notation. P(B|A) simply stands for P(A&B)/P(A). So you never “know nothing about B|A”, and you can consistently hold that P(A) = 0.5 and P(A&B) = 0.5, with the consequence that P(B|A) = 1.
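(Spelling out the arithmetic: P(B|A) = P(A&B)/P(A) = 0.5/0.5 = 1. The two stipulated values force the conditional; it is not a third independent unknown.)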
The notation P(B|A) is poor. A better notation would be P_A(B); it’s a different function with the same input, not a different input into the same function.
Fair enough, although I think my point stands: it would be fairly silly if you could deduce P(A|B) = 1 simply from the fact that you know nothing about A and B.
Well, you can’t—you would have to know nothing about B and A&B, a very peculiar situation indeed!
EDIT: This is logically delicate, but perhaps can be clarified via the following dialogue:
-- What is P(A)?
-- I don’t know anything about A, so 0.5
-- What is P(B)?
-- Likewise, 0.5
-- What is P(C)?
-- 0.5 again.
-- Now compute P(C)/P(B)
-- 0.5/0.5 = 1
-- Ha! Gotcha! C is really A&B; you just said that P(A|B) is 1!
-- Oh; well in that case, P(C) isn’t 0.5 any more: P(C|C=A&B) = 0.25.
As per my point above, we should think of Bayesian updating as the function P varying, rather than its input.
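(A quick numeric check of the dialogue’s last step, as a sketch: treat the four truth assignments to A and B as equally likely, which is exactly the implicit independence assumption at work.)

```python
from itertools import product

# Hypothetical uniform prior: the four truth assignments to (A, B)
# are treated as equally likely (this is the independence assumption).
worlds = list(product([True, False], repeat=2))

# C = A & B holds in exactly one of the four worlds.
p_C = sum(1 for a, b in worlds if a and b) / len(worlds)
print(p_C)  # 0.25
```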
I believe that this dialogue is logically confused, as I argue in this comment.
This is the same confusion I was originally having with Zed. Both you and he appear to consider knowing the explicit form of a statement to be knowing something about the truth value of that statement, whereas I think you can know nothing about a statement even if you know what it is, so you can update on finding out that C is a conjunction.
Given that we aren’t often asked to evaluate the truth of statements without knowing what they are, I think my sense is more useful.
Of course, we almost never reach this level of ignorance in practice, which makes this the type of abstract academic point that people all-too-characteristically have trouble with. The step of calculating the complexity of a hypothesis seems “automatic”, so much so that it’s easy to forget that there is a step there.
Did you mean “can’t”? Because “can” is my position (as illustrated in the dialogue!).
This exemplifies the point in my original comment:
If you know nothing of A and B then P(A) = P(B) = 0.5, P(B|A) = P(A|B) = 0.5 and P(A & B) = P(A|B) * P(B) = 0.25
You do know something of the conjunction of A and B (because you presume they’re independent) and that’s how you get to 0.25.
I don’t think there’s an inconsistency here.
How do you know something about the conjunction? Have you manufactured evidence from a vacuum?
I don’t think I am presuming them independent, I am merely stating that I have no information to favour a positive or negative correlation.
Look at it another way, suppose A and B are claims that I know nothing about. Then I also know nothing about A&B, A&(~B), (~A)&B and (~A)&(~B) (knowledge about any one of those would constitute knowledge about A and B). I do not think I can consistently hold that those four claims all have probability 0.5.
If you know nothing about A and B, then you know something about A&B. You know it is the conjunction of two things you know nothing about.
Since A=B is a possibility, the use of “two things” here is a bit specious. You’re basically saying you know A&B, but that could stand for anything at all.
You know that either A and B are highly correlated (one way or the other) or P(A&B) is close to P(A) P(B).
Yeah, and I know that A is the disjunction of A&B and A&(~B), and that it is the negation of the negation of a proposition I know nothing about, and lots of other things. If we count reading a statement and analysing its logical consequences as knowledge, then we know infinitely many things about everything.
In that case it’s clear where we disagree because I think we are completely justified in assuming independence of any two unknown propositions. Intuitively speaking, dependence is hard. In the space of all propositions the number of dependent pairs of propositions is insignificant compared to the number of independent pairs. But if it so happens that the two propositions are not independent then I think we’re saved by symmetry.
There are a number of different combinations of A and ~A and B and ~B but I think that their conditional “biases” all cancel each other out. We just don’t know if we’re dealing with A or with ~A, with B or with ~B. If for every bias there is an equal and opposite bias, to paraphrase Newton, then I think the independence assumption must hold.
Suppose you are handed three closed envelopes each containing a concealed proposition. Without any additional information I think we have no choice but to assign each unknown proposition probability 0.5. If you then open the third envelope and if it reads “envelope-A & envelope-B” then the probability of that proposition changes to 0.25 and the other two stay at 0.5.
If not 0.25, then which number do you think is correct?
Okay, in that case I guess I would agree with you, but it seems a rather vacuous scenario. In real life you are almost never faced with the dilemma of having to evaluate the probability of a claim without even knowing what that claim is; it appears in this case that when you assign a probability of 0.5 to an envelope you are merely assigning 0.5 probability to the claim that “whoever filled this envelope decided to put a true statement in”.
When, as in almost all epistemological dilemmas, you can actually look at the claim you are evaluating, then even if you know nothing about the subject area you should still be able to tell a conjunction from a disjunction. I would never, ever apply the 0.5 rule to an actual political discussion, for example, where almost all propositions are large logical compounds in disguise.
This can’t be right. An unspecified hypothesis can contain as many sentence letters and operators as you like; we still don’t have any information about its content and so can’t assign any probability other than 0.5. Take any well-formed formula in propositional logic. You can make that formula say anything you want by the way you assign semantic content to the sentence letters (this is propositional logic, not the predicate calculus, where one can specify independence). We have conventions where we don’t do silly things like say “A AND ~B” and then have B come out semantically equivalent to ~A. It is also true that two randomly chosen hypotheses from a large set of mostly independent hypotheses are likely to be independent. But this is a judgment that requires knowing something about the hypothesis, which we don’t, by stipulation. Note, it isn’t just causal dependence we’re worried about here: for all we know, A and B are semantically identical. By stipulation we know nothing about the system we’re modeling: the ‘space of all propositions’ could be very small.
The answer for all three envelopes is, in the case of complete ignorance, 0.5.
I think I agree completely with all of that. My earlier post was meant as an illustration that once you say C = A & B that you’re no longer dealing with a state of complete ignorance. You’re in complete ignorance of A and B, but not of C. In fact, C is completely defined as being the conjunction of A and B. I used the illustration of an envelope because as long as the envelope is closed you’re completely ignorant about its contents (by stipulation) but once you open it that’s no longer the case.
So the probability that all three envelopes happen to contain a true hypothesis/proposition is 0.125 based on the assumption of independence. Since you said “mostly independent” does that mean you think we’re not allowed to assume complete independence? If the answer isn’t 0.125, what is it?
edit:
If your answer to the above is “still 0.5” then I have another scenario. You’re in total ignorance of A. B is the proposition that a roll of a regular die comes up 6. What’s the probability that A & B are both true? I’d say it has to be 1/12, even though it’s possible that A and B are not independent.
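(For reference, the arithmetic behind the 1/12; it holds only under the independence assumption being debated here.)

```python
# The 1/12 claim: A at the total-ignorance prior, B = "a regular die shows 6".
p_A = 0.5            # prior for A under total ignorance
p_B = 1 / 6          # chance of rolling a 6
print(p_A * p_B)     # 0.0833... = 1/12, assuming A and B are independent
```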
If you don’t know what A is and you don’t know what B is and C is the conjunction of A and B, then you don’t know what C is. This is precisely because one cannot assume the independence of A and B. If you stipulate independence then you are no longer operating under conditions of complete ignorance. Strict, non-statistical independence can be represented as A!=B. A!=B tells you something about the hypothesis: it’s a fact about the hypothesis that we didn’t have in complete ignorance. This lets us give odds other than 1:1. See my comment here.
With regard to the scenario in the edit, the probability of A & B is 1/6 because we don’t know anything about independence. Now, you might say: “Jack, what are the chances A is dependent on B?! Surely most cases will involve A being something that has nothing to do with dice, much less something closely related to the throw of that particular die.” But this kind of reasoning involves presuming things about the domain A purports to describe. The universe is really big and complex, so we know there are lots of physical events A could conceivably describe. But what if the universe consisted only of one regular die that rolls once! If that is the only variable then A will equal B. That we don’t live in such a universe, or that this universe seems odd or unlikely, are reasonable assumptions only because they’re based on our observations. But in the case of complete ignorance, by stipulation, we have no such observations. By definition, if you don’t know anything about A then you can’t know more about A&B than you know about B.
Complete ignorance just means 0.5; it’s just necessarily the case that when one specifies the hypothesis, one provides analytic insight into it, which can easily change the probability. That is, any hypothesis that can be distinguished from an alternative hypothesis will give us grounds for ascribing a new probability to that hypothesis (based on the information used to distinguish it from alternative hypotheses).
Thanks for the explanation, that helped a lot. I expected you to answer 0.5 in the second scenario, and I thought your model was that total ignorance “contaminated” the model such that something + ignorance = ignorance. Now I see this is not what you meant. Instead it’s that something + ignorance = something. And then likewise something + ignorance + ignorance = something according to your model.
The problem with your model is that it clashes with my intuition (I can’t find fault with your arguments). I describe one such scenario here.
My intuition is that the probability of these two statements should not be the same:
A. “In order for us to succeed, one of 12 things needs to happen”
B. “In order for us to succeed, all of these 12 things need to happen”
In one case we’re talking about a disjunction of 12 unknowns, and in the other about a conjunction. Even if some of the “things” are somewhat correlated, that shouldn’t affect the total estimate much. My intuition says P(A) = 1 − 0.5^12 and P(B) = 0.5^12. Worlds apart! As far as I can tell you would say that in both cases the best estimate we can make is 0.5. I introduce the assumption of independence (I don’t stipulate it) to fix this problem. Otherwise the math would lead me down a path that contradicts common sense.
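(A sketch of those two estimates, under that assumed, not stipulated, independence of the 12 unknowns.)

```python
# Disjunction vs. conjunction of 12 unknowns, each at prior 0.5, assumed independent.
n = 12
p_one_of = 1 - 0.5 ** n   # ~0.99976: "one of 12 things needs to happen"
p_all_of = 0.5 ** n       # ~0.00024: "all 12 things need to happen"
print(p_one_of, p_all_of)
```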
The number of possible probability distributions is far larger than the two induced by the belief that P and the belief that ~P.
If at this point you don’t agree that the probability is 0.5 I’d like to hear your number.
P(A) = 2^-K(A).
As for ~A, see: http://lesswrong.com/lw/vs/selling_nonapples/ (The negation of a complex proposition is much vaguer, and hence more probable (and useless))
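(A toy illustration of that asymmetry, assuming a hypothetical 20-bit complexity for A and treating 2^-K(A) as an unnormalized prior.)

```python
# Sketch of a complexity prior P(A) = 2^-K(A), with K in bits (assumed, unnormalized).
K_A = 20                     # hypothetical complexity of A, in bits
p_A = 2 ** -K_A              # ~9.5e-07: a complex positive claim is a priori unlikely
p_not_A = 1 - p_A            # ~0.999999: its negation is nearly certain, despite
print(p_A, p_not_A)          # ~A having a description about as long as A's
```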