If A, B, and C are binary, the values of A and B are drawn from independent fair coins, and C = A XOR B, then measuring C = 1 constrains (A, B) to be either (0, 1) or (1, 0), but does not constrain A alone at all.
Before we conditioned on C = 1, every value of the joint variable (A, B) had probability 0.25, and each value of the single variable A had probability 0.5. After we condition on C = 1, the values (0, 1) and (1, 0) of (A, B) assume probability 0.5, the values (0, 0) and (1, 1) assume probability 0, and each value of the single variable A remains at probability 0.5.
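A quick brute-force check of those numbers (a sketch I'm adding, not part of the original argument; the names `joint` and `posterior` are my own):

```python
from itertools import product

# Joint prior over (A, B): independent fair coins, so 0.25 each.
joint = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

# Condition on C = A XOR B = 1: keep only consistent pairs, then renormalize.
evidence = {ab: p for ab, p in joint.items() if ab[0] ^ ab[1] == 1}
z = sum(evidence.values())
posterior = {ab: p / z for ab, p in evidence.items()}

print(posterior)  # {(0, 1): 0.5, (1, 0): 0.5}
print(sum(p for (a, _), p in posterior.items() if a == 1))  # P(A=1 | C=1) = 0.5
```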
By conditioning on C = 1, you learn more about the joint variable (A, B) than about the single variable A (your posterior for (A, B) changed, but your posterior for A did not), but that is not the same thing as the joint variable (A, B) being more plausible than the single variable A. In fact, it is still the case that p(A & B | C) ≤ p(A | C) for all values of A and B, since the event (A = a, B = b) implies the event A = a.
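The inequality itself can be checked by the same enumeration (again my own sketch; the marginal for A is the sum of joint posteriors over B, so it can never be smaller than any single joint term):

```python
from itertools import product

# Posterior over (A, B) given C = A XOR B = 1.
posterior = {(a, b): (0.5 if a ^ b == 1 else 0.0)
             for a, b in product([0, 1], repeat=2)}

# P(A=a, B=b | C=1) <= P(A=a | C=1) for every pair, since the
# marginal sums the joint term together with other non-negative terms.
for (a, b), p_joint in posterior.items():
    p_marginal = sum(q for (a2, _), q in posterior.items() if a2 == a)
    assert p_joint <= p_marginal
print("p(A,B | C=1) <= p(A | C=1) holds for all values")
```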
edit: others below said the same, and often better.