In which sense is Y information?
It’s not even guaranteed to be true (though you can verify it yourself much more easily than you could verify X directly).
Compare this with the classic conjunction-fallacy result. In experiments, people routinely judged that:
P(next year the Soviet Union will invade Poland, and the United States will break off diplomatic relations with the Soviet Union) > P(next year the United States will break off diplomatic relations with the Soviet Union).
Here X = “the United States will break off diplomatic relations with the Soviet Union” and Y = “the Soviet Union will invade Poland”.
Wouldn’t your reasoning pretty much endorse what people were doing, with Y (one possible scenario leading to X) being the new information?
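For reference, the inequality people violate here is a one-line consequence of the product rule; a short derivation (standard probability, nothing specific to this example):

```latex
% A conjunction can never be more probable than either of its conjuncts:
\[
  P(X \wedge Y) \;=\; P(Y \mid X)\,P(X) \;\le\; P(X),
  \qquad\text{because } 0 \le P(Y \mid X) \le 1 .
\]
```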
Hmmm, I now think the existence of Y is actually a distraction. The underlying process is:
produce estimate for P(X) ⇒ find proof of X ⇒ P(X) increases
If estimates are allowed to change in this manner, then of course they are also allowed to change when someone else shows you a proof of X (after all, P(X)=P(X) is also a law of probability). If they are not allowed to change in this manner, then subjective Bayesianism applied to mathematical statements collapses anyway.
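As a minimal sketch of that process (purely illustrative; the `Agent` class, the prior of 0.6, and the `observe_proof` step are inventions for this sketch, not anything from the discussion above): an agent keeps a provisional estimate of P(X) made under limited computing time, and moves it to certainty once a proof is verified, regardless of who produced the proof.

```python
# Toy model of: estimate P(X) => find (or be shown) a proof of X => P(X) increases.
# Everything here is hypothetical and only meant to illustrate the process above.

class Agent:
    def __init__(self):
        self.credence = {}  # statement -> current subjective probability

    def estimate(self, statement, prior):
        """Record an initial guess, made under limited computing time."""
        self.credence[statement] = prior

    def observe_proof(self, statement, proof_checks_out):
        """Update after examining a proof, whether self-found or communicated.

        The source of the proof does not matter: once it is verified,
        the estimate moves to certainty either way.
        """
        if proof_checks_out:
            self.credence[statement] = 1.0


agent = Agent()
agent.estimate("X", 0.6)        # initial estimate of P(X), before any proof
agent.observe_proof("X", True)  # a verified proof is found or shown
print(agent.credence["X"])      # 1.0 -- P(X) increased, as in the process above
```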
From a purely human psychological perspective: When someone tells me a proof of a theorem, it feels like I’ve learned something. When I figure one out myself, it feels like I figured something out, as if I’d learned information through interacting with the natural world.
Do you mean to tell me that no one learns anything in math class? Or that they learn something, but the thing they learn isn’t information?
Caveat: formalizing these concepts sometimes requires deviating from human experience. I don’t think the concept of information has been formalized, by Bayesians or frequentists, in a way that deals with the problem of acting with limited computing time, a.k.a. the problem of logical uncertainty.
I think we almost agree already ;-)
Would you agree that the “P(X)” you’re describing is more like “some person’s answer when asked question X” than “the probability of X”?
The main difference between the two is that if “X” and “Y” are the same logical outcome, then their probabilities are necessarily the same, whereas an actual person can reply differently depending on how the question was formulated.
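In symbols (just restating the point above, not adding anything new): a probability assignment attaches to the proposition itself, so logically equivalent formulations must receive the same value, while a person’s verbal answer depends on the phrasing and need not.

```latex
% Equivalent propositions must receive equal probability,
% even though a respondent may answer the two phrasings differently.
\[
  (X \leftrightarrow Y \text{ is a logical truth}) \;\Longrightarrow\; P(X) = P(Y)
\]
```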
If you’re interested in this subject, I recommend reading about epistemic modal logic: not necessarily for its solutions, but the people working on it are clearly aware of this problem and can describe it better than I can.