It is perfectly legal under Bayes to learn nothing from your observations.
Right, in degenerate cases, when there’s nothing to be learned, the two extremes of learning nothing and everything coincide.
Or learn in the wrong direction, or sideways, or whatever.
To the extent that I understand your navigational metaphor, I disagree with this statement. Would you kindly explain?
There is no unique “Bayesian belief”.
If you mean to say that there’s no unique justifiable prior, I agree. The prior in our setting is basically what you assume you know about the dynamics of the system—see my reply to RichardKennaway.
However, given that prior and the agent’s observations, there is a unique Bayesian belief, the one I defined above. That’s pretty much the whole point of Bayesianism, the existence of a subjectively objective probability.
If you had the “right” prior, you would find that you would have to do very little updating, because the right prior is already right.
This is true in a constant world, or with regard to parts of the world which are constant. And mind you, it’s true only with high probability: there’s always the slight chance that the sky is not, after all, blue.
But in a changing world, where part of the change is revealed to you through new observations, you have to keep pace. The right prior was right yesterday, today there’s new stuff to know.
Right, in degenerate cases, when there’s nothing to be learned, the two extremes of learning nothing and everything coincide.
In the case where your prior says “the past is not informative about the future”, you learn nothing. A degenerate prior, not a degenerate situation.
To the extent that I understand your navigational metaphor, I disagree with this statement. Would you kindly explain?
Imagine a bowl of jellybeans. You put in ten red and ten white. You take out three, all of which are red; the probability of getting a red on the next draw is 7⁄17.
Take another bowl, and have a monkey toss in red beans and white beans with 50% probability each. You draw three reds; the draw probability is still 50% (because you had a max-entropy prior).
Take another bowl. Beans were loaded in with unknown probabilities. You draw three reds; your draw probability is now 4⁄5 red.
See how, depending on your assumptions, you learn in different directions from the same observations? Hence you can learn in the wrong direction with a bad prior.
Learning sideways is a bit of metaphor-stretching, but if you like, you can imagine that observing three red beans proves the existence of God under some prior.
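Here is a minimal Python sketch of the three bowls (my own check of the numbers, assuming the third bowl means a uniform prior over the red fraction, i.e. Laplace’s rule of succession; the function names are just for illustration):

    from fractions import Fraction

    def known_composition(red, white, reds_drawn):
        # Bowl 1: exact composition known; draws are without replacement.
        return Fraction(red - reds_drawn, red + white - reds_drawn)

    def fixed_coin_flips(p_red, reds_drawn):
        # Bowl 2: each bean is red with a fixed, known probability,
        # so past draws tell you nothing about the next one.
        return p_red

    def rule_of_succession(reds_drawn, total_drawn):
        # Bowl 3: unknown red fraction, uniform prior; the posterior
        # predictive is Laplace's rule of succession, (s + 1) / (n + 2).
        return Fraction(reds_drawn + 1, total_drawn + 2)

    print(known_composition(10, 10, 3))         # 7/17
    print(fixed_coin_flips(Fraction(1, 2), 3))  # 1/2
    print(rule_of_succession(3, 3))             # 4/5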
given that prior and the agent’s observations
Yes yes. I was being pedantic because your post didn’t talk about priors and inductive bias.
very little
where part of the change is revealed to you through new observations, you have to keep pace.
I thought of that. I didn’t think enough. “very little” was the wrong phrasing. It’s not that you do less updating, it’s that your updates are on concrete things like “who took the cookies” instead of “does gravity go as the squre or the cube” because your prior already encodes correct physics. Very little updating on physics.
Allow me to suggest a simpler thought experiment that hopefully captures the essence of yours and shows why your interpretation (of the correct math) is incorrect.
There are 100 recording studios, each recording each day with probability 0.5. Everybody knows that.
There’s a red light outside each studio to signal that a session is taking place that day, except for one rogue studio, where the signal is reversed, being off when there’s a session and on when there isn’t. Only persons B and C know that.
A, B and C are standing at the door of a studio, but only C knows that it’s the rogue one. How do their beliefs that there’s a session inside change by observing that the red light is on? A keeps the 50-50. B now thinks it’s 99-1. Only C knows that there’s no session.
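A minimal sketch of the three updates, assuming A’s ignorance of the lights is modeled as the light being equally likely to be on either way (which reproduces the 50-50); the function name is just for illustration:

    from fractions import Fraction

    def posterior_session(p_light_if_session, p_light_if_no_session,
                          prior_session=Fraction(1, 2)):
        # Bayes' rule: P(session | red light on) from the two likelihoods.
        numerator = p_light_if_session * prior_session
        evidence = numerator + p_light_if_no_session * (1 - prior_session)
        return numerator / evidence

    # A: doesn't know what the light means, so it's equally likely on either way -> 1/2
    print(posterior_session(Fraction(1, 2), Fraction(1, 2)))
    # B: one rogue studio in 100, so light-on is 99% likely given a session, 1% otherwise -> 99/100
    print(posterior_session(Fraction(99, 100), Fraction(1, 100)))
    # C: knows it's the rogue studio, so the light can't be on during a session -> 0
    print(posterior_session(Fraction(0), Fraction(1)))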
So your interpretation, as I understand it, would be to say that A and B updated in the “wrong direction”. But wait! I practically gave you the same prior information that C has—of course you agree with her! Let’s rewrite the last paragraph:
A, B and C are standing at the door of a studio. For some obscure reason, C secretly believes that it’s the rogue one. Wouldn’t you now agree with B?
And now I can do the same for A, by not revealing to you, the reader, the significance of the red lights. My point is that as long as someone runs a Bayesian update, you can’t call that the “wrong direction”. Maybe they now believe in things that you judge less likely, based on the information that you have, but that doesn’t make you right and them wrong. Reality makes them right or wrong; unfortunately, there’s no one around who knows reality in any other way than through their subjective information-revealing observations.
Yes, the Newtonian force a mass exerts on another mass far away from the first one falls off as the square of the distance. It’s the word “cube” that confuses me.
And now I can do the same for A, by not revealing to you, the reader, the significance of the red lights. My point is that as long as someone runs a Bayesian update, you can’t call that the “wrong direction”. Maybe they now believe in things that you judge less likely, based on the information that you have, but that doesn’t make you right and them wrong. Reality makes them right or wrong; unfortunately, there’s no one around who knows reality in any other way than through their subjective information-revealing observations.
No matter what, someone is still updating in the wrong direction, even if we don’t know who it is.
…
inverse, with whatever relativistic corrections you would know that I wouldn’t
I am not clear on that cube thing, actually.
What? Is gravity not inverse quadratic?
Yes, the Newtonian force a mass exerts on another mass far away from the first one falls off as the square of the distance. It’s the word “cube” that confuses me.
I think the quote was an image for a mental question, which could be rephrased as:
Is this power a 2 or a 3?
2, but his original statement was 2x3 :)