Allow me to explain less snarkily and more directly than Manfred.
As you correctly observe, the maximum-entropy probability distribution on {1,2,3,4} with any given mean is the one that assigns to each k (k = 1, 2, 3, 4) probability A·r^k for some constants A, r, and these parameters are uniquely determined by the requirements that the probabilities sum to 1 and that the resulting mean equal the given value.
In the particular case where the mean is 2.5, the specific values in question are A=1/4 and r=1.
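In case anyone wants to verify this numerically, here is a minimal sketch (my own, in Python, not from the original exchange; the function name maxent_distribution is just for illustration) that solves the two constraints by bisection and confirms that a target mean of 2.5 gives back r ≈ 1 and A ≈ 1/4:

```python
# Sketch: find the max-entropy distribution on {1,2,3,4} with a given mean,
# i.e. probabilities p(k) = A * r**k with sum_k p(k) = 1 and sum_k k*p(k) = target_mean.

def maxent_distribution(target_mean):
    ks = (1, 2, 3, 4)

    def mean_for(r):
        A = 1.0 / sum(r**k for k in ks)       # normalisation: probabilities sum to 1
        return sum(k * A * r**k for k in ks)  # resulting mean

    # The mean increases monotonically with r (from 1 toward 4), so bisection
    # finds the unique r for any target mean strictly between 1 and 4.
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    r = (lo + hi) / 2
    A = 1.0 / sum(r**k for k in ks)
    return A, r, [A * r**k for k in ks]

print(maxent_distribution(2.5))
# -> roughly (0.25, 1.0, [0.25, 0.25, 0.25, 0.25]): the uniform distribution
```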
This distribution can be described as exponential, if you insist—but it also happens to be the same uniform distribution that’s maximum-entropy without knowing the mean.
So the inconsistency you seemed to be suggesting—of an entropy-maximizing Bayesian robot choosing one distribution on the basis of maxent, and then switching to a different one on having one property of that distribution confirmed—is not real. On learning that the mean is 2.5 as it already guessed, the robot does not switch to a different distribution.
It just occurred to me that I really ought to have checked that it was in fact different, rather than going, “what are the odds that, out of all the possibilities, that equation happens to give the uniform distribution?”. Guess I should have done that before posting.
It also occurs to me now that I didn’t even have to work the equation out (which I thought was too much effort for a “someone is wrong on the internet”); I could have just plugged in the values … and in fact I had already done exactly that when finding the uniform distribution and its mean.
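For concreteness, here is what “plugging in the values” amounts to (a quick check of my own, not something from the original thread):

```python
# The uniform distribution on {1,2,3,4} already has the max-entropy form
# p(k) = A * r**k with A = 1/4, r = 1, and its mean is the given 2.5.
probs = {k: 0.25 for k in (1, 2, 3, 4)}
print(sum(probs.values()))                    # 1.0  (normalisation)
print(sum(k * p for k, p in probs.items()))   # 2.5  (the mean)
```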
This post sponsored by “When someone is wrong on the internet, it’s sometimes you”