As I stated at the beginning, I don’t know the standard meaning of maximum entropy prior.
This time when I looked it up, I found a simpler definition for the finite case; I’m not sure why I missed it before. I think I can see where the confusion is. I was treating every possible combination of opinions as a separate possibility, and in that case the maximum entropy distribution is the one where they are all independent of each other. If, on the other hand, you only look at the correlation, and treat H(80) = 50 as a single case, then the maximum entropy prior would seem to be that H(n) is uniformly distributed.
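To spell out the two finite-case claims (these are standard facts about discrete entropy, stated in my own framing rather than anything taken from the definition I found): for a distribution over the K joint combinations,
\[
H(p) = -\sum_{k=1}^{K} p_k \log p_k \le \log K,
\]
with equality exactly when every combination gets probability 1/K; and for the joint distribution of the individual opinions,
\[
H(X_1, \dots, X_n) \le \sum_{i=1}^{n} H(X_i),
\]
with equality exactly when the X_i are independent. So "uniform over all combinations" and "all independent" give the same maximum-entropy answer when the marginals are themselves uniform.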
I don’t think the uniform distribution is quite right either. I suspect that has something to do with H(n) being continuous rather than discrete: I know the Jeffreys prior in that case is Beta(1/2, 1/2), as opposed to Beta(1, 1), which is the uniform distribution.
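For reference, here is where the Beta(1/2, 1/2) comes from, assuming H(n) is treated as a Bernoulli-type probability parameter p (my assumption about the setup, not something established above):
\[
I(p) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial p} \log f(x \mid p)\right)^{\!2}\right] = \frac{1}{p(1-p)},
\qquad
\pi_J(p) \propto \sqrt{I(p)} = p^{-1/2}(1-p)^{-1/2},
\]
which is the kernel of Beta(1/2, 1/2), whereas the flat prior \(\pi(p) \propto 1\) is Beta(1, 1). The two differ precisely because the parameter is continuous, so "uniform" depends on the parameterization; the Jeffreys prior is constructed to be invariant under reparameterization.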