The improper alpha = beta = 0 prior, sometimes known as Haldane's prior, is derived via an invariance argument in Jaynes's 1968 paper Prior Probabilities. I don't actually trust that argument; I find the critiques of it here compelling.
Jeffreys priors are derived from a different invariance argument; Wikipedia has a pretty good article on the subject.
I have mostly used the uniform prior myself in the past, although in the future I expect to use the Jeffreys prior as my default for the binomial likelihood. The maximum entropy argument for the uniform prior is flawed: differential entropy is not a proper extension of discrete Shannon entropy to continuous distributions. The correct generalization is relative entropy, which requires choosing a reference measure, and since that choice is arbitrary, the maximum entropy argument is missing an essential component.
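To make the comparison concrete, here is a minimal sketch of how these three priors behave under the conjugate Beta-binomial update, where a Beta(alpha, beta) prior acts like alpha pseudo-successes and beta pseudo-failures. The data values are hypothetical, chosen only for illustration:

```python
def posterior_mean(alpha, beta, k, n):
    """Posterior mean of Beta(alpha + k, beta + n - k) after observing
    k successes in n Bernoulli trials.

    Note: with the improper Haldane prior (alpha = beta = 0), the
    posterior is proper only when 0 < k < n.
    """
    return (alpha + k) / (alpha + beta + n)

k, n = 3, 10  # hypothetical data: 3 successes in 10 trials

priors = {
    "Haldane  Beta(0, 0)":     (0.0, 0.0),
    "Jeffreys Beta(1/2, 1/2)": (0.5, 0.5),
    "Uniform  Beta(1, 1)":     (1.0, 1.0),
}

for name, (a, b) in priors.items():
    print(f"{name}: posterior mean = {posterior_mean(a, b, k, n):.4f}")
```

Haldane recovers the raw frequency k/n exactly, while Jeffreys and the uniform prior shrink the estimate toward 1/2 by half a pseudo-count and one pseudo-count per side, respectively.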