Do you mean most favorable? If not, I am very confused.
“Least favorable” here is the “min” part of minimax. (The “max” part is doing the best you can with this least favorable prior.)
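To spell that out in standard decision-theory notation (my own sketch, not anything from the book linked below): write \(R(\theta,\delta)\) for the risk of estimator \(\delta\) at parameter value \(\theta\), and

\[
r(\pi,\delta) = \int R(\theta,\delta)\,\pi(d\theta), \qquad r(\pi) = \inf_{\delta} r(\pi,\delta)
\]

for the Bayes risk under a prior \(\pi\). A prior \(\pi^{*}\) is least favorable if \(r(\pi^{*}) \ge r(\pi)\) for every prior \(\pi\), and under the usual minimax-theorem conditions

\[
\inf_{\delta}\,\sup_{\theta} R(\theta,\delta) \;=\; \sup_{\pi}\,\inf_{\delta} r(\pi,\delta) \;=\; r(\pi^{*}),
\]

so the minimax estimator is the Bayes estimator against the least favorable prior: nature hands you the prior under which your best attainable Bayes risk is worst, and you then do the best you can against it.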
Nope, it’s least. :-) The only way to actually unconfuse yourself on this subject is to follow the math carefully. Here’s a Google Books link that might help. (The URL is long and weird; does it open on your computer?)
The URL opens, but it is not especially useful, as the odd-numbered pages are not displayed by Google Books. I suppose it would be a good exercise to try to infer the missing information, but life is short and the way is long.
More useful was the Clarke and Barron (1994) paper, which states that the Jeffreys prior asymptotically maximizes the Shannon mutual information between a sample of size N and the parameter.
That agrees with my intuition.
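For anyone who wants the statement behind that, here is a compressed version of the Clarke and Barron expansion, written from memory, so check the paper for the exact regularity conditions. For a smooth \(d\)-dimensional family with per-observation Fisher information matrix \(\mathcal{I}(\theta)\) and prior density \(w\),

\[
I(\Theta; X^{N}) \;=\; \frac{d}{2}\,\log\frac{N}{2\pi e} \;+\; \int w(\theta)\,\log\frac{\sqrt{\det \mathcal{I}(\theta)}}{w(\theta)}\,d\theta \;+\; o(1),
\]

where \(I(\Theta; X^{N})\) is the mutual information between the parameter and a sample of size \(N\). Only the middle term depends on \(w\), and when \(\int \sqrt{\det \mathcal{I}(\theta)}\,d\theta\) is finite that term is maximized exactly by \(w(\theta) \propto \sqrt{\det \mathcal{I}(\theta)}\), i.e. the Jeffreys prior. That is the sense in which Jeffreys is asymptotically least favorable (equivalently, the reference prior).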