I don’t know what you mean by “should be allowed to put whatever prior I want”. I mean, I guess nobody will stop you. But if your beliefs are well approximated by a particular prior, then pretending that they are approximated by a different prior is going to cause a mismatch between your beliefs and your beliefs about your beliefs.
[Nitpick: The Kelly criterion assumes not only that you will be confronted with a large number of similar bets, but also that you have some base level of risk-aversion (concave utility function) that repeated bets can smooth out into a logarithmic utility function. If you start with a linear utility function then repeating the bets still gives you linear utility, and the optimal strategy is to make every bet all-or-nothing whenever you have an advantage. At least, this is true before taking into account the resource constraints of the system you are betting against.]
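To make the nitpick concrete, here is a minimal sketch (my own illustration, not from the original comment) assuming an even-odds bet won with probability p = 0.6, repeated 100 times. Under logarithmic utility the best fixed fraction to bet is the Kelly fraction 2p − 1; under linear utility, expected wealth is maximized by betting everything every round, exactly as described above.

```python
import numpy as np

# Hypothetical numbers: even-odds bet, assumed win probability p = 0.6,
# repeated over n_rounds independent rounds, betting a fixed fraction f.
p = 0.6
n_rounds = 100
fractions = np.linspace(0.0, 1.0, 101)

def expected_log_wealth(f, p, n):
    # Concave (logarithmic) utility: maximized at the Kelly fraction 2p - 1.
    if f == 1.0:
        return -np.inf  # you can lose everything, and log(0) = -inf
    return n * (p * np.log(1 + f) + (1 - p) * np.log(1 - f))

def expected_linear_wealth(f, p, n):
    # Linear utility: expected wealth is (1 + f(2p - 1))^n, increasing in f,
    # so the optimum is f = 1 (all-in) despite near-certain eventual ruin.
    return (p * (1 + f) + (1 - p) * (1 - f)) ** n

best_log = max(fractions, key=lambda f: expected_log_wealth(f, p, n_rounds))
best_lin = max(fractions, key=lambda f: expected_linear_wealth(f, p, n_rounds))
print(f"log utility optimum:    f = {best_log:.2f}  (Kelly: {2 * p - 1:.2f})")
print(f"linear utility optimum: f = {best_lin:.2f}  (all-in)")
```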
I agree that “want” is not exactly the correct word. What I mean by prior is an agent’s actual a priori beliefs, so by definition there will be no mismatch there. I am not trying to say that you choose your prior exactly.
What I am gesturing at is that no prior is wrong, as long as it does not assign zero probability to the true outcome. And I think that much of the confusion in anthropic situations comes from trying to solve an under-constrained system.
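A small sketch of why the zero-probability case is special (my own example, with made-up numbers): estimate the bias of a coin whose true heads-probability is 0.7, updating two priors over a small grid of candidate biases. Any prior that gives the true value nonzero weight eventually concentrates on it; a prior that assigns it exactly zero can never recover, no matter how much data arrives.

```python
import numpy as np

# Hypothetical setup: true coin bias 0.7, a grid of candidate biases, and two
# priors over that grid -- one open-minded, one that rules out the truth.
rng = np.random.default_rng(0)
true_bias = 0.7
grid = np.array([0.3, 0.5, 0.7, 0.9])

open_prior = np.array([0.25, 0.25, 0.25, 0.25])   # nonzero everywhere
closed_prior = np.array([0.4, 0.4, 0.0, 0.2])     # zero weight on the truth

def update(prior, flips):
    # Standard Bayesian updating on each flip, renormalizing as we go.
    post = prior.copy()
    for heads in flips:
        likelihood = np.where(heads, grid, 1 - grid)
        post = post * likelihood
        post = post / post.sum()
    return post

flips = rng.random(1000) < true_bias
print("open prior posterior:  ", np.round(update(open_prior, flips), 3))
print("closed prior posterior:", np.round(update(closed_prior, flips), 3))
```

The open prior ends up concentrated near the true bias, while the closed prior keeps exactly zero weight on it forever and is forced onto the nearest surviving hypothesis.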