Putting this in the OT because I'm risking asking something silly and basic here. After reading "Are Bayesian methods guaranteed to overfit?", it feels like there should exist a Bayesian-update analogue to Kelly betting: deliberately underfitting enough to preserve some property that matters because the environment is unpredictable, catastrophic losses have distortionary effects, and so on. In that analogy, fitting the observations 'as well as possible' corresponds to playing purely for expected value, which turns out to be 'wrong' in the kind of iterated games we actually wind up in. Dweomite's comment is part of what inspired this, because of the way it enumerated reasons having to do with limited training data.
Is this maybe an existing well-known concept that I missed the boat on, or something that’s already known to be unworkable or undefinable for some reason? Or what?
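To make the Kelly half of the analogy concrete, here's a toy simulation (the numbers are mine, just for illustration: a repeated even-money bet that wins with probability 0.6). It only shows the betting side of the picture, i.e. why playing purely for expected value fails in iterated play, not whatever the Bayesian-update analogue would be:

```python
import random

# Hypothetical toy setup (my numbers, not from anything above): a repeated
# even-money bet that wins with probability p = 0.6, so every individual
# bet has positive expected value. For an even-money bet the Kelly
# fraction is 2p - 1.
P_WIN = 0.6
KELLY_FRACTION = 2 * P_WIN - 1  # = 0.2

def simulate(fraction, n_bets=1000, bankroll=1.0, seed=0):
    """Bet `fraction` of the current bankroll on each of `n_bets` rounds."""
    rng = random.Random(seed)
    for _ in range(n_bets):
        stake = fraction * bankroll
        bankroll += stake if rng.random() < P_WIN else -stake
    return bankroll

# Pure expected-value play: stake the whole bankroll every round. This
# maximizes the expectation of each individual bet, but one loss is ruin.
ev_runs = [simulate(1.0, seed=i) for i in range(100)]
# Kelly play: stake the growth-optimal fraction instead.
kelly_runs = [simulate(KELLY_FRACTION, seed=i) for i in range(100)]

print("EV-maximizing runs ending broke:", sum(b == 0.0 for b in ev_runs), "out of 100")
print("median Kelly bankroll:", sorted(kelly_runs)[50])
```

Essentially every EV-maximizing run goes broke even though each bet it took had the highest possible expected value, while the Kelly bettor's bankroll compounds. What I'm asking is whether anything plays the role of 'bet the Kelly fraction' on the updating side.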