“For a true Bayesian, information would never have negative expected utility.”
Is this true in general? It seems to me that if a Bayesian has limited information handling ability, then they need to give some thought (not too much!) to the risks of being swamped with information and of spending too many resources on gathering information.
I believe that in this context “true Bayesian” implies unbounded processing power, i.e. logical omniscience.
I suggest that “true Bayesian” is ambiguous enough (this seems to use it in the sense of a human using the principles of Bayes) that some other phrase—perhaps “unlimited Bayesian”—would be clearer.
The cost of gathering or processing the information may exceed the value of the information, but the information itself never has negative value: at worst, you do nothing different, and the rest of the time you make a more informed choice.
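A minimal sketch of that claim, with entirely made-up numbers (a two-state weather problem with a noisy signal; none of this is from the discussion above): acting on the Bayesian posterior can never have lower expected utility than acting on the prior, so the expected value of the information is non-negative.

```python
# Hypothetical two-state, two-action decision problem. All the numbers are
# made up for illustration; they are not from the discussion above.
prior = {"rain": 0.3, "sun": 0.7}
utility = {"umbrella":    {"rain":  1.0, "sun": 0.2},
           "no_umbrella": {"rain": -1.0, "sun": 1.0}}
likelihood = {"wet": {"rain": 0.9, "sun": 0.2},   # p(signal | state)
              "dry": {"rain": 0.1, "sun": 0.8}}

def best_eu(belief):
    """Expected utility of the best action under a given belief over states."""
    return max(sum(belief[s] * utility[a][s] for s in belief)
               for a in utility)

# Expected utility of acting on the prior alone.
eu_prior = best_eu(prior)

# Expected utility of first observing the signal, updating by Bayes' rule,
# and then acting on the posterior.
eu_with_info = 0.0
for signal in likelihood:
    p_signal = sum(likelihood[signal][s] * prior[s] for s in prior)
    posterior = {s: likelihood[signal][s] * prior[s] / p_signal for s in prior}
    eu_with_info += p_signal * best_eu(posterior)

print(f"EU acting on prior:   {eu_prior:.3f}")
print(f"EU after observing:   {eu_with_info:.3f}")
print(f"value of information: {eu_with_info - eu_prior:.3f}  (>= 0)")
```

Running it gives a strictly positive value of information here; with an uninformative signal the two expected utilities would coincide and the value would be exactly zero, but it can never go below that.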
Yes, in this technical sense.
A true Bayesian has unlimited information handling ability.
I think I see that—because if it didn’t, then not all of its probabilities would be properly updated, so its degrees of belief wouldn’t have the relations implied by probability theory, so it wouldn’t be a true Bayesian. Right?
Yes, one generally ignores the cost of making these computations. One might try to take it into account, but then one is ignoring the cost of doing that computation, etc. Historically, the “Bayesian revolution” needed computers before it could happen.
And, I notice, it has only gone as far as the computers allow. “True Bayesians” also have universal priors, which assign non-zero probability density to every logically possible hypothesis. Real Bayesian statisticians never do this; all those I have read deny that it is possible.
It is impossible, even in principle. The only way to have a universal prior over all computable universes is to have access to a source of hypercomputation, but that would mean the universe isn’t computable, so the truth still isn’t in your prior set.
Is that written up as a theorem anywhere?
That depends on how one wants to formalize it.
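One standard formalization (my gloss, not something stated upthread) is Solomonoff’s universal prior: fix a universal monotone machine $U$ and weight each program $p$ whose output begins with the observed string $x$ by $2^{-|p|}$:

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}.$$

$M$ is lower semicomputable but not computable, so no Turing machine can evaluate it exactly; an agent that could would already be performing hypercomputation, which is the point made above.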
Yeah, certainly. The search might be expensive. Or, some of its resources might be devoted to distinguishing the most relevant among the information it receives—diluting its input with irrelevant truths makes it work harder to find what’s really important.
An interpretation of the original statement that I think is true, though, is that in all these cases, receiving the information and getting a little more knowledgeable offsets some of the negative utility of whatever price was paid for it. If the combination of search+learning has negative net utility, that is because of the searching part: if you kept the searching but removed the learning at the end, it would be even worse.
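Spelled out in symbols (my notation, not the original comment’s), with a search cost $c$ and a signal observed before acting:

$$-c + \mathbb{E}_{\text{signal}}\!\left[\max_a \mathbb{E}\!\left[u(a) \mid \text{signal}\right]\right] \;\ge\; -c + \max_a \mathbb{E}\!\left[u(a)\right],$$

because the expectation of a maximum is at least the maximum of the expectations. The left side is search plus learning; the right side is paying the same cost and then ignoring what you found. Any net loss is attributable to $c$, never to the learning step.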
I’m not exactly sure what “a true Bayesian” refers to, if anything, but it’s possible that being whatever that is precludes having limited information handling ability.
“True Bayesian” is in this case a “No True Scotsman”: if some information has negative utility for you, you are not a true Bayesian.