Looks like my argument leads to a mildly interesting result. Let’s say a Bayesian is playing a game against a Tamperer. The Bayesian receives evidence about something and tries to form accurate beliefs about it, with accuracy measured by some proper scoring rule. The Tamperer sits in the middle and can alter the evidence before the Bayesian sees it. Then any Nash equilibrium will give the Bayesian an expected score that’s at least as high as they would’ve got by just using their prior and ignoring the evidence. (The expected score is calculated according to the Bayesian’s prior.) In other words, you cannot deliberately lead a Bayesian away from the truth.
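For reference, since the argument leans on it: a scoring rule $S$ is *proper* when reporting your actual belief $p$ maximizes your own expected score over any other report $q$ (this is just the standard definition, not anything extra from the comment above):

$$\mathbb{E}_{\theta \sim p}\big[S(p,\theta)\big] \;\ge\; \mathbb{E}_{\theta \sim p}\big[S(q,\theta)\big] \quad \text{for every report } q.$$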
The proof is kinda trivial: the Bayesian can guarantee a certain expected score by just using the prior, regardless of what the Tamperer does. Therefore in any Nash equilibrium the Bayesian will get at least that much, and might get more if the Tamperer’s opportunities for tampering are somehow limited.
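A tiny sketch of why the guarantee holds, with made-up numbers: the prior and the log score below are illustrative stand-ins, not anything fixed by the argument. The point is that the prior-only strategy’s expected score is computed entirely from the prior, so no choice of Mallory’s enters it.

```python
import numpy as np

rng = np.random.default_rng(0)

PRIOR = 0.3   # Bob's prior that the hidden state is 1 (made up for illustration)
N = 500_000   # Monte Carlo samples

# Sample the hidden state from Bob's prior, then score the strategy
# "ignore all evidence and report the prior" under the log scoring rule.
states = rng.random(N) < PRIOR
baseline = np.where(states, np.log(PRIOR), np.log(1.0 - PRIOR)).mean()

# Analytic value for comparison: p*log(p) + (1-p)*log(1-p).
exact = PRIOR * np.log(PRIOR) + (1 - PRIOR) * np.log(1 - PRIOR)
print(f"prior-only expected score: {baseline:.4f} (exact: {exact:.4f})")
# Mallory never appears in this computation, so this expected score is
# a floor that Bob can secure no matter what she does to the evidence.
```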
That rather depends on whether the Bayesian (usually known as Bob) knows there is a Tamperer (usually known as Mallory) messing around with his evidence.
If the Bayesian does know, he just distrusts all evidence and doesn’t move off his prior. But if he does not know, then the Tamperer just pwns him.
I think your objection is kinda covered by the use of the term “Nash equilibrium” in my comment. And even if the universe decides to create a Tamperer with some probability and leave the evidence untouched otherwise, the result should still hold. The term for that kind of situation is “Bayes-Nash equilibrium”, I think.
In that case, what’s special about Bayesians?
Bob is playing a zero-sum game against Mallory. All Bob’s information is filtered/changed/provided by Mallory and Bob knows it. In this situation Bob cannot trust any of this information and so never changes his response or belief.
I don’t see any reason to invoke St. Bayes.
The result also applies if Mallory has limited opportunities to change Bob’s information, e.g. a 10% chance of successfully changing it. Or you could have any other complicated setup. In such cases Bob’s best strategy involves some updating, and the result says that such updating cannot lower Bob’s score on average. (If you’re wondering why Bob’s strategy in a Nash equilibrium must look like Bayesian updating at all, that follows from the definition of a proper scoring rule: the report that maximizes Bob’s expected score, by his own lights, is his honest posterior.) In other words, it’s still trivial, but not quite as trivial as you say. Also note that if Mallory’s options are limited, her best strategy might become pretty complicated.
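A quick sketch of that 10% case (the prior, the channel accuracy, Mallory’s always-flip strategy, and the log score are all made-up parameters for illustration): Bob folds the chance of tampering into his likelihoods, updates anyway, and on average still beats the prior-only floor.

```python
import numpy as np

rng = np.random.default_rng(1)

PRIOR = 0.3    # Bob's prior on state 1 (illustrative)
ACC = 0.9      # honest channel: signal equals the state with prob 0.9
TAMPER = 0.1   # Mallory intercepts and flips the signal with prob 0.1
N = 500_000

# Against an always-flipping Mallory, the observed signal matches the
# true state with this effective probability:
EFF = (1 - TAMPER) * ACC + TAMPER * (1 - ACC)  # = 0.82

def posterior(signal: int) -> float:
    """Bob's posterior on state 1, with possible tampering folded in."""
    like1 = EFF if signal == 1 else 1 - EFF    # P(signal | state = 1)
    like0 = 1 - EFF if signal == 1 else EFF    # P(signal | state = 0)
    return like1 * PRIOR / (like1 * PRIOR + like0 * (1 - PRIOR))

states = rng.random(N) < PRIOR                               # hidden state
honest = np.where(rng.random(N) < ACC, states, ~states)      # honest signal
signals = np.where(rng.random(N) < TAMPER, ~honest, honest)  # after Mallory

reports = np.where(signals, posterior(1), posterior(0))
scores = np.log(np.where(states, reports, 1.0 - reports))
floor = PRIOR * np.log(PRIOR) + (1 - PRIOR) * np.log(1 - PRIOR)
print(f"updating: {scores.mean():.4f}   prior-only floor: {floor:.4f}")
```

With these numbers the updating score comes out around -0.42 against a prior-only floor of about -0.61, so even evidence that Mallory corrupts 10% of the time is worth using.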