I have repeatedly argued for a departure from pure Bayesianism that I call “quasi-Bayesianism”. But, coming from a LessWrong-ish background, it might be hard to wrap your head around the idea that Bayesianism is somehow deficient. So, here’s another way to understand it, using Bayesianism’s own favorite trick: Dutch booking!
Consider a Bayesian agent Alice. Since Alice is Bayesian, ey never randomize: ey just follow a Bayes-optimal policy for eir prior, and such a policy can always be chosen to be deterministic. Moreover, Alice always accepts a bet if ey can choose which side of the bet to take: indeed, at least one side of any bet has non-negative expected utility. Now, Alice meets Omega. Omega is very smart, so ey know more than Alice, and moreover ey can predict Alice. Omega offers Alice a series of bets, specifically chosen such that Alice picks the wrong side of each one. Alice takes the bets and loses, indefinitely. Alice cannot escape eir predicament: ey might know, in some sense, that Omega is cheating em, but there is no way within the Bayesian paradigm to justify turning down the bets.
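To see the mechanics, here is a minimal toy simulation (my own illustration, not part of the post; all names and payoffs are made up). `alice_policy` stands in for an arbitrary deterministic Bayes-optimal policy, and Omega simulates it to construct each bet:

```python
# Toy sketch: Alice's policy is a deterministic function of her history,
# so a predictor that can simulate that function always knows which side
# of the bet she will take.

def alice_policy(history):
    # Stand-in for a deterministic Bayes-optimal policy; any deterministic
    # rule works for the argument, its content is irrelevant here.
    return "A" if len(history) % 2 == 0 else "B"

def omega_bet(history):
    # Omega simulates Alice and assigns payoff -1 to her predicted choice.
    predicted = alice_policy(history)
    return {side: (-1 if side == predicted else +1) for side in ("A", "B")}

history, wealth = [], 0
for _ in range(100):
    bet = omega_bet(history)
    choice = alice_policy(history)   # Alice takes the side she prefers...
    wealth += bet[choice]            # ...and loses, every single round
    history.append(choice)

print(wealth)  # -100
```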
A possible counterargument is that we don’t need to depart far from Bayesianism to win here. We only need to somehow justify randomization, perhaps by something like infinitesimal random perturbations of the belief state (like with reflective oracles). But, in a way, this is exactly what quasi-Bayesianism does: a quasi-Bayes-optimal policy is in particular Bayes-optimal when the prior is taken to be a Nash equilibrium strategy of the associated zero-sum game. However, Bayes-optimality underspecifies the policy: not every optimal reply to a Nash equilibrium is itself a Nash equilibrium.
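To make the last point concrete, consider matching pennies. Against an opponent playing the 50/50 equilibrium, every strategy has expected payoff zero, so the deterministic strategy “always play the first action” is also a best reply; but it is not itself an equilibrium strategy, and a predictor exploits it. A minimal numpy sketch (my own illustration, with made-up payoff conventions):

```python
import numpy as np

# Matching pennies: row player wins +1 when the choices match, -1 otherwise.
payoff = np.array([[+1, -1],
                   [-1, +1]])

nash = np.array([0.5, 0.5])  # the unique equilibrium mixes 50/50

# Against the equilibrium column strategy, EVERY row strategy is a best
# reply (expected payoff 0), including the deterministic "always row 0":
for row_strategy in (np.array([1.0, 0.0]), np.array([0.5, 0.5])):
    print(row_strategy @ payoff @ nash)   # 0.0 in both cases

# But the deterministic best reply is not itself an equilibrium: a column
# player who predicts "always row 0" replies with column 1 and wins:
print(np.array([1.0, 0.0]) @ payoff @ np.array([0.0, 1.0]))  # -1.0
```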
This argument is not entirely novel: it is just a special case of the general problem of environments that the agent cannot simulate, which was the original motivation for quasi-Bayesianism. In some sense, any Bayesian agent is dogmatic: it dogmatically believes that the environment is computationally simple, since it cannot consider a hypothesis which is not. Here, Omega exploits this false dogmatic belief.
Bayesians are allowed to understand that there are agents with better estimates than they have. And that being offered a bet _IS_ evidence that the other agent THINKS they have an advantage.
Randomization (aka “mixed strategy”) is well-understood as the rational move in games where opponents are predicting your choices. I have read nothing that would even hint that it’s unavailable to Bayesian agents. The relevant probability (updated per Bayes’s Rule) would be “is my counterpart trying to minimize my payout based on my choices”.
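As a sketch of the update this comment describes (the numbers are invented): suppose losing a bet has probability 0.5 under a “fair counterpart” hypothesis and 0.99 under a “counterpart is exploiting me” hypothesis. Bayes’s Rule in odds form then drives the adversary hypothesis to dominate after a short losing streak:

```python
# Hypothetical Bayes-rule update on "my counterpart is exploiting me":
# a lost bet has probability 0.5 under the fair hypothesis and ~1 under
# the adversary hypothesis, so each loss roughly doubles the odds.
prior_adversary = 0.01
p_lose_fair, p_lose_adversary = 0.5, 0.99

posterior = prior_adversary
for n_losses in range(1, 11):
    # odds form of Bayes's Rule, one update per observed loss
    odds = (posterior / (1 - posterior)) * (p_lose_adversary / p_lose_fair)
    posterior = odds / (1 + odds)
    print(n_losses, round(posterior, 4))
# after ~10 straight losses the adversary hypothesis dominates (~0.9)
```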
edit: I realize you may be using a different definition of “Bayesianism” than I am. I’m thinking of humans striving for rational choices, which perforce includes awareness of incomplete computation and imperfect knowledge. Naive agents can be imagined that don’t have this complexity. Those guys are stuck, and Omega’s gonna pwn them.
> I’m thinking of humans striving for rational choices,
It feels like there are better words for this, like “rationality”, whereas Bayesianism is a more specific philosophy about how best to represent and update beliefs.
And here I thought the reason was going to be that Bayesianism doesn’t appear to include the cost of computation. (Thus, the usual Dutch book arguments should be adjusted so that “optimal betting” does not leave one worse off for having paid, say, an oracle too much for computation.)