Hi Apodosis, I did my PhD in Bayesian game theory, so this is a topic close to my heart ^^
There are plenty of fascinating things to explore in the study of interactions between Bayesians. One important finding of my PhD was that, essentially, Bayesians end up playing (stable) Bayes-Nash equilibria in repeated games, even if the only feedback they receive is their utility (and in particular even if the private information of other players remains private). I also studied Bayesian incentive-compatible mechanism design, i.e. coming up with rules that incentivize Bayesians’ honesty.
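To give a concrete feel for what a Bayes-Nash equilibrium is, here is a minimal numerical sketch (a standard textbook example of my own choosing, not a result from the thesis): in a two-bidder first-price auction with values drawn uniformly from [0, 1], the equilibrium strategy is to bid half your value, and you can check that bidding v/2 is indeed a best response when the other bidder does the same.

```python
# Sketch: numerically checking the Bayes-Nash equilibrium bid = value / 2
# in a two-bidder first-price auction with uniform [0, 1] values.
import numpy as np

rng = np.random.default_rng(0)
opponent_values = rng.uniform(0.0, 1.0, size=200_000)
opponent_bids = opponent_values / 2.0  # the other bidder plays the candidate equilibrium strategy v/2

def expected_utility(my_value, my_bid):
    """Monte-Carlo estimate of my payoff when bidding my_bid against the v/2 strategy."""
    wins = opponent_bids < my_bid
    return np.mean(np.where(wins, my_value - my_bid, 0.0))

my_value = 0.8
candidate_bids = np.linspace(0.0, my_value, 17)  # bids 0.00, 0.05, ..., 0.80
utilities = [expected_utility(my_value, b) for b in candidate_bids]
best_bid = candidate_bids[int(np.argmax(utilities))]
print(f"best response to v/2 at value {my_value}: bid {best_bid:.2f}")  # comes out at 0.40 = my_value / 2
```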
The book also discusses interesting features of interactions between Bayesians, such as the Aumann-Aaronson agreement theorem or Bayesian persuasion (i.e. maximizing a Bayesian judge’s probability of convicting a defendant by optimizing which investigations should be pursued). One research direction I’m interested in is Byzantine Bayesian agreement, i.e. how much a group of honest Bayesians can agree if they are infiltrated by a small number of malicious individuals, though I have not yet found the time to dig further into this topic.
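For Bayesian persuasion, the standard prosecutor example (this is the classic Kamenica-Gentzkow illustration, not something specific to the book) shows how far a committed investigation policy can move a Bayesian judge: with a prior probability of guilt of 0.3 and a judge who convicts once her posterior reaches 0.5, the prosecutor’s optimal investigation gets a conviction 60% of the time.

```python
# Prosecutor example: prior guilt 0.3, judge convicts iff posterior >= 0.5.
prior_guilty = 0.3

# Investigation design: P(report "guilty" | state). Always flag the guilty;
# flag the innocent just often enough that the posterior lands exactly on the threshold.
p_flag_given_guilty = 1.0
p_flag_given_innocent = 3.0 / 7.0

p_flag = prior_guilty * p_flag_given_guilty + (1 - prior_guilty) * p_flag_given_innocent
posterior_if_flag = prior_guilty * p_flag_given_guilty / p_flag

print(f"posterior after a 'guilty' report: {posterior_if_flag:.3f}")  # 0.500 -> judge convicts
print(f"overall conviction probability:    {p_flag:.3f}")             # 0.600, vs 0.300 with full disclosure
```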
A more empirical challenge is to determine how well these Bayesian game theory models describe human (or AI) interactions. Clearly, we humans are not Bayesians. We have some systematic cognitive biases (and even powerful AIs may have systematic biases, since they won’t be running Bayes’ rule exactly!). How can we best model and predict humans’ divergence from Bayes’ rule? There have been spectacular advances in cognitive science in this regard (check out Josh Tenenbaum’s work for instance), but there’s definitely a lot more to do!
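As a toy illustration of modelling divergence from Bayes’ rule (a simplifying assumption on my part, not a claim about how cognitive scientists actually do it), you can temper the likelihood with an exponent below 1 to get a "conservative" updater who under-reacts to evidence:

```python
# Exact Bayesian update vs. a "conservative" update that tempers the likelihood.
def posterior(prior, lik_h, lik_not_h, lam=1.0):
    """Posterior P(H | data) with the likelihood raised to the power lam (lam=1 is exact Bayes)."""
    num = prior * lik_h ** lam
    return num / (num + (1 - prior) * lik_not_h ** lam)

# Coin example: H = "coin lands heads with probability 0.8" vs. a fair coin,
# after observing a particular sequence of 8 heads and 2 tails.
prior = 0.5
lik_biased = 0.8 ** 8 * 0.2 ** 2
lik_fair = 0.5 ** 10

print("exact Bayes:  ", round(posterior(prior, lik_biased, lik_fair, lam=1.0), 3))
print("conservative: ", round(posterior(prior, lik_biased, lik_fair, lam=0.5), 3))  # stays closer to the prior
```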