Thank you for sharing your work; I am excited to take a look at it at some point in the next few days. Based on the synopsis you’ve provided, it seems as if you have traveled down (more productively than I have) many of the same inferential avenues that arise in identifying and applying Bayesian methods in the material and social world. In particular, I have been fascinated by the game-theoretic consequences and higher-order behaviors that must arise in multi-agent interactions where actors are characterized by state and strategy updates driven by approximate Bayesian rules. Of all the existing work I have found in that arena, epistemic game theory has been by far the most exciting. I am still working through the foundations of the theory, but I believe it to be powerfully descriptive for mutually observable social interactions among groups. I would love to hear your thoughts if you happen to have encountered the idea, and best of luck on your new book!
Hi Apodosis, I did my PhD in Bayesian game theory, so this is a topic close to my heart ^^
There are plenty of fascinating things to explore in the study of interactions between Bayesians. One important finding of my PhD was that, essentially, Bayesians end up playing (stable) Bayes-Nash equilibria in repeated games, even if the only feedback they receive is their utility (and in particular even if the private information of other players remains private). I also studied Bayesian incentive-compatible mechanism design, i.e. coming up with rules that incentivize Bayesians’ honesty.
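To make that kind of dynamic concrete, here is a minimal toy sketch (my illustration only, not the setting or the result of the thesis): two players repeatedly play a small coordination game, each keeps a Beta posterior over the other's strategy, infers the opponent's last move from their own utility, and best-responds to their belief. The game and all numbers are arbitrary choices for illustration.

```python
import numpy as np

# Toy illustration: approximately Bayesian players in a repeated 2x2
# coordination game. Each player only observes their own utility, inverts it
# to recover the opponent's action, updates a Beta posterior over the
# opponent's probability of playing action 0, and best-responds to the
# posterior mean. Play quickly settles into a pure Nash equilibrium.

# payoff[i][a, b] = utility of player i when player 0 plays a, player 1 plays b.
payoff = [np.array([[2.0, 0.0], [0.0, 1.0]]),   # player 0
          np.array([[2.0, 0.0], [0.0, 1.0]])]   # player 1

alpha = [np.ones(2), np.ones(2)]  # Beta(1, 1) priors on "opponent plays 0"

def best_response(i, p_opp_plays_0):
    """Action of player i that maximizes expected utility under their belief."""
    belief = np.array([p_opp_plays_0, 1.0 - p_opp_plays_0])
    expected = payoff[0] @ belief if i == 0 else payoff[1].T @ belief
    return int(np.argmax(expected))

for t in range(200):
    beliefs = [alpha[i][0] / alpha[i].sum() for i in range(2)]
    a = [best_response(i, beliefs[i]) for i in range(2)]
    utils = [payoff[0][a[0], a[1]], payoff[1][a[0], a[1]]]
    for i in range(2):
        my_action = a[i]
        # Given my own action, each possible opponent action yields a distinct
        # utility here, so the utility feedback reveals the opponent's move.
        candidates = [payoff[i][my_action, b] if i == 0 else payoff[i][b, my_action]
                      for b in range(2)]
        inferred_opp_action = candidates.index(utils[i])
        alpha[i][inferred_opp_action] += 1

print("final actions:", a, "- a pure Nash equilibrium of the coordination game")
```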
The book also discusses interesting features of interactions between Bayesians, such as the Aumann-Aaronson agreement theorem or Bayesian persuasion (i.e. maximizing a Bayesian judge’s probability of convicting a defendant by optimizing which investigations should be pursued). One research direction I’m interested in is Byzantine Bayesian agreement, i.e. how much a group of honest Bayesians can agree if they are infiltrated by a small number of malicious individuals, though I have not yet found the time to dig into this topic further.
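For concreteness, here is the canonical prosecutor/judge calculation behind Bayesian persuasion (this is my own sketch with purely illustrative numbers, not the book's exposition; the example follows Kamenica and Gentzkow):

```python
# The judge convicts iff their posterior probability of guilt reaches 0.5;
# the prosecutor designs the investigation (the signal). All numbers below
# are illustrative assumptions.

prior_guilty = 0.3   # judge's prior probability of guilt
threshold = 0.5      # posterior needed for conviction

# Fully informative investigation: only the truly guilty get convicted.
honest_conviction_rate = prior_guilty

# Optimal persuasion: the signal says "guilty" for every guilty defendant,
# and also for innocent ones with probability p, chosen so the posterior
# after a "guilty" signal lands exactly on the threshold:
#   prior / (prior + (1 - prior) * p) = threshold
p = prior_guilty * (1 - threshold) / ((1 - prior_guilty) * threshold)
persuaded_conviction_rate = prior_guilty + (1 - prior_guilty) * p

print(f"p(signal 'guilty' | innocent): {p:.3f}")                          # 3/7, about 0.429
print(f"conviction rate, honest signal:      {honest_conviction_rate:.2f}")  # 0.30
print(f"conviction rate, optimal persuasion: {persuaded_conviction_rate:.2f}")  # 0.60
```

The striking part is that the judge remains a perfect Bayesian throughout; the prosecutor gains (here, doubling the conviction rate) purely by choosing which experiment gets run.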
A more empirical challenge is to determine how well these Bayesian game theory models describe human (or AI) interactions. Clearly, we humans are not Bayesians. We have systematic cognitive biases (and even powerful AIs may have systematic biases, since they won’t be running Bayes’ rule exactly!). How can we best model and predict humans’ divergence from Bayes’ rule? There have been spectacular advances in cognitive science in this regard (check out Josh Tenenbaum’s work, for instance), but there’s definitely a lot more to do!
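As a tiny illustration of what "modeling divergence from Bayes' rule" can look like, here is one classic parametric family (a sketch in the spirit of Grether-style log-odds regressions; the numbers are made up):

```python
import math

# Parameterized deviation from Bayes' rule:
#   posterior log-odds = b * prior log-odds + a * log-likelihood-ratio,
# where a = b = 1 recovers exact Bayes, a < 1 means underweighting the
# evidence, and b < 1 means base-rate neglect.

def posterior(prior, lik_ratio, a=1.0, b=1.0):
    log_odds = b * math.log(prior / (1 - prior)) + a * math.log(lik_ratio)
    return 1 / (1 + math.exp(-log_odds))

# Illustrative numbers: a rare condition (prior 1%) and a test with
# likelihood ratio 9 (e.g. 90% sensitivity, 10% false-positive rate).
prior, lr = 0.01, 9.0
print(f"exact Bayes:       {posterior(prior, lr):.3f}")         # about 0.083
print(f"base-rate neglect: {posterior(prior, lr, b=0.3):.3f}")  # much higher
```

Fitting the two exponents to experimental data then gives a compact measure of how far a population's updating is from the Bayesian benchmark.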