Fun link! But I think designing the “game” so that it actually corresponds to this persuasion game is practically difficult: in the real world, the AI’s moves might be able to push the judge into doing something other than Bayesian updating. And the domains in which a convex utility function over our beliefs would help us get what we want in the real world seem like they’d either be simple and low-impact, or require a very “high-power” utility function that already knows a good set of beliefs for us to have and aims very precisely at that point.
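For concreteness, here's a rough numerical sketch of the fact I'm gesturing at with "convex utility function over our beliefs" (my own toy example, not anything from the linked post): because Bayesian posteriors average back to the prior, a utility that is convex in the belief is weakly increased in expectation by any signal, by Jensen's inequality. The particular prior, signal structure, and utility below are arbitrary choices for illustration.

```python
# Toy illustration (my own, not from the linked post): with a convex utility over
# beliefs, updating on any signal weakly raises expected utility, since Bayesian
# posteriors average back to the prior (Jensen's inequality).
import numpy as np

prior = 0.5                      # P(state = 1)
# Hypothetical binary signal: P(signal | state), rows = state 0/1, cols = signal 0/1
likelihood = np.array([[0.8, 0.2],
                       [0.3, 0.7]])

def convex_utility(p):
    # Any convex function of the belief p works; this one rewards confident beliefs.
    return (p - 0.5) ** 2

# Marginal probability of each signal, and posterior P(state = 1 | signal)
p_signal = (1 - prior) * likelihood[0] + prior * likelihood[1]
posterior = prior * likelihood[1] / p_signal

expected_posterior_utility = np.dot(p_signal, convex_utility(posterior))
print(convex_utility(prior), expected_posterior_utility)
# The second number is >= the first: under a convex utility, information weakly helps.
```

The catch, as above, is that writing down a convex utility over beliefs that tracks what we actually care about seems to require already knowing which beliefs we want to end up with.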