I must not have been paying attention to the decision theory mailing list at that time. Thinking it over now, I think it technically works, but it doesn’t seem very satisfying, because the individual agents jointly have non-VNM preferences and have to do all the work of picking out a specific mixed strategy/outcome. They’re then using a coin-flip + VNM AI just to reach that specific outcome, without the VNM AI actually embodying their joint preferences.
To put it another way, if your preferences can only be implemented by picking a VNM AI based on a coin flip, then your preferences are not VNM rational. The fact that any point on the Pareto frontier can be reached by a coin-flip + VNM AI seems more like a distraction from trying to figure out how to get an AI to correctly embody such preferences.
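To illustrate with a toy example (my own, not from the original discussion): suppose there are two agents and two pure outcomes, $A$ with utility pair $(1, 0)$ and $B$ with utility pair $(0, 1)$. The fair point $(0.5, 0.5)$ on the Pareto frontier is only reachable via the lottery $L = 0.5A + 0.5B$, i.e., by flipping a coin between an AI maximizing agent 1's utility (which picks $A$) and one maximizing agent 2's (which picks $B$). But any single VNM utility function $u$ must satisfy

$$u(L) = 0.5\,u(A) + 0.5\,u(B) \le \max\{u(A), u(B)\},$$

so a joint preference that strictly prefers $L$ to both $A$ and $B$ can't be represented by any one VNM utility function. That's the sense in which the coin-flip construction reaches the outcome without embodying the preferences.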
What do you mean when you say the agents “jointly have non-VNM preferences”? Is there a definition of joint preferences?