Let me repeat back your argument as I understand it.
If we have a Bayesian utility-maximizing agent, that's just a probabilistic inference layer with a VNM utility maximizer sitting on top of it. So our would-be arbitrageur comes along with a source of "objective" randomness, like a quantum random number generator. The arbitrageur wants to interact with the VNM layer, so it needs to design bets to which the inference layer assigns some specific probability. It does that by using the "objective" randomness source in the bet design: incorporate that randomness in such a way that the inference layer assigns whatever probabilities the arbitrageur wants.
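To make sure I'm following, here's a toy sketch of that construction as I understand it. All of the names (`qrng_draw`, `InferenceLayer`, `make_bet`, etc.) are mine, purely for illustration; the key assumption is that the inference layer is calibrated about the external randomness source, so a bet whose payout event is defined directly on that source gets assigned exactly the designed probability:

```python
import random

def qrng_draw():
    """Stand-in for an 'objective' randomness source (e.g. a quantum RNG)."""
    return random.random()

class InferenceLayer:
    def probability(self, bet):
        # For a bet defined purely on the external source, any calibrated
        # inference layer assigns exactly the probability designed into it.
        return bet["design_probability"]

class VNMLayer:
    def __init__(self, utility):
        self.utility = utility

    def expected_utility(self, bet, inference):
        p = inference.probability(bet)
        return p * self.utility(bet["win"]) + (1 - p) * self.utility(bet["lose"])

def make_bet(p, win, lose):
    """The arbitrageur designs a bet paying `win` iff qrng_draw() < p."""
    return {"design_probability": p, "win": win, "lose": lose}

# The arbitrageur hands the agent a lottery with a specified probability of
# 0.3, and the VNM layer evaluates it through the inference layer.
agent_inference = InferenceLayer()
agent_vnm = VNMLayer(utility=lambda x: x)
bet = make_bet(0.3, win=10.0, lose=-2.0)
print(agent_vnm.expected_utility(bet, agent_inference))  # 0.3*10 + 0.7*(-2) = 1.6
```

The load-bearing step is that `InferenceLayer.probability` is a separately addressable module the arbitrageur can target, which is exactly where my remaining objection comes in below.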
This seems correct insofar as it applies. It is a useful perspective, and not one I had thought much about before this, so thanks for bringing it in.
The main issue I still don't see resolved by this argument is the architecture question. The coherence theorems only say that an agent must act as if it performs Bayesian inference and then chooses the option with the highest expected value under those probabilities. In the agent's actual internal architecture, there need not be separate modules for inference and decision-making (a Kalman filter is one example). If we can't neatly separate the two pieces somehow, then we don't have a good way to construct lotteries with specified probabilities, so we don't have a way to treat the agent as a VNM-type agent.
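To illustrate what I mean, here's a minimal sketch (my own construction, with assumed precomputed gains, not anything from your comment): a certainty-equivalent controller in which the steady-state Kalman gain and the feedback gain have been fused into a single update. Behaviorally the agent is a Bayesian expected-utility maximizer, but the compiled code exposes no module that can be queried for the probability of an arbitrary bet:

```python
import numpy as np

A, B, C = 1.0, 1.0, 1.0   # scalar linear dynamics: x' = A*x + B*u + noise
L = 0.6                   # steady-state Kalman gain (assumed precomputed)
K = 0.5                   # feedback gain (assumed precomputed)

def agent_step(xhat, y):
    """One fused inference-plus-decision step.

    Internally this *is* a Bayesian update followed by an action choice,
    but only the composed map (y, xhat) -> (u, xhat') is exposed; the
    posterior never appears as a separate queryable object.
    """
    xhat = xhat + L * (y - C * xhat)   # implicit Bayesian update
    u = -K * xhat                      # implicit expected-value-maximizing action
    xhat = A * xhat + B * u            # predict forward
    return u, xhat

# Run the agent on noisy observations of a drifting state.
rng = np.random.default_rng(0)
x, xhat = 2.0, 0.0
for _ in range(5):
    y = C * x + 0.1 * rng.standard_normal()
    u, xhat = agent_step(xhat, y)
    x = A * x + B * u + 0.05 * rng.standard_normal()
    print(f"action={u:+.3f}  estimate={xhat:+.3f}  true state={x:+.3f}")
```

The point of the sketch: the arbitrageur's construction needs somewhere to plug in a lottery with a designed probability, and in an architecture like this there is no such interface to target.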
This directly follows from the original main issue: VNM utility theory is built on the idea that probabilities live in the environment, not in the agent. If there’s a neat separation between the agent’s inference and decision modules, then we can redefine the inference module to be part of the environment, but that neat separation need not always exist.
EDIT: Also, I should point out explicitly that VNM alone doesn’t tell us why we ever expect probabilities to be relevant to anything in the first place. If we already have a Bayesian expected utility maximizer with separate inference and decision modules, then we can model that as an inference layer with VNM on top, but then we don’t have a theorem telling us why inference layers should magically appear in the world.
Why do we expect (approximate) expected utility maximizers to show up in the real world? That’s the main question coherence theorems answer, and VNM cannot answer that question unless all of the probabilities involved are ontologically fundamental.