only two of those four are relevant to coherence. The main problem is that the axioms relevant to coherence (acyclicity and completeness) do not say anything at all about probability.
It seems to me that the independence axiom is a coherence condition, unless I misunderstand what you mean by coherence?
correctly point out problems with VNM
I’m curious what problems you have in mind, since I don’t think VNM has problems that don’t apply to similar coherence theorems.
VNM utility stipulates that agents have preferences over “lotteries” with known, objective probabilities of each outcome. The probabilities are assumed to be objectively known from the start. The Bayesian coherence theorems do not assume probabilities from the start; they derive probabilities from the coherence criteria, and those probabilities are specific to the agent.
One can construct lotteries with probabilities that are pretty well understood (e.g. flipping coins that we have accumulated a lot of evidence are fair), and you can restrict attention to lotteries only involving uncertainty coming from such sources. One may then get probabilities for other, less well-understood sources of uncertainty by comparing preferences involving such uncertainty to preferences involving easy-to-quantify uncertainty (e.g. if A is preferred to B, and you’re indifferent between 60%A+40%B and “A if X, B if not-X”, then you assign probability 60% to X). Perhaps not quite as philosophically satisfying as deriving probabilities from scratch, but this doesn’t seem like a fatal flaw in VNM to me.
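The elicitation procedure above can be sketched in a few lines of code. This is only a toy model: the agent is assumed to be an expected-utility maximizer with a hidden subjective probability for X, and all the names and numbers (`U_A`, `hidden_p_x`, etc.) are invented for the illustration. We recover P(X) by binary-searching for the mixture weight p at which the agent is indifferent between p·A + (1−p)·B and “A if X, B if not-X”:

```python
# Toy model of the elicitation described above (all names/values illustrative).
U_A, U_B = 1.0, 0.0          # utilities, with A preferred to B
hidden_p_x = 0.60            # the agent's subjective P(X), unknown to the elicitor

def prefers_mixture(p):
    """Does the agent strictly prefer the lottery p*A + (1-p)*B
    to the conditional bet "A if X, B if not-X"?"""
    eu_mixture = p * U_A + (1 - p) * U_B
    eu_conditional = hidden_p_x * U_A + (1 - hidden_p_x) * U_B
    return eu_mixture > eu_conditional

def elicit_probability(tol=1e-9):
    """Binary-search for the indifference point; that mixture weight is P(X)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_mixture(mid):
            hi = mid          # mixture already preferred: P(X) lies below mid
        else:
            lo = mid
    return (lo + hi) / 2

print(round(elicit_probability(), 6))  # recovers 0.6
```

The search recovers the agent’s hidden probability purely from preference queries, which is the sense in which preferences over objective lotteries pin down subjective probabilities for everything else.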
I do not expect agent-like systems in the wild to be pushed toward VNM expected utility maximization. I expect them to be pushed toward Bayesian expected utility maximization.
I understood those as being synonyms. What’s the difference?
I would argue that independence of irrelevant alternatives is not a real coherence criterion. It looks like one at first glance: if it’s violated, then you get an Allais Paradox-type situation where someone pays to throw a switch and then pays to throw it back. The problem is, the “arbitrage” of throwing the switch back and forth hinges on the assumption that the stated probabilities are objectively correct. It’s entirely possible for someone to come along who believes that throwing the switch changes the probabilities in a way that makes it a good deal. Then there’s no real arbitrage, it just comes down to whose probabilities better match the outcomes.
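A small numeric illustration of why the switch “arbitrage” is model-dependent (all numbers here are invented for the example): someone pays a fee to throw a switch that changes the stated chance of a $10 prize. Under the stated probabilities the fee looks like a sure loss, so the counterparty appears to have an arbitrage; under the payer’s own probabilities, throwing the switch is a good deal. Which side actually profits depends entirely on whose model matches reality:

```python
# Illustrative numbers only: a fee paid to change the chance of a $10 prize.
PRIZE = 10.0
FEE = 1.5                                   # price paid to throw the switch

stated = {"before": 0.70, "after": 0.80}    # probabilities as stated
payer = {"before": 0.70, "after": 0.90}     # payer thinks the switch helps more

def value_of_switch(model):
    """Expected gain from throwing the switch, net of the fee, under a model."""
    return (model["after"] - model["before"]) * PRIZE - FEE

print(round(value_of_switch(stated), 2))  # -0.5: a sure loss under stated odds
print(round(value_of_switch(payer), 2))   # +0.5: a good deal under the payer's odds
```

There is no bet here that wins regardless of the probabilities, so nothing in this setup qualifies as true arbitrage.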
My intuition for this not being real arbitrage comes from finance. In finance, we’d call it “statistical arbitrage”: it only works if the probabilities are correct. The major lesson of the collapse of Long-Term Capital Management in the 1990s is that statistical arbitrage is definitely not real arbitrage. The whole point of true arbitrage is that it does not depend on your statistical model being correct.
This directly leads to the difference between VNM and Bayesian expected utility maximization. In VNM, agents have preferences over lotteries: the probabilities of each outcome are inputs to the preference function. In Bayesian expected utility maximization, the only inputs to the preference function are the choices available to the agent—figuring out the probabilities of each outcome under each choice is the agent’s job.
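The distinction can be made concrete as a difference in type signatures (a sketch; all names here are illustrative, not anyone’s actual formalism): a VNM agent ranks lotteries whose probabilities are handed to it from outside, while a Bayesian agent ranks raw actions and supplies the probabilities itself from an internal world model.

```python
# Type-signature sketch of VNM vs Bayesian expected utility (names illustrative).
from typing import Callable

Lottery = list[tuple[float, str]]          # [(probability, outcome), ...]

def vnm_value(lottery: Lottery, utility: Callable[[str], float]) -> float:
    """VNM: probabilities arrive as inputs, bundled into the lottery."""
    return sum(p * utility(outcome) for p, outcome in lottery)

def bayesian_value(action: str,
                   world_model: Callable[[str], Lottery],
                   utility: Callable[[str], float]) -> float:
    """Bayesian: only the action is an input; the agent's own world model
    turns it into a distribution over outcomes before taking expectations."""
    return vnm_value(world_model(action), utility)

u = {"win": 1.0, "lose": 0.0}.get
beliefs = {"safe": [(1.0, "lose")], "risky": [(0.4, "win"), (0.6, "lose")]}
best = max(beliefs, key=lambda a: bayesian_value(a, beliefs.get, u))
print(best)  # "risky": 0.4 expected utility beats the sure "lose"
```

Note that `bayesian_value` is just `vnm_value` composed with a world model, which is exactly the “inference layer with VNM on top” picture discussed below.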
(I do agree that we can set up situations where objectively correct probabilities are a reasonable model, e.g. in a casino, but the point of coherence theorems is to be pretty generally applicable. A theorem only relevant to casinos isn’t all that interesting.)
Ok, I see what you mean about independence of irrelevant alternatives only being a real coherence condition when the probabilities are objective (or otherwise known to be equal because they come from the same source, even if there isn’t an objective way of saying what their common probability is).
But I disagree that this makes VNM only applicable to settings in which all sources of uncertainty have objectively correct probabilities. As I said in my previous comment, you only need there to exist some source of objective probabilities, and you can then use preferences over lotteries involving objective probabilities and preferences over related lotteries involving other sources of uncertainty to determine what probability the agent must assign for those other sources of uncertainty.
Re: the difference between VNM and Bayesian expected utility maximization, I take it from the word “Bayesian” that the way you’re supposed to choose between actions does involve first coming up with probabilities of each outcome resulting from each action, and from “expected utility maximization”, that these probabilities are to be used in exactly the way the VNM theorem says they should be. Since the VNM theorem does not make any assumptions about where the probabilities came from, these still sound essentially the same, except with Bayesian expected utility maximization being framed to emphasize that you have to get the probabilities somehow first.
Let me repeat back your argument as I understand it.
If we have a Bayesian utility maximizing agent, that’s just a probabilistic inference layer with a VNM utility maximizer sitting on top of it. So our would-be arbitrageur comes along with a source of “objective” randomness, like a quantum random number generator. The arbitrageur wants to interact with the VNM layer, so it needs to design bets to which the inference layer assigns some specific probability. It does that by using the “objective” randomness source in the bet design: just incorporate that randomness in such a way that the inference layer assigns the probabilities the arbitrageur wants.
This seems correct insofar as it applies. It is a useful perspective, and not one I had thought much about before this, so thanks for bringing it in.
The main issue I still don’t see resolved by this argument is the architecture question. The coherence theorems only say that an agent must act as if they perform Bayesian inference and then choose the option with highest expected value based on those probabilities. In the agent’s actual internal architecture, there need not be separate modules for inference and decision-making (a Kalman filter, which acts as if it performs Bayesian updates while internally just running a linear recurrence, is one example). If we can’t neatly separate the two pieces somehow, then we don’t have a good way to construct lotteries with specified probabilities, so we don’t have a way to treat the agent as a VNM-type agent.
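The non-separability point can be illustrated with a scalar steady-state control sketch (gains and dynamics invented for the example): a controller built from an explicit Kalman filter plus a feedback rule on the estimate has exactly the same input-output behavior as a single linear recurrence on raw observations, in which no probabilities and no inference module are visible.

```python
# Scalar sketch: "inference module + decision module" compiles into one recurrence.
L = 0.6            # steady-state Kalman gain (inference module, illustrative)
K = 0.8            # feedback gain on the estimate (decision module, illustrative)
A, B = 1.0, 1.0    # scalar dynamics x' = A x + B u

def modular_controller():
    """Explicit two-module form: estimate the state, then act on the estimate."""
    x_hat = 0.0
    def step(y):
        nonlocal x_hat
        x_hat = x_hat + L * (y - x_hat)     # Kalman update (inference)
        u = -K * x_hat                      # act on the estimate (decision)
        x_hat = A * x_hat + B * u           # predict forward
        return u
    return step

def collapsed_controller():
    """Same input-output behavior as one recurrence z' = a z + b y:
    the inference/decision boundary has been compiled away."""
    z = 0.0
    a = (A - B * K) * (1 - L)               # combined internal gain
    b = (A - B * K) * L                     # combined observation gain
    def step(y):
        nonlocal z
        u = -K * ((1 - L) * z + L * y)
        z = a * z + b * y
        return u
    return step

m, c = modular_controller(), collapsed_controller()
ys = [1.0, 0.5, -0.2, 0.3]
print([round(m(y), 6) for y in ys] == [round(c(y), 6) for y in ys])  # True
```

From the outside there is no fact of the matter about where inference ends and decision-making begins in the collapsed version, which is why the arbitrageur’s construction needs the separation to exist.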
This directly follows from the original main issue: VNM utility theory is built on the idea that probabilities live in the environment, not in the agent. If there’s a neat separation between the agent’s inference and decision modules, then we can redefine the inference module to be part of the environment, but that neat separation need not always exist.
EDIT: Also, I should point out explicitly that VNM alone doesn’t tell us why we ever expect probabilities to be relevant to anything in the first place. If we already have a Bayesian expected utility maximizer with separate inference and decision modules, then we can model that as an inference layer with VNM on top, but then we don’t have a theorem telling us why inference layers should magically appear in the world.
Why do we expect (approximate) expected utility maximizers to show up in the real world? That’s the main question coherence theorems answer, and VNM cannot answer that question unless all of the probabilities involved are ontologically fundamental.
I think you’re underestimating VNM here.