> The problem with VNM-style lotteries is that the probabilities involved have to come from somewhere besides the coherence theorems themselves. We need to have some other, external reason to think it’s useful to model the environment using these probabilities. That also means that the “probabilities” associated with the lottery are not necessarily the agent’s probabilities, at least not in the sense that the implied probabilities derived from coherence theorems are the agent’s.
Okay, then to make sure I’ve understood correctly: what you were saying in the quoted text is that you’ll often see an economist, etc., use coherence theorems informally to justify a particular utility maximization model for some system, with particular priors and conditionals. (As opposed to using coherence theorems to justify the idea of EU models generally, which is what I’d thought you meant.) And this is a problem because the particular priors and conditionals they pick can’t be justified solely by the coherence theorem(s) they cite.
> The problem with VNM-style lotteries is that the probabilities involved have to come from somewhere besides the coherence theorems themselves. We need to have some other, external reason to think it’s useful to model the environment using these probabilities.
To try to give an example of this: suppose I wanted to use coherence / consistency conditions alone to assign priors over the outcomes of a VNM lottery. Maybe the closest I could come to doing this would be to use maxent + transformation groups to assign an ignorance prior over those outcomes; and to do that, I’d need to additionally know the symmetries that are implied by my ignorance of those outcomes. But those symmetries are specific to the structure of my problem and are not contained in the coherence theorems themselves. So this information about symmetries would be what you would refer to as an “external reason to think it’s useful to model the environment using these probabilities”.
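To make the maxent-plus-symmetry move concrete, here is a minimal sketch I’m adding (the four-outcome lottery and the cyclic symmetry are hypothetical, chosen purely for illustration). It uses the fact that averaging a distribution over its symmetry group can only raise entropy, so if my ignorance is invariant under cyclic relabelings of the outcomes, the maxent ignorance prior is pinned down as uniform. The symmetry itself is the problem-specific, external input; nothing in the coherence theorems supplies it.

```python
import math

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Hypothetical setup: 4 lottery outcomes, with ignorance assumed invariant
# under cyclic relabelings of the outcomes (a problem-specific symmetry,
# NOT something the coherence theorems provide).
n = 4
group = [lambda p, k=k: p[k:] + p[:k] for k in range(n)]  # cyclic shifts

def symmetrize(p):
    """Average p over the group orbit, i.e. project onto invariant distributions."""
    out = [0.0] * n
    for g in group:
        q = g(list(p))
        out = [a + b / len(group) for a, b in zip(out, q)]
    return out

p = [0.7, 0.1, 0.1, 0.1]   # any initial guess at a prior
q = symmetrize(p)          # its group-invariant version

# Entropy can only go up under symmetrization (concavity of H), and the
# only cyclic-invariant distribution on n points is uniform, so maxent
# plus this symmetry singles out the uniform ignorance prior.
assert entropy(q) >= entropy(p)
assert all(abs(x - 1 / n) < 1e-12 for x in q)
```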
> … what you were saying in the quoted text is that you’ll often see an economist, etc., use coherence theorems informally to justify a particular utility maximization model for some system, with particular priors and conditionals. (As opposed to using coherence theorems to justify the idea of EU models generally, which is what I’d thought you meant.)
Correct.
This is a problem not because I want the choices fully justified, but rather because with many real-world systems it’s not clear exactly how I should set up my agent model. For instance, what’s the world model and utility function of an E. coli? Some choices would make the model tautological/trivial; I want my claim that e.g. an E. coli approximates a Bayesian expected utility maximizer to have nontrivial and correct implications. I want to know the sense in which an E. coli approximates a Bayesian expected utility maximizer, and a rock doesn’t. The coherence theorems tell us how to do that: they provide nontrivial sufficient conditions (e.g. Pareto optimality) which imply (and are implied by) particular utilities and world models.
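A toy illustration of why the coherence conditions are nontrivial (this is my own sketch, not an example from the discussion; the goods A, B, C and the fee are hypothetical): an agent with cyclic preferences will pay a small fee for each “upgrade”, so a trader who cycles it back to its starting good extracts unbounded money. Coherent preferences rule this out, which is part of the nonvacuous content of calling something an expected utility maximizer.

```python
# Cyclic (hence incoherent) preferences: A over B, B over C, C over A,
# stored as (preferred, dispreferred) pairs.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}
fee = 0.01

def trade(holding, offered, paid):
    """Agent accepts a swap (for a fee) iff it prefers the offered good."""
    if (offered, holding) in prefers:
        return offered, paid + fee
    return holding, paid

holding, paid = "C", 0.0
for offered in ["B", "A", "C"] * 100:   # cycle the agent 100 times
    holding, paid = trade(holding, offered, paid)

# The agent ends holding exactly what it started with, 300 fees poorer.
assert holding == "C"
assert abs(paid - 3.0) < 1e-9
```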
> To try to give an example of this: suppose I wanted to use coherence / consistency conditions alone to assign priors over the outcomes of a VNM lottery. …
>
> Is this a correct interpretation?
Your example is correct, though it is not the usual way of obtaining probabilities from coherence conditions. (Well, ok, in actual practice it kinda is the usual way, because existing coherence theorems are pretty weak. But it’s not the usual way used by people who talk about coherence theorems a lot.) A more typical example: I can look at a chain of options on a stock, and use the prices of those options to back out market-implied probabilities for each possible stock price at expiry. Many coherence theorems do basically the same thing, but “prices” are derived from the trade-offs an agent accepts, rather than from a market.
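Here is a minimal sketch of that options example (my own illustration, assuming zero interest rates and a discrete set of possible expiry prices): the cost of a tight butterfly spread around strike K, the call at K minus twice-adjacent-strike combination below, is proportional to the implied probability of the stock finishing at K. The particular expiry prices and probabilities are made up to keep the example self-checking.

```python
# Hypothetical market: the stock finishes at one of these prices, with
# these (hidden) market-implied probabilities.
prices_at_expiry = [90, 95, 100, 105, 110]
true_probs      = [0.05, 0.20, 0.50, 0.20, 0.05]
dk = 5  # strike spacing

def call_price(strike):
    """Undiscounted call value under the market's implied distribution."""
    return sum(p * max(s - strike, 0)
               for s, p in zip(prices_at_expiry, true_probs))

# Back out the implied probability at each price from the option chain:
# butterfly cost = C(K - dk) - 2 C(K) + C(K + dk), which equals
# dk * P(S = K) for a distribution supported on the strike grid.
implied = []
for k in prices_at_expiry:
    butterfly = call_price(k - dk) - 2 * call_price(k) + call_price(k + dk)
    implied.append(butterfly / dk)   # normalize by the payoff peak

# The option prices alone recover the full implied distribution.
assert all(abs(a - b) < 1e-12 for a, b in zip(implied, true_probs))
```

This is the discrete version of the Breeden–Litzenberger observation that the risk-neutral density is the second derivative of call price with respect to strike.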
> A more typical example: I can look at a chain of options on a stock, and use the prices of those options to back out market-implied probabilities for each possible stock price at expiry.
Gotcha, this is a great example. And the fundamental reasons why this works are 1) the immediate incentive that you can earn higher returns by pricing the option more correctly; combined with 2) the fact that the agents who are assigning these prices have (on a dollar-weighted-average basis) gone through multiple rounds of selection for higher returns.
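A toy simulation of point 2 (my own sketch, not from the exchange; the traders’ beliefs, the even odds, and the 0.7 frequency are all hypothetical): traders with different subjective probabilities repeatedly bet Kelly-style on an event whose long-run frequency is 0.7. Wealth concentrates on the best-calibrated trader, so the dollar-weighted “market belief” drifts toward the true frequency.

```python
# Three traders with different subjective probabilities of the event.
beliefs = [0.55, 0.70, 0.90]
wealth  = [1.0, 1.0, 1.0]

def kelly_fraction(q):
    """Kelly bet at even odds: stake 2q - 1 of wealth when q > 1/2."""
    return max(2 * q - 1, 0.0)

def market_belief():
    """Dollar-weighted average belief across traders."""
    total = sum(wealth)
    return sum(q * w for q, w in zip(beliefs, wealth)) / total

start = market_belief()
outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0] * 50   # long-run frequency 0.7
for hit in outcomes:
    for i, q in enumerate(beliefs):
        f = kelly_fraction(q)
        wealth[i] *= (1 + f) if hit else (1 - f)

# Selection on returns pulls the dollar-weighted belief toward 0.7: the
# q = 0.70 trader's wealth comes to dominate the average.
assert abs(market_belief() - 0.7) < abs(start - 0.7)
assert abs(market_belief() - 0.7) < 0.01
```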
(I wonder to what extent any selection mechanism ultimately yields agents with general reasoning capabilities, given tight enough competition between individuals in the selected population? Even if the environment doesn’t start out especially complicated, if the individuals are embedded in it and are interacting with one another, after a few rounds of selection most of the complexity an individual perceives is going to be due to its competitors. Not everything is like this — e.g., training a neural net is a form of selection without competition — but it certainly seems to describe many of the more interesting bits of the world.)
Thanks for the clarifications here btw — this has really piqued my interest in selection theorems as a research angle.