> Expected utility is based on epistemic uncertainty
What? So if you had accurate information about the probability distribution of outcomes, then you couldn’t use expected utility? I don’t think that’s right. In fact, it’s exactly the reverse. Expected utility doesn’t really work when you have epistemic uncertainty.
What does “accurate information about the probability distribution” mean? Probability is in the mind.
If I’m using subjective probabilities at all – if I don’t know the exact outcome – I’m working under uncertainty and using expected utility. If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
I don’t know that this affects your point, but I think we can make good sense of objective probabilities as being something other than either subjective probabilities or objective frequencies. See, for example, this.
> If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
You still need to multiply the utility of those events by their probability. Objective frequency is not as objective as it may seem: it’s just the point at which the posterior is no longer expected to change, given more information. Alternatively, it’s “physical probability”, a parameter in your model that has nothing to do with subjective probability or expected utility maximization, and has a status similar to that of, say, mass.
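To make the arithmetic concrete, here is a minimal sketch of “multiply the utility of those events by their probability” — the outcomes, probabilities, and utilities are all made up for illustration:

```python
# Toy lottery: each outcome has a (probability, utility) pair.
# All numbers are invented for illustration only.
outcomes = {
    "win_big":   (0.1, 100.0),
    "win_small": (0.4, 10.0),
    "lose":      (0.5, -5.0),
}

def expected_utility(lottery):
    """Probability-weighted average of outcome utilities."""
    return sum(p * u for p, u in lottery.values())

# 0.1*100 + 0.4*10 + 0.5*(-5) = 11.5
eu = expected_utility(outcomes)
print(eu)
```

Whether the weights are read as subjective credences or known frequencies, the computation itself is the same; the dispute in the thread is about what the weights mean.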
Expected utility is based on epistemic uncertainty, which has nothing to do with objective frequency.

You said:

> If you have accurate information about the probability distribution, you don’t have epistemic uncertainty.

What was your original comment supposed to mean?
> If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
This is the situation that the theory of expected utility was designed for. I think your claim is exactly backwards.
> If you have accurate information about the probability distribution, you don’t have epistemic uncertainty.
This is not what is meant by epistemic uncertainty. In the framework of Bayesian probability theory, you start with a fixed, exactly defined prior distribution. The uncertainty comes from working with large events on the state space — some of them given as the states of variables — as opposed to individual states. See Probability Space and Random Variable; E. T. Jaynes (1990), “Probability Theory as Logic”, may also be helpful.
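The picture described here — a fixed prior over an exactly defined state space, with uncertainty meaning you only know which event you are in, not which individual state — can be sketched in a few lines (a toy two-coin example, purely illustrative):

```python
# Illustrative sketch, not canonical: a fixed, exactly defined prior
# over a small state space, where "uncertainty" means working with a
# large event rather than knowing the individual state.
from itertools import product
from fractions import Fraction

# State space: all outcomes of two fair coin flips.
states = list(product("HT", repeat=2))
prior = {s: Fraction(1, 4) for s in states}  # fixed, exact prior

# A "variable" is a function on states; fixing its value carves out
# an event (a set of states), not a single state.
def first_flip(state):
    return state[0]

event = {s for s in states if first_flip(s) == "H"}
event_prob = sum(prior[s] for s in event)
print(event_prob)  # prints 1/2
```

The prior itself is never uncertain here; the agent’s uncertainty is entirely about where in the state space it sits, which is the distinction the comment is drawing.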
According to Wikipedia, that is what is meant by epistemic uncertainty. It says that one type of uncertainty is
> “1. Uncertainty due to variability of input and/or model parameters when the characterization of the variability is available (e.g., with probability density functions, pdf),”
and that all other types of uncertainty are epistemic uncertainty.
And here’s a quote from “Separating natural and epistemic uncertainty in flood frequency analysis” (Bruno Merz and Annegret H. Thieken, Journal of Hydrology, 2004), which also agrees with me:
> “Natural uncertainty stems from variability of the underlying stochastic process. Epistemic uncertainty results from incomplete knowledge about the process under study.”
This “natural uncertainty” is a property of distributions, while the epistemic uncertainty you refer to here corresponds to what I meant. When you have incomplete knowledge about the process under study, you are working with one of multiple possible processes: you are operating inside a wide event that includes all these possibilities. I suspect you are still confusing the prior on the global state space with marginal probability distributions on variables. Follow the links I gave before.
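The distinction being drawn — variability within one known process versus not knowing which process you face — can be illustrated with a toy mixture (the biases and weights below are invented for the example):

```python
# Illustrative sketch: two candidate coin-flipping processes
# ("natural" uncertainty lives inside each one), plus epistemic
# weights over which process is actually in play.
processes = {"fair": 0.5, "biased": 0.8}   # P(heads) within each process
belief = {"fair": 0.7, "biased": 0.3}      # made-up epistemic weights

# Marginalizing over processes is working inside the "wide event"
# that includes all the possible processes at once.
# 0.7*0.5 + 0.3*0.8 = 0.59
marginal_heads = sum(belief[m] * processes[m] for m in processes)
print(marginal_heads)
```

Within either single process the 0.5 or 0.8 is pure “natural” variability; the spread between 0.5 and 0.8, weighted by belief, is the epistemic part.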