But it has to do with the f-values of all the different versions of you that there might be in the future (or in different Everett branches, or whatever).
I think this is part of the problem. Expected utility is based on epistemic uncertainty, which has nothing to do with objective frequency, including objective frequency across Everett branches or the like.
Nothing to do with objective frequency? Surely that’s wrong: e.g., as you gain information, your subjective probabilities ought to converge on the objective frequencies.
But I agree: the relevant sense of expectation here has to be the subjective one (since all an agent can use in deciding what to prefer is its own subjective probabilities, not whatever objective ones there may be), and this does seem like it’s a problem with what Phil’s saying.
the relevant sense of expectation here has to be the subjective one (since all an agent can use in deciding what to prefer is its own subjective probabilities, not whatever objective ones there may be), and this does seem like it’s a problem with what Phil’s saying.
It’s just as much a problem with all of decision theory, and all expectation maximization, as with anything I’m saying. This may be a difficulty, but it’s completely orthogonal to the issue at hand, since all of the alternatives have that same weakness.
I think the point is that, if probability is only in the mind, the analogy between averaging over future yous and averaging over other people is weaker than it might initially appear.
It sort of seems like there might be an analogy if you’re talking about averaging over versions of you that end up in different Everett branches. But if we’re talking about subjective probabilities, then there’s no sense in which these “future yous” exist, except in your mind, and it’s more difficult to see the analogy between averaging over them, and averaging over actual people.
Expected utility is based on epistemic uncertainty
What? So if you had accurate information about the probability distribution of outcomes, then you couldn’t use expected utility? I don’t think that’s right. In fact, it’s exactly the reverse. Expected utility doesn’t really work when you have epistemic uncertainty.
What does “accurate information about the probability distribution” mean? Probability is in the mind.
If I’m using subjective probabilities at all – if I don’t know the exact outcome – I’m working under uncertainty and using expected utility. If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
I don’t know that this affects your point, but I think we can make good sense of objective probabilities as being something else than either subjective probabilities or objective frequencies. See for example this.
If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
You still need to multiply the utility of those events by their probability. Objective frequency is not as objective as it may seem; it’s just the point at which the posterior is no longer expected to change, given more information. Or, alternatively, it’s “physical probability”, a parameter in your model that has nothing to do with subjective probability and expected utility maximization, and that has a status similar to that of, say, mass.
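To make that concrete, here is a minimal sketch (the actions, outcomes and numbers are invented for illustration, not taken from the thread) of the point that the utilities of outcomes still get weighted by their probabilities, whether those probabilities are read as subjective credences or as “physical” parameters of the model:

```python
# Minimal sketch with made-up numbers: picking an action still means
# weighting each outcome's utility by its probability and maximizing
# the sum, whether that probability is a credence or a model parameter.

outcome_utility = {"win": 100.0, "lose": -20.0, "nothing": 0.0}

# P(outcome | action); "objective frequency" would enter only here,
# as a parameter of the model.
p_outcome_given_action = {
    "bet":  {"win": 0.3, "lose": 0.7},
    "pass": {"nothing": 1.0},
}

def expected_utility(action):
    return sum(p * outcome_utility[o]
               for o, p in p_outcome_given_action[action].items())

best = max(p_outcome_given_action, key=expected_utility)
print(best, expected_utility(best))   # bet 16.0
```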
Expected utility is based on epistemic uncertainty, which has nothing to do with objective frequency
If you have accurate information about the probability distribution, you don’t have epistemic uncertainty.
What was your original comment supposed to mean?
If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
This is the situation that the theory of expected utility was designed for. I think your claim is exactly backwards.
If you have accurate information about the probability distribution, you don’t have epistemic uncertainty.
This is not what is meant by epistemic uncertainty. In the framework of Bayesian probability theory, you start with a fixed, exactly defined prior distribution. The uncertainty comes from working with large events on the state space, some of them given as the states of variables, rather than with individual states. See Probability Space and Random Variable; E.T. Jaynes (1990), “Probability Theory as Logic”, may also be helpful.
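For what it’s worth, here is one way to read that claim, as a small sketch; the variable names and numbers are mine, not anything from the thread:

```python
# A sketch of the distinction, under my reading of it: the prior over
# the full state space is fixed and exact; "uncertainty" means you only
# know that the true state lies in some large event (e.g. a setting of
# one variable), not which individual state it is.

from itertools import product

biases = [0.25, 0.75]                  # possible values of a "bias" variable
flips = ["H", "T"]                     # possible values of a "flip" variable
states = list(product(biases, flips))  # individual states: (bias, flip)

def prior(state):
    bias, flip = state
    p_bias = 0.5                       # exact, fixed prior over biases
    p_flip = bias if flip == "H" else 1 - bias
    return p_bias * p_flip

# An individual state is fully specified:
print(prior((0.75, "H")))              # 0.5 * 0.75 = 0.375

# "The flip came up heads" is a large event: a set of states, one per bias.
heads = [s for s in states if s[1] == "H"]
p_heads = sum(prior(s) for s in heads)
print(p_heads)                         # 0.5

# Conditioning on that event updates beliefs about the bias variable
# without the underlying prior ever changing.
print(prior((0.75, "H")) / p_heads)    # posterior P(bias = 0.75 | heads) = 0.75
```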
According to Wikipedia, that is what is meant by epistemic uncertainty. It says that one type of uncertainty is
“1. Uncertainty due to variability of input and / or model parameters when the characterization of the variability is available (e.g., with probability density functions, pdf),”
and that all other types of uncertainty are epistemic uncertainty.
And here’s a quote from “Separating natural and epistemic uncertainty in flood frequency analysis”, Bruno Merz and Annegret H. Thieken, J. of Hydrology 2004, which also agrees with me:
“Natural uncertainty stems from variability of the underlying stochastic process. Epistemic uncertainty results from incomplete knowledge about the process under study.”
This “natural uncertainty” is a property of distributions, while the epistemic uncertainty you refer to here corresponds to what I meant. When you have incomplete knowledge about the process under study, you are working with one of multiple possible processes: you are operating inside a wide event that includes all of those possibilities. I suspect you are still confusing the prior on the global state space with the marginal probability distributions on variables. Follow the links I gave before.
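A hedged sketch of how I read that distinction (the regimes and numbers are invented): the “natural” spread lives inside a fixed process, while the epistemic part is a prior over which process you are actually in, i.e. a wide event on the global state space:

```python
# Sketch (invented regimes and numbers): "natural" variability is the
# spread of outcomes within a fixed process; epistemic uncertainty is a
# prior over which process is the true one -- a wide event on the global
# state space containing several candidate processes.

import random
random.seed(0)

# Two candidate processes, e.g. two possible flood regimes.
processes = {
    "regime_A": lambda: random.gauss(10.0, 2.0),   # natural variability
    "regime_B": lambda: random.gauss(14.0, 2.0),
}

# Epistemic layer: a prior over which process is operating.
belief = {"regime_A": 0.5, "regime_B": 0.5}

def sample_outcome():
    # First layer: which process (epistemic); second layer: that
    # process's own randomness (natural).
    name = random.choices(list(belief), weights=list(belief.values()))[0]
    return processes[name]()

print(sample_outcome())

# Learning about the process concentrates `belief` on one regime, but it
# does not remove the spread of outcomes inside that regime.
```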