What if I told you that the balls were either all green or all blue?
Hmm. Well, with the interval prior I had in mind (footnote 7), this would result in very high (but not complete) ambiguity. My guess is that’s a limitation of two dimensions: it’ll handle updating on draws from the urn, but not “internals” like that. But I’m guessing. (1/2 ± 1/6) seems like a reasonable prior interval for a structureless event.
So in the standard Ellsberg paradox, you wouldn’t act non-Bayesianly if you were told “The reason I’m asking you to choose between red and green rather than red and blue is because of a coin flip.”
If I take the statement at face value, sure.
but you’d still prefer red if all three options were allowed?
Yes, but again I could flip a coin to decide between green and blue then.
This seems to be going against the whole idea of probability being about mental states;
Well, okay. I don’t think this method has any metaphysical consequences, so I should be able to adopt your stance on probability. I’d say (for the sake of argument) that the probability intervals are still about the mental states that I think you mean. However, these mental states still leave the correct course of action underdetermined, and the virtual interval represents one degree of freedom. There is no rule for selecting the prior virtual interval. 0 is the obvious value, but any initial value is still dynamically consistent.
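To make that degree of freedom concrete, here is a deliberately crude sketch in Python. It uses a single virtual point instead of the full virtual interval, and the commit-at-first-use rule is invented for the illustration, so don’t read it as the actual method; the point is only that unambiguous choices are stateless, while ambiguous ones consult (and set) an internal variable.

```python
# Toy sketch only: a single "virtual point" stands in for the virtual
# interval, and the commit-at-first-use rule is made up for this example.

class IntervalAgent:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi  # probability interval for the event
        self.virtual = None        # internal state; None = not yet fixed

    def accept_bet(self, price):
        """Accept a bet paying 1 if the event occurs, at this price?"""
        if price < self.lo:
            return True   # unambiguous: favourable at every point of the interval
        if price > self.hi:
            return False  # unambiguous: unfavourable at every point
        if self.virtual is None:
            # First ambiguous choice: fix the state variable. Any value
            # in [lo, hi] is dynamically consistent; midpoint is one choice.
            self.virtual = (self.lo + self.hi) / 2
        return price < self.virtual  # later ambiguous choices reuse the state

agent = IntervalAgent(lo=1/3, hi=2/3)
print(agent.accept_bet(0.2))   # True: unambiguous, state untouched
print(agent.accept_bet(0.55))  # False: ambiguous, resolved by the state variable
print(agent.accept_bet(0.45))  # True: consistent with the earlier ambiguous choice
```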
My guess is that’s a limitation of two dimensions: it’ll handle updating on draws from the urn, but not “internals” like that. But I’m guessing. (1/2 ± 1/6) seems like a reasonable prior interval for a structureless event.
Would a single ball that is either green or blue work?
0 is the obvious value, but any initial value is still dynamically consistent.
I agree that your decision procedure is consistent, not susceptible to Dutch books, etc.
I don’t think this method has any metaphysical consequences, so I should be able to adopt your stance on probability. I’d say (for the sake of argument) that the probability intervals are still about the mental states that I think you mean.
I don’t think this is true. Whether or not you flip the coin, you have the same information about the number of green balls in the urn, so, while the total information is different, the part about the green balls is the same. In order to follow your decision algorithm while believing that probability is about incomplete information, you have to use all your knowledge in every decision, even knowledge that is ‘uncorrelated’ (if I can use that word for something that isn’t being assigned a probability) with what you are betting on. This is consistent with the letter of what I wrote, but I think that a bet about whether a green ball will be drawn next should use your knowledge about the number of green balls in the urn, not your entire mental state.
Would a single ball that is either green or blue work?
That still seems like a structureless event. No abstract example comes to mind, but there must be concrete cases where Bayesians disagree wildly about the prior probability of an event (95%). Some of these cases should be candidates for very high (but not complete) ambiguity.
I think that a bet about whether a green ball will be drawn next should use your knowledge about the number of green balls in the urn, not your entire mental state.
I think you’re really saying two things: the correct decision is a function of (present, relevant) probability, and probability is in the mind. I’d say the former is your key proposition. It would be sufficient to rule out that an agent’s internal variables, like the virtual interval, could have any effect. I’d say the metaphysical status of probability is a red herring (but I would also accept a 50:50 green-blue herring).
Of course even for Bayesians there are equiprobable options, so decisions can’t be entirely a function of probability. More precisely, your key proposition is that probabilistic indecision has to be transitive. Ambiguity would be an example of intransitive indecision.
Would a single ball that is either green or blue work?
That still seems like a structureless event.
Okay.
I think you’re really saying two things: the correct decision is a function of (present, relevant) probability, and probability is in the mind.
Of course even for Bayesians there are equiprobable options, so decisions can’t be entirely a function of probability.
Well, once you assign probabilities to everything, you’re mostly a Bayesian already. I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on and only on one’s knowledge about the possible outcomes.
More precisely, your key proposition is that probabilistic indecision has to be transitive. Ambiguity would be an example of intransitive indecision.
Aren’t you violating the axiom of independence but not the axiom of transitivity?
I’d say the former is your key proposition. It would be sufficient to rule out that an agent’s internal variables, like the virtual interval, could have any effect.
I’m not really sure what a lot of this means. The virtual interval seems to me to be subjectively objective in the same way probability is. Also, do you mean ‘could have any effect’ in the normative sense of an effect on what the right choice is?
I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on and only on one’s knowledge about the possible outcomes.
To quote the article you linked: “Jaynes certainly believed very firmly that probability was in the mind … there was only one correct prior distribution to use, given your state of partial information at the start of the problem.”
I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.
At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).
Aren’t you violating the axiom of independence but not the axiom of transitivity?
My decisions violate rule 2 (independence) but not rule 1 (transitivity). Unambiguous interval comparison violates rule 1 and not rule 2. My decisions are not totally determined by unambiguous interval comparisons.
Perhaps an example: there is an urn with 29 red balls, 2 orange balls, and 60 balls that are either green or blue. The choice between a bet on red and on green is ambiguous. The choice between a bet on (red or orange) and on green is ambiguous. But the choice between a bet on (red or orange) and on red is perfectly clear. Ambiguity is intransitive.
Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a “state variable”. In unambiguous situations the choice is “stateless”.
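Here is the same example as a small Python sketch; the interval assignments are just the obvious ones for this urn, and “ambiguous” means the intervals overlap.

```python
from fractions import Fraction as F

# Probability intervals for the 29 red / 2 orange / 60 green-or-blue urn
# (91 balls in total).
bets = {
    "red":           (F(29, 91), F(29, 91)),
    "green":         (F(0, 91),  F(60, 91)),
    "red or orange": (F(31, 91), F(31, 91)),
}

def compare(a, b):
    """Preference is unambiguous only when the intervals don't overlap."""
    (alo, ahi), (blo, bhi) = bets[a], bets[b]
    if alo > bhi:
        return f"{a} > {b}"
    if blo > ahi:
        return f"{b} > {a}"
    return f"{a} vs {b}: ambiguous"

print(compare("red", "green"))            # red vs green: ambiguous
print(compare("red or orange", "green"))  # red or orange vs green: ambiguous
print(compare("red or orange", "red"))    # red or orange > red
```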
I’m not really sure what a lot of this means
Sorry about that. Maybe I’ve been clearer this time around?
To quote the article you linked: “Jaynes certainly believed very firmly that probability was in the mind … there was only one correct prior distribution to use, given your state of partial information at the start of the problem.”
I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.
At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).
Well, you’d have to say how you choose the interval. Jaynes justified his prior distributions with symmetry principles and maximum entropy. So far, your proposals allow the interval to depend on a coin flip that has no effect on the utility or on the process that does determine the utility. That is not what predicting the results of actions looks like.
Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a “state variable”. In unambiguous situations the choice is “stateless”.
Given an interval, your preferences obey transitivity even though ambiguity doesn’t, right? I don’t think that nontransitivity is the problem here; the thing I don’t like about your decision process is that it takes into account things that have nothing to do with the consequences of your actions.
I’m not really sure what a lot of this means
Sorry about that. Maybe I’ve been clearer this time around?
I only mean that middle paragraph, not the whole comment.
If there is nothing wrong with having a state variable, then sure, I can give a rule for initialising it, and call it “objective”. It is “objective” in that it looks like the sort of thing that Bayesians call “objective” priors.
E.g. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy. What if instead there had been one draw (with replacement) from the urn, and it had been green? You can’t apply max entropy now. That’s ok: apply max entropy “retroactively” and run the usual update process to get your initial probabilities.
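As a quick Python sketch of that update (assuming the standard urn of 30 red and 60 green-or-blue balls, and using exact fractions):

```python
from fractions import Fraction as F

# Standard Ellsberg urn: 30 red plus 60 green-or-blue balls (90 total),
# so 61 configurations g = 0..60 for the number of green balls.
# Retroactive max entropy: uniform prior over configurations, then
# condition on the observed green draw (with replacement).
prior = {g: F(1, 61) for g in range(61)}
likelihood = {g: F(g, 90) for g in range(61)}  # P(draw green | g green balls)

evidence = sum(prior[g] * likelihood[g] for g in range(61))            # = 1/3
posterior = {g: prior[g] * likelihood[g] / evidence for g in range(61)}

# Resulting initial probability that the next draw is green:
p_green = sum(posterior[g] * likelihood[g] for g in range(61))
print(p_green)  # 121/270, about 0.448 (it was 1/3 before the draw)
```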
So we could normally start the state variable at the “natural value” (virtual interval = 0; and yes, as it happens, this is also justified by symmetry in this case). But if there is information to consider then we set it retroactively and run the decision method forward to get its starting value.
This has much the same claim to objectivity as the Bayesian process, so I still think the point of contention has to be the use of stateful behaviour to resolve ambiguity.
E.g. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy.
Well, that would correspond to a complete absence of knowledge that would favour any configuration over any other, but I do endorse this basic framework for prior selection.
So we could normally start the state variable at the “natural value” (virtual interval = 0; and yes, as it happens, this is also justified by symmetry in this case).
Doesn’t an interval of 0 just recover Bayesian inference?