I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on, and only on, one’s knowledge about the possible outcomes.
To quote the article you linked: “Jaynes certainly believed very firmly that probability was in the mind … there was only one correct prior distribution to use, given your state of partial information at the start of the problem.”
I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.
At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).
Aren’t you violating the axiom of independence but not the axiom of transitivity?
My decisions violate rule 2 but not rule 1. Unambiguous interval comparison violates rule 1 and not rule 2. My decisions are not totally determined by unambiguous interval comparisons.
Perhaps an example: there is an urn with 29 red balls, 2 orange balls, and 60 balls that are either green or blue. The choice between a bet on red and on green is ambiguous. The choice between a bet on (red or orange) and on green is ambiguous. But the choice between a bet on (red or orange) and on red is perfectly clear. Ambiguity is intransitive.
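For concreteness, the interval comparisons in this example can be sketched in a few lines of code. This is only an illustration, not the full decision rule: it assumes a bet’s probability interval is simply [minimum count, maximum count] out of the 91 balls, and the function names are my own.

```python
from fractions import Fraction

def interval(lo, hi, total=91):
    """Probability interval [lo/total, hi/total] for an event."""
    return (Fraction(lo, total), Fraction(hi, total))

# Known counts: 29 red, 2 orange; the green/blue split of the
# remaining 60 balls is unknown, so green can be anywhere in 0..60.
red           = interval(29, 29)
red_or_orange = interval(31, 31)
green         = interval(0, 60)

def unambiguously_better(a, b):
    """a beats b iff a's lower bound exceeds b's upper bound."""
    return a[0] > b[1]

def ambiguous(a, b):
    """Neither interval dominates the other."""
    return not unambiguously_better(a, b) and not unambiguously_better(b, a)

print(ambiguous(red, green))            # True
print(ambiguous(red_or_orange, green))  # True
print(ambiguous(red_or_orange, red))    # False: red-or-orange dominates
```

Note that `ambiguous` fails transitivity exactly as described: red vs. green and green vs. red-or-orange are both ambiguous, yet red-or-orange unambiguously beats red.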
Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a “state variable”. In unambiguous situations the choice is “stateless”.
I’m not really sure what a lot of this means
Sorry about that. Maybe I’ve been clearer this time around?
> To quote the article you linked: “Jaynes certainly believed very firmly that probability was in the mind … there was only one correct prior distribution to use, given your state of partial information at the start of the problem.”
>
> I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.
>
> At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).
Well, you’d have to say how you choose the interval. Jaynes justified his prior distributions with symmetry principles and maximum entropy. So far, your proposals allow the interval to depend on a coin flip that has no effect on the utility or on the process that does determine the utility. That is not what predicting the results of actions looks like.
> Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a “state variable”. In unambiguous situations the choice is “stateless”.
Given an interval, your preferences obey transitivity even though ambiguity doesn’t, right? I don’t think that nontransitivity is the problem here; the thing I don’t like about your decision process is that it takes into account things that have nothing to do with the consequences of your actions.
> I’m not really sure what a lot of this means
>
> Sorry about that. Maybe I’ve been clearer this time around?
I only mean that middle paragraph, not the whole comment.
If there is nothing wrong with having a state variable, then sure, I can give a rule for initialising it, and call it “objective”. It is “objective” in that it looks like the sort of thing that Bayesians call “objective” priors.
Eg. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy. What if instead there had been one draw (with replacement) from the urn, and it had been green? You can’t apply max entropy now. That’s ok: apply max entropy “retroactively” and run the usual update process to get your initial probabilities.
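The retroactive update can be sketched as follows, assuming the max-entropy prior is uniform over the 61 possible green counts (the variable names are my own):

```python
from fractions import Fraction

# 60 balls are green or blue; g = number of green, g in 0..60,
# giving 61 configurations. Max-entropy prior: uniform over them.
configs = range(61)
prior = {g: Fraction(1, 61) for g in configs}

# One draw (with replacement) came up green: P(green | g) = g/60.
likelihood = {g: Fraction(g, 60) for g in configs}

# Standard Bayesian update, applied "retroactively".
evidence = sum(prior[g] * likelihood[g] for g in configs)
posterior = {g: prior[g] * likelihood[g] / evidence for g in configs}

# Posterior expected number of green balls.
expected_green = sum(g * posterior[g] for g in configs)
print(expected_green)  # 121/3, i.e. about 40.3 rather than 30
```

As expected, the single green draw shifts the initial probabilities toward configurations with more green balls.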
So we could normally start the state variable at the “natural value” (virtual interval = 0; and yes, as it happens, this is also justified by symmetry in this case). But if there is information to consider then we set it retroactively and run the decision method forward to get its starting value.
This has a claim to objectivity similar to that of the Bayesian process, so I still think the point of contention has to be in using stateful behaviour to resolve ambiguity.
> Eg. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy.
Well, that would correspond to a complete absence of knowledge that would favour any configuration over any other, but I do endorse this basic framework for prior selection.
> So we could normally start the state variable at the “natural value” (virtual interval = 0; and yes, as it happens, this is also justified by symmetry in this case).
Doesn’t an interval of 0 just recover Bayesian inference?