Maximality seems asymmetrical and information-losing?
Maybe it will help me to have an example, though I’m not sure if this is a good one. Suppose I have two weather forecasts that give different probabilities for 0 inches, 1 inch, etc., that I have absolutely no idea which forecast is better, and that I don’t want to go out if there is a greater than 20% probability of more than 2 inches of rain. Then I’d weigh each forecast equally and calculate the probability from there. If instead the forecasts themselves gave high/low probabilities for 0 inches, 1 inch, etc., I’d think they weren’t very good forecasts: the forecaster should either have combined all their analysis into a single probability (say 30%), or else given the conditions under which they’d give their low end (say 10%) or high end (say 40%). And if I didn’t have any opinions on the probability of those conditions, I would weigh the low and high equally (and get 25%). Do you think I should be doing something different (or what is a better example)?
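To make the arithmetic concrete, here’s a minimal sketch of the equal-weight calculation I have in mind (the forecast tables below are made up for illustration):

```python
# Hypothetical rainfall forecasts: P(amount in inches) per forecaster.
forecast_a = {0: 0.50, 1: 0.20, 2: 0.10, 3: 0.15, 4: 0.05}
forecast_b = {0: 0.60, 1: 0.20, 2: 0.10, 3: 0.05, 4: 0.05}

# I have no idea which forecast is better, so I weigh them equally.
mixed = {amt: 0.5 * forecast_a[amt] + 0.5 * forecast_b[amt]
         for amt in forecast_a}

# I don't go out if P(more than 2 inches) exceeds 20%.
p_heavy = sum(p for amt, p in mixed.items() if amt > 2)
print(f"P(>2 in) = {p_heavy:.2f}")  # 0.15 here, so I'd go out
```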
If instead the forecasts themselves gave high/low probabilities for 0 inches, 1 inch, etc., I’d think they weren’t very good forecasts: the forecaster should either have combined all their analysis into a single probability (say 30%), or else given the conditions under which they’d give their low end (say 10%) or high end (say 40%). And if I didn’t have any opinions on the probability of those conditions, I would weigh the low and high equally (and get 25%).
This sounds like a critique of imprecise credences themselves, not of maximality as a decision rule. Do you think that, even if the credences you actually endorse are imprecise, maximality is objectionable?
Anyway, to respond to the critique itself:
The motivation for having an imprecise credence of [10%, 40%] in this case is that you might think (a) there are some reasons to favor numbers closer to 40%; (b) there are some reasons to favor numbers closer to 10%; and (c) you don’t think these reasons have exactly equal weight, nor do you think the reasons in (a) have determinately more or less weight than those in (b). Given (c), it’s not clear what the motivation is for aggregating these numbers into 25% using equal weights.
I’m not sure why exactly you think the forecaster “should” have combined their forecast into a single probability. In what sense are we losing information by not doing this? (Prima facie, it seems like the opposite: By compressing our representation of our information into one number, we’re losing the information “the balance of reasons in (a) and (b) seems indeterminate”.)
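For concreteness, here’s a minimal sketch of how maximality treats your rain decision under the credal interval [10%, 40%]. The utilities are made up for illustration; the point is only that neither option beats the other under every credence in the set, so both are permissible:

```python
# Illustrative utilities: staying in is 0 either way; going out is
# +1 if there's no heavy rain and -5 if there is.
def eu_go_out(p):  # p = P(more than 2 inches of rain)
    return (1 - p) * 1 + p * (-5)

def eu_stay_in(p):
    return 0.0

# The credal set: every P(heavy rain) from 10% to 40%.
credal_set = [0.10 + 0.01 * i for i in range(31)]

# Under maximality, an option is ruled out only if some alternative
# has higher expected utility under *every* credence in the set.
go_out_ruled_out = all(eu_stay_in(p) > eu_go_out(p) for p in credal_set)
stay_in_ruled_out = all(eu_go_out(p) > eu_stay_in(p) for p in credal_set)

print(go_out_ruled_out, stay_in_ruled_out)  # False False: both permissible
```

A single aggregated credence of 25% would instead pick out one act as uniquely best; maximality deliberately declines to do that when the balance of reasons is indeterminate.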
Thank you for your clear response. How about another example? If somebody offers to flip a fair coin, paying me $11 if Heads and taking $10 from me if Tails, I will happily take this bet. If they say we’re going to repeat the same bet 1000 times, I will take that too: I expect to gain, and I’m unlikely to lose much. If instead they show me five unfair coins and say their weights range from 20% Heads to 70% Heads, I’ll be taking on more risk: the other three coins could all be 21% Heads or all 69% Heads. But if I had to pick a side, I’d pick Tails, because knowing nothing about those three coins, and nothing about whether the other person wants me to make or lose money, I’d figure they are randomly biased within that range. (I could still be playing a loser’s game for 1000 rounds if a coin is selected at random for each flip, but Tails is still a better pick than Heads.) Is this the situation we’re discussing?
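Here’s a minimal sketch of the back-of-the-envelope reasoning behind picking Tails, under my assumption that the three unknown coins are randomly (uniformly) biased within the stated range; all numbers are illustrative:

```python
import random

def mean_heads(unknown_biases):
    # Two coins' weights are the stated endpoints (20% and 70% Heads);
    # the other three are unknown. A coin is picked at random per flip,
    # so the per-flip Heads probability is the mean of the five biases.
    biases = [0.20, 0.70] + list(unknown_biases)
    return sum(biases) / len(biases)

# If the three unknown coins are uniform on [0.20, 0.70], the expected
# per-flip Heads probability is (0.20 + 0.70 + 3 * 0.45) / 5 = 0.45,
# so Tails is the better side to pick ex ante.
rng = random.Random(0)
draws = [mean_heads(rng.uniform(0.20, 0.70) for _ in range(3))
         for _ in range(100_000)]
print(sum(draws) / len(draws))  # ~0.45

# Worst case for Tails: all three unknowns are 69% Heads.
print(mean_heads([0.69] * 3))  # 0.594 -- the loser's game I mentioned
```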