Yes, although with some subtlety.
Alice is just an expert on rain, not necessarily on the quality of her own epistemic state. (A simpler example: suppose your initial credence in rain is .5, and Alice’s is either .6 or .4. Conditional on hers being .6, you become certain it will rain; conditional on hers being .4, you become certain it won’t. You’d obviously bet with her credences rather than your own, but you also take her to be massively underconfident.)
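To make the arithmetic in that parenthetical explicit, here is the same example in symbols (this is only a restatement; the notation R for “it rains” and A for Alice’s credence is mine):

```latex
% Notation (mine): R = "it rains", A = Alice's credence in R.
% The example's prior and update rule:
\[
  P(R) = 0.5, \qquad P(R \mid A = 0.6) = 1, \qquad P(R \mid A = 0.4) = 0 .
\]
% By the law of total probability, the prior of .5 forces you to regard the
% two readings of A as equally likely:
\[
  0.5 = P(R) = P(A = 0.6)\cdot 1 + \bigl(1 - P(A = 0.6)\bigr)\cdot 0
      \;\Longrightarrow\; P(A = 0.6) = 0.5 .
\]
% Deference without endorsement: learning A = 0.6 sends you to 1 > 0.6, and
% learning A = 0.4 sends you to 0 < 0.4, so Alice's credence settles how you
% bet even though you regard her as massively underconfident either way.
```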
Now, the slight wrinkle here is that the language of calibration we used makes this seem more “objective” or long-run frequentist than we really intend. All that really matters is your own subjective reaction to Alice’s credences, so whether she is actually calibrated or not doesn’t ultimately determine whether the conditions on local trust can be met.