There are six total worlds: one for each combination of Alice's possible credence in rain and whether it in fact rains.
All we get are Alice’s credences in rain (given by an inequality), so the only propositions we might learn are the ones that concern her credence alone (each corresponding to a non-trivial proposition over the six worlds). Local trust only constrains your reaction to these propositions directly, so it won’t require deference on the other 58 events. (Well, 56, once you set aside the two trivial propositions.)
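To make the counting concrete, here's a quick sketch in Python. (The three credence values are placeholders of mine, not the example's actual numbers; the point is just that six worlds give 2^6 = 64 events, of which only the handful that depend on Alice's credence alone are learnable.)

```python
from itertools import chain, combinations, product

# A hypothetical reconstruction of the setup (the three credence values are
# placeholders, not the example's actual numbers): Alice's credence in rain
# takes one of three values, and it either rains or not, so 3 * 2 = 6 worlds.
CREDENCES = (0.2, 0.6, 0.9)
WORLDS = tuple(product(CREDENCES, (True, False)))  # (Alice's credence, does it rain)
assert len(WORLDS) == 6

# An event is any set of worlds, so there are 2^6 = 64 events in total.
events = [frozenset(e) for e in chain.from_iterable(
    combinations(WORLDS, k) for k in range(len(WORLDS) + 1))]
assert len(events) == 64

# The propositions we might learn are the ones that depend only on Alice's
# credence (e.g. "her credence is at least .6"): events that never separate
# two worlds agreeing on her credence.
def depends_only_on_credence(event):
    included = {c for (c, r) in event}
    return all(((c, r) in event) == (c in included) for (c, r) in WORLDS)

learnable = [e for e in events if depends_only_on_credence(e)]
nontrivial = [e for e in learnable if 0 < len(e) < len(WORLDS)]

print(len(nontrivial))        # 6: the propositions local trust speaks to directly
print(64 - len(nontrivial))   # 58: events it leaves alone...
print(64 - len(learnable))    # 56: ...once the two trivial events are set aside too
```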
Yes, although with some subtlety.
Alice is just an expert on rain, not necessarily on the quality of her own epistemic state. (An easier example: suppose your initial credence in rain is .5, while Alice’s is either .6 or .4. Conditional on hers being .6, you become certain it rains; conditional on its being .4, you become certain it won’t. You’d obviously bet on her credences rather than your own, but you also take her to be massively underconfident.)
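Here's a tiny numerical check of that example. The joint distribution isn't really an extra assumption: given that her saying .6 makes you certain of rain and her saying .4 makes you certain of no rain, a prior of .5 forces you to treat the two announcements as equally likely.

```python
# Your joint distribution over Alice's announcement and the weather.
joint = {
    # (Alice's credence, does it rain): your probability
    (0.6, True):  0.5,
    (0.6, False): 0.0,
    (0.4, True):  0.0,
    (0.4, False): 0.5,
}

def prob(pred):
    return sum(p for outcome, p in joint.items() if pred(outcome))

print(prob(lambda o: o[1]))  # 0.5 -- your initial credence in rain

for a in (0.6, 0.4):
    p_rain_given_a = prob(lambda o: o[0] == a and o[1]) / prob(lambda o: o[0] == a)
    # Conditional on .6 you're at 1; conditional on .4 you're at 0. You'd bet
    # on her credences over your own prior, yet you take her to be massively
    # underconfident: your conditional credence is always more extreme than
    # her announced number.
    print(a, p_rain_given_a)
```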
Now, one slight wrinkle: the language of calibration we used makes this sound more “objective” or long-run frequentist than we really intend. All that really matters is your own subjective reaction to Alice’s credences, so whether she’s actually calibrated or not doesn’t ultimately determine whether the conditions on local trust can be met.
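To illustrate, here's a sketch in the same spirit. The numbers and the “Trust-style” inequality below are stand-ins of mine, not necessarily the exact conditions on local trust; the point is only that the check runs over your conditional credences, so it can come out satisfied even when you regard Alice as badly miscalibrated.

```python
# Your joint distribution over Alice's announcement and the weather.
joint = {
    # (Alice's announced credence, does it rain): your probability
    (0.6, True):  0.45,
    (0.6, False): 0.05,
    (0.4, True):  0.05,
    (0.4, False): 0.45,
}

def prob(pred):
    return sum(p for o, p in joint.items() if pred(o))

def rain_given(pred):
    return prob(lambda o: pred(o) and o[1]) / prob(pred)

# By your own lights Alice is not calibrated: when she says .6, you expect
# rain 90% of the time, not 60%.
print(rain_given(lambda o: o[0] == 0.6))  # 0.9

# Yet a Trust-style condition on the learnable inequality-propositions holds:
# given [her credence >= t] your credence in rain is at least t, and given
# [her credence <= t] it is at most t.
for t in (0.4, 0.6):
    assert rain_given(lambda o: o[0] >= t) >= t
    assert rain_given(lambda o: o[0] <= t) <= t
print("Trust-style conditions satisfied despite the (subjective) miscalibration.")
```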