with the central thesis being that, given the probability someone assigns to a proposition and their calibration, you can calculate a Bayesian probability estimate for the truth of that proposition.
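A minimal sketch of that idea, with hypothetical names and numbers (none of this is from the original post): treat the person's track record as a calibration curve and map their stated probability to the frequency with which claims at that confidence level actually came true.

```python
def calibrated_estimate(stated_p, calibration_curve):
    """Map a stated probability to the empirical frequency with which
    propositions the person rated at that confidence turned out true."""
    # calibration_curve: {stated-probability bucket: observed frequency true}
    nearest_bucket = min(calibration_curve, key=lambda b: abs(b - stated_p))
    return calibration_curve[nearest_bucket]

# Hypothetical track record: when this person says "90%", the claim has
# historically been true only 70% of the time (overconfidence).
curve = {0.5: 0.5, 0.7: 0.6, 0.9: 0.7}
print(calibrated_estimate(0.9, curve))  # -> 0.7
```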
What do you mean by knowing someone’s calibration? If it’s summarized in a single score over many kinds of predictions, then I’m not sure your idea can work. For example, imagine Bob is perfectly calibrated when predicting earthquakes, but overconfident when predicting meteors. That makes him overconfident on average, but when he predicts an earthquake, you shouldn’t assume that he’s overconfident and update accordingly.
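A quick illustration of that objection, with made-up numbers: pooling Bob's calibration across domains makes him look overconfident on earthquakes even though his earthquake predictions need no correction.

```python
# (stated probability, fraction of such predictions that came true)
earthquake_record = (0.8, 0.8)   # calibrated
meteor_record     = (0.8, 0.5)   # overconfident

pooled_accuracy = (earthquake_record[1] + meteor_record[1]) / 2
print(f"Pooled: says 80%, right {pooled_accuracy:.0%} of the time")      # 65%
print(f"Earthquakes only: says 80%, right {earthquake_record[1]:.0%}")   # 80%
# Discounting Bob's next earthquake prediction using the pooled 65% would
# under-weight it; only the domain-specific record supports the right update.
```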