Here’s my idea to get better at doing updates (there’s a toy sketch in code after the steps):
1. Estimate the base rate.
2. Using your estimated base rate, estimate a conditional probability that incorporates the new information.
3. Compare your estimated base rate against the actual base rate.
4. Using the actual base rate, estimate a new conditional probability.
5. Compare both estimated conditional probabilities against the actual conditional probability.
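Here’s a minimal sketch of what I mean, with made-up numbers and assumed likelihoods just to show what gets compared at each step:

```python
def bayes_posterior(base_rate, p_e_given_h, p_e_given_not_h):
    """P(H | E) from a base rate and the likelihood of E under H and not-H."""
    num = p_e_given_h * base_rate
    return num / (num + p_e_given_not_h * (1 - base_rate))

# All of the numbers below are made up, purely for illustration.
estimated_base_rate = 0.05   # step 1: my guessed base rate
guessed_posterior_1 = 0.40   # step 2: my gut update, using my guessed base rate
actual_base_rate    = 0.01   # step 3: what the data actually says
guessed_posterior_2 = 0.15   # step 4: my gut update, now using the actual base rate

# The actual conditional probability to compare against in step 5,
# computed with assumed likelihoods of 0.8 and 0.1.
actual_posterior = bayes_posterior(actual_base_rate, 0.8, 0.1)

print(f"actual conditional probability: {actual_posterior:.3f}")
print(f"error of my first guess:  {guessed_posterior_1 - actual_posterior:+.3f}")
print(f"error of my second guess: {guessed_posterior_2 - actual_posterior:+.3f}")
```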
So, I think there are multiple levels here. You want to get the base rate right, and you also want to get the update right, and you can see how well calibrated you are for each. You might find, for example, that you’re okay at estimating conditional probabilities but bad at estimating base rates.
I tend not to use my old estimates as a prior. I’m not an expert at Bayesian probability (so maybe I get all of this wrong!). I interpret what I’m looking for as a conditional probability, maybe with an estimated prior/base rate (which you could call your “old estimate”, I guess). I prefer data whenever it is available.
The toy problems are okay, and I’m sure you can generate a lot of them.
The vasectomy example was much less straightforward than I would have expected. I spent at least 10 minutes rearranging different forms of the conditional probability before I found one whose inputs matched the data I could actually find. The problem is that the data you can find in the literature often doesn’t fit neatly into a simple statement of Bayes’ rule.
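To make the “rearranging” concrete: the textbook form wants the marginal probability of the evidence, which the literature rarely reports, while an expanded form wants a likelihood under the alternative instead. Something like this (the symbols and numbers are placeholders, not the actual vasectomy figures):

```python
def posterior_from_marginal(p_b_given_a, p_a, p_b):
    """Textbook Bayes' rule: needs the marginal P(B) directly."""
    return p_b_given_a * p_a / p_b

def posterior_from_likelihoods(p_b_given_a, p_b_given_not_a, p_a):
    """Expanded form: needs P(B | not A) instead, which is sometimes
    what you can actually dig out of the published data."""
    num = p_b_given_a * p_a
    return num / (num + p_b_given_not_a * (1 - p_a))

# Placeholder numbers only.
p_b_given_a, p_b_given_not_a, p_a = 0.9, 0.05, 0.02
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)   # law of total probability

print(posterior_from_marginal(p_b_given_a, p_a, p_b))              # ~0.269
print(posterior_from_likelihoods(p_b_given_a, p_b_given_not_a, p_a))  # same value
```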
Another example I found useful was computing my risk of developing a certain cancer. The base rate of this cancer is very low, but I have a family member who developed it (and recovered, thankfully), so my relative risk is considerably higher. I had felt this put my probability of developing the cancer somewhere around 10%, but doing the math showed that while my risk is higher than the base rate, it’s still basically negligible. This sounds to me like the sort of exercise you want to do.
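For what it’s worth, the back-of-the-envelope version looks something like this, with stand-in numbers rather than my real figures (for a rare outcome, multiplying the base rate by the relative risk is a reasonable first approximation):

```python
# Stand-in numbers, not my actual figures.
lifetime_base_rate = 0.005   # 0.5% lifetime risk in the general population
relative_risk      = 3.0     # elevated risk from family history

# For a rare outcome, risk is roughly base rate times relative risk.
my_risk = lifetime_base_rate * relative_risk
print(f"felt risk: ~10%   computed risk: {my_risk:.1%}")   # computed risk: 1.5%
```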