Can someone please show me exactly how to do that?
The problem with your question is that the event you described has never happened. Normally you would take a dataset and count the number of times an event occurs vs. the number of times it does not occur, and that gives you the probability.
So to get estimates here you need to be creative with the definition of events. You could count the number of times a global war started in a decade. Going back to, say, 1800 and counting the two world wars and the Napoleonic Wars, that would give about 3/21. If you wanted to make yourself feel safe, you could count the number of nukes used compared to the number that have been built. You could count the number of people killed in particular historical events, and fit a power law to the distribution.
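Spelled out, that first counting approach is just a one-line calculation; here is a minimal sketch in Python using the 3-wars-in-21-decades figure above (the figure is only the rough count given here, nothing more precise):

```python
# Rough frequency estimate from the counts in the comment above:
# three global wars (Napoleonic Wars, WWI, WWII) in roughly 21 decades
# since 1800.

wars = 3
decades = 21

per_decade = wars / decades
print(f"empirical chance of a global war starting in a given decade: {per_decade:.3f}")  # ~0.143
```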
But nothing is going to give you the exact answer. Probability is exact, but statistics (the inverse problem of probability) decidedly isn’t.
Consulting a dataset and counting the number of times the event occurred and so on would be a rather frequentist way of doing things. If you are a Bayesian, you are supposed to have a probability estimate for any arbitrary hypothesis that's presented to you. You cannot say, "Oh, I do not have the dataset with me right now; can I get back to you later?"
What I was expecting as a reply to my question was something along the following lines. One would first come up with a prior for the hypothesis that the world will be nuked before 2020. Then, one would identify some facts that could be used as evidence for or against the hypothesis. And then one would do the necessary Bayesian updates.
I know how to do this for simple cases like balls in a bin. But I get confused when it comes to forming beliefs about statements about the real world.
If you haven’t already, you might want to take a look at Bayes’ Theorem by Eliezer.
As sort of a quick tip about where you might be getting confused: you summarize the steps involved as (1) come up with a prior, (2) identify potential evidence, and (3) update on the evidence. You’re missing one step. You also need to check to see whether the potential evidence is “true,” and you need to do that before you update.
If you check out Conservation of Expected Evidence, linked above, you’ll see why. You can’t update just because you’ve thought of some facts that might bear on your hypothesis and guessed at their probability—if your intuition is good enough, your guess about the probability of the facts that bear on the hypothesis should already be factored into your very first prior. What you need to do is go out and actually gather information about those facts, and then update on that new information.
For example: I feel hot. I bet I’m running a fever. I estimate my chance of having a bacterial infection that would show up on a microscope slide at 20%.
I think: if my temperature were above 103 degrees, I would be twice as likely to have a bacterial infection, and if my temperature were below 103 degrees, I would only be half as likely to have a bacterial infection. Considering how hot I feel, I guess there’s a 50-50 chance my temperature is above 103 degrees. I STILL estimate my chance of having a bacterial infection at 20%, because I already accounted for all of this. This is just a longhand way of guessing.
Now, I take my temperature with a thermometer. The readout says 104 degrees. Now I update on the evidence; now I think the odds that I have a bacterial infection are 40%.
The math is fudged very heavily, but hopefully it clarifies the concepts. If you want accurate math, you can read Eliezer’s post.
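If you do want the more careful arithmetic, here is a minimal sketch in Python of the odds-form update, using exactly the numbers from the example above (20% prior, likelihood ratios of 2 and 0.5, a guessed 50% chance of a high temperature); it also checks the Conservation of Expected Evidence point and shows where the fudge sits. The odds form is convenient because applying a likelihood ratio becomes a single multiplication.

```python
# Odds-form Bayes update for the fever example, using the numbers from
# the comment above: 20% prior on infection, likelihood ratio 2 if the
# temperature is above 103, 0.5 if below, and a guessed 50% chance that
# the temperature is above 103.

def to_odds(p):
    return p / (1 - p)

def to_prob(odds):
    return odds / (1 + odds)

prior = 0.20
lr_high = 2.0    # temp above 103 -> twice as likely to be infected
lr_low = 0.5     # temp below 103 -> half as likely
p_high = 0.5     # guessed chance the temperature is above 103

# Posterior in each branch of the possible evidence.
post_high = to_prob(to_odds(prior) * lr_high)   # = 1/3, about 33%
post_low = to_prob(to_odds(prior) * lr_low)     # = 1/9, about 11%

# Conservation of Expected Evidence: the prior should equal the
# probability-weighted average of the posteriors.
expected_post = p_high * post_high + (1 - p_high) * post_low

print(f"posterior if temp > 103:  {post_high:.3f}")     # 0.333
print(f"posterior if temp < 103:  {post_low:.3f}")      # 0.111
print(f"expected posterior:       {expected_post:.3f}") # 0.222, not 0.200

# The 0.222 vs. 0.200 gap is the fudge: with a 20% prior and these
# likelihood ratios, a coherent guess for P(temp > 103) would be 40%,
# not 50%. And after actually reading 104 on the thermometer, the
# honest posterior is post_high, about 33% rather than 40%.
```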
The answer is… it's complicated, so you approximate. A good way of approximating is getting a dataset together and building a model that helps explain it. Doing the perfect Bayesian update in the real world is usually worse than nontrivial; it's basically impossible.
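As one concrete illustration of "dataset plus model", here is a minimal sketch assuming a Beta-Binomial model over the per-decade chance of a global war, fed with the 3-of-21 count from upthread; the uniform Beta(1,1) prior is my own choice for illustration, not something the data dictates.

```python
# Minimal "dataset + model" sketch: Beta-Binomial model for the chance
# that a global war starts in a given decade, updated on 3 wars in 21
# decades (the counts from earlier in the thread). The uniform Beta(1,1)
# prior is an assumption made purely for illustration.

alpha, beta = 1.0, 1.0        # uniform prior over the per-decade probability
wars, decades = 3, 21         # observed data

# Conjugate update: posterior is Beta(alpha + wars, beta + decades - wars).
post_alpha = alpha + wars
post_beta = beta + (decades - wars)

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"posterior mean chance of a global war per decade: {posterior_mean:.3f}")  # ~0.174
```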