I came across your site from a comment you made on the discussion about the UAP Disclosure Act. Since my comment focuses mostly on the general usage of the tool and the application of Bayes, I’ll post it here.
The design is very nice and the tool itself is very intuitive. It would be nice if every evidence element had a button to remove it; currently this is only possible for the last one.
For someone not too familiar with the practical application of Bayes, I’m wondering how to rate the probability of a piece of evidence when it is not known. In your example, you give “not aliens” a probability of 99.999999%, and the probability that politicians would take this seriously in such a world 5%. This seems like a reasonable guess for the time before they took it seriously. Now they do, so it happened in a world where aliens (presumably) certainly don’t exist. Could I not very well reason that it’s therefore also 50%–90% likely to happen? How do I choose this number: intuition, base rates?
Thanks, I’m very glad you find it intuitive!
Only allowing the last piece of evidence to be deleted was a deliberate decision. The problem is that deleting evidence from the middle changes the meaning of all the likelihood values (the sliders) for all of the evidence below it, all of which may therefore need new values. If I allowed it to be deleted anyway, it would be very easy to mistakenly keep using the now-incorrect values (and the tool would give the impression that that was fine). I know this makes it more annoying and inconvenient, but it’s because the math itself is annoying and inconvenient!
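Written out, the math I have in mind here is just the standard sequential form of Bayes’ rule, where each likelihood is conditioned on everything entered above it:

```latex
% Sequential updating: the slider for Evidence i is the likelihood of
% E_i given the hypothesis AND all of the evidence entered above it.
P(H \mid E_1, \dots, E_n) \;\propto\; P(H) \prod_{i=1}^{n} P(E_i \mid H,\, E_1, \dots, E_{i-1})
```

Deleting some piece of evidence E_j from the middle changes the conditioning set for every likelihood below it, which is why those slider values can quietly stop meaning what you thought they meant.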
The meaning of, e.g., the Hypothesis B slider for Evidence #3 is “In what percentage of worlds where Hypothesis B is true would I see Evidence #3?” (hopefully this was clear, I’m just reiterating to make sure we’re on the same page). This is called the likelihood of Evidence #3 given Hypothesis B. When answering this, we don’t use the fact that we’ve actually seen this piece of evidence (in this case, that politicians are taking this seriously), since that is always just going to be true for actual evidence. Hopefully that makes sense?
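To make the arithmetic concrete, here’s a minimal two-hypothesis update in Python using the numbers from the example. The 50% likelihood under the aliens hypothesis is a value I’m making up purely for illustration; the example only pins down the prior and the 5%:

```python
# Minimal two-hypothesis Bayes update with the numbers from the example.
# P(seriously | aliens) = 0.5 is a made-up illustration value.
p_aliens = 1e-8                       # prior: 1 - 99.999999%
p_not_aliens = 1 - p_aliens

lik_serious_given_aliens = 0.50       # hypothetical slider value
lik_serious_given_not_aliens = 0.05   # the 5% from the example

# Bayes' rule: posterior is proportional to prior times likelihood.
joint_aliens = p_aliens * lik_serious_given_aliens
joint_not_aliens = p_not_aliens * lik_serious_given_not_aliens
posterior_aliens = joint_aliens / (joint_aliens + joint_not_aliens)

print(f"P(aliens | politicians take it seriously) = {posterior_aliens:.2e}")
# Prints ~1.00e-07: a tenfold update, but still overwhelmingly "not aliens".
```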
As for choosing this number, or the prior values: it’s in general a difficult problem that has been debated a lot. My recommendation is that you make up numbers that feel right (or at least are not obviously wrong), and then play around with the sliders a bit to see how much the exact value affects things. The intended use of the tool is not to make you commit to numbers, but to help you develop intuition for how much to update your beliefs given the evidence, as well as to help you figure out what numbers correspond to your intuitive feelings.
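As a sketch of that “play around with the sliders” advice, here’s a quick sensitivity sweep over the “not aliens” likelihood, using the same illustrative numbers as above:

```python
# Sweep the "politicians take it seriously | not aliens" slider and see
# how much the posterior actually moves. Same illustrative values as above.
p_aliens = 1e-8
lik_given_aliens = 0.50  # hypothetical, as before

for lik_given_not_aliens in (0.01, 0.05, 0.25, 0.50, 0.90):
    joint_aliens = p_aliens * lik_given_aliens
    joint_not_aliens = (1 - p_aliens) * lik_given_not_aliens
    posterior = joint_aliens / (joint_aliens + joint_not_aliens)
    print(f"slider = {lik_given_not_aliens:4.0%} -> P(aliens | evidence) = {posterior:.1e}")

# The posterior ranges from ~5e-7 down to ~5.6e-9: the exact slider value
# shifts it by less than two orders of magnitude, nowhere near the eight
# orders the prior would have to overcome.
```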
If you’re serious about choosing the right number, then here is what it takes to figure it out: Each hypothesis represents a model of how some part of the world works. To properly get a number out of it, you need to develop the model in technical detail, to the point where you can represent it with an equation or a computer program. Then, you need to condition the model on all of the evidence above the one you’re computing the likelihood for (i.e., set that evidence to true), and compute what percentage of the time the new evidence turns out to be true in the model. A nice general way to do this is to run the model a whole bunch of times and see how often it happens (and if reality has been kind enough to instantiate your model enough times, then you might be able to use this to get a “base rate”). Or, if your model is relatively simple, you might be able to use math to compute the exact value.

This is typically a lot of work, and doesn’t actually do much to train your intuition for the mental models you use on a day-to-day basis. But going through this process is helpful for understanding what the numbers you make up are trying to be. I hope this is helpful and not just more confusing.
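In case a concrete toy example helps, here’s a minimal sketch of the “run the model a whole bunch of times” procedure. The generative model here is entirely made up for illustration; nothing about it is meant to be realistic:

```python
import random

# Toy generative model (made up for illustration): under hypothesis H,
# a story gets media coverage with probability 0.3, and politicians take
# it seriously with probability 0.6 if it was covered, 0.02 otherwise.
def sample_world():
    covered = random.random() < 0.3        # Evidence #1: media coverage
    p_serious = 0.6 if covered else 0.02
    serious = random.random() < p_serious  # Evidence #2: politicians react
    return covered, serious

# Estimate P(Evidence #2 | H, Evidence #1) by rejection sampling: keep
# only the simulated worlds where the earlier evidence holds, and count
# how often the new evidence shows up among them.
matching = 0
serious_count = 0
for _ in range(1_000_000):
    covered, serious = sample_world()
    if covered:  # condition on the evidence above being true
        matching += 1
        serious_count += serious

print(f"estimated likelihood = {serious_count / matching:.3f}")  # ~0.600
```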