I give an example of this in the “Bob is best friend” picture.
How you calculate it is just a proportion: I’m 99% sure of ghosts, and the ghost-girl reason accounts for 60% of that, which is 0.60 * 99 = 59.4 percentage points.
If I figure out that the ghost girl was actually just my brain rationalizing sleep paralysis, then my belief in ghosts loses those 59.4 percentage points. So now I believe in ghosts with 99 - 59.4 = 39.6% confidence. Note that the shares of the other two reasons (and of unmentioned reasons not in the set {A, B, C}) must now be renormalized to sum to 100%.
You should be able to verify that you understand this by getting the same answer I did in the “Bob is best friend” example.
With this you can also answer: how many percentage points of the 99% do you lose when the ghost-girl reason goes from 60% to 50%?
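For concreteness, here is a sketch of that arithmetic (the 25%/15% shares for the other two reasons are made up for illustration; only the ghost-girl numbers come from the example above):

```python
# Proportional-update sketch: total confidence is split across reasons,
# and dropping a reason removes its share of the confidence.
confidence = 99.0  # % confidence that ghosts exist
shares = {"ghost girl": 0.60, "B": 0.25, "C": 0.15}  # hypothetical shares, sum to 1

# Remove the ghost-girl reason entirely: lose its share of the confidence.
lost = shares.pop("ghost girl") * confidence   # 0.60 * 99 = 59.4 points
confidence -= lost                             # 99 - 59.4 = 39.6%

# Renormalize the remaining reasons' shares so they sum to 100% again.
total = sum(shares.values())
shares = {reason: s / total for reason, s in shares.items()}
print(confidence, shares)  # 39.6 {'B': 0.625, 'C': 0.375}

# The follow-up question: if the ghost-girl share had merely dropped from
# 60% to 50%, the loss would be (0.60 - 0.50) * 99 = 9.9 percentage points.
```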
99% is much further from 98% than 51% is from 50%. As an example, getting from a one-in-a-million confidence that Alice killed Bob (because Alice is one of a million citizens) up to a pool of ten suspects requires much more evidence than then eliminating five of those ten. Probability differences are measured on the log-odds scale, so that seeing reason A and then B has the same effect as seeing B and then A. On that scale, you can in fact take two statistically independent reasons and say how many times more evidence one gives than the other.
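To make the asymmetry concrete, here is a quick check on the log-odds (logit) scale, using the suspect counts from the example (a minimal sketch; nothing else is assumed):

```python
import math

def logit_bits(p):
    """Log-odds of a probability, measured in bits."""
    return math.log2(p / (1 - p))

# One suspect in a million -> one in ten, versus one in ten -> one in five.
for a, b in [(1 / 1_000_000, 1 / 10), (1 / 10, 1 / 5)]:
    print(f"{a:g} -> {b:g}: {logit_bits(b) - logit_bits(a):.2f} bits of evidence")

# Narrowing a million citizens to ten suspects takes ~16.8 bits;
# eliminating five of those ten suspects takes only ~1.2 bits.
```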
I don’t understand how your comment relates to mine. Are you claiming the math to update the confidence is wrong?
Are you claiming that I haven’t properly defined how to calculate the probabilities and that this is bad for a reason?
Yes to both. Suppose one coin has heads probability 1⁄3 and another 2⁄3. We take one of the two coins at random and throw it three times. Depending on whether we have seen 0, 1, 2 or 3 heads, the subjective probability that we took the 2⁄3 coin is 1⁄9, 1⁄3, 2⁄3 or 8⁄9. The absolute probability reduction is not the same each time we remove a reason to believe; on a log-odds scale, it is.
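Those posteriors are easy to verify with Bayes’ rule (a minimal sketch; the 50/50 prior over the two coins is the natural reading of “take a random coin”):

```python
import math
from math import comb

COINS = {"1/3": 1 / 3, "2/3": 2 / 3}  # heads probability of each coin

for k in range(4):  # number of heads seen in three throws
    # Binomial likelihood of k heads under each coin; the 50/50 prior cancels.
    like = {name: comb(3, k) * p**k * (1 - p) ** (3 - k) for name, p in COINS.items()}
    posterior = like["2/3"] / (like["2/3"] + like["1/3"])
    log_odds = math.log2(posterior / (1 - posterior))
    print(k, round(posterior, 4), round(log_odds, 2))
# Posteriors 1/9, 1/3, 2/3, 8/9: the probability steps are unequal,
# but the log-odds are -3, -1, 1, 3 -- exactly 2 bits per head.
```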
Thanks for explaining. I’m more convinced you’re right math-wise, though I haven’t verified it for myself.
I don’t think understanding this or working it out correctly will help in actual conversations with people about their beliefs, though. (In fact, I get the most out of it by just drawing the picture of beliefs and connected reasons and writing in estimated probabilities. It really helps keep track of what has been said and makes circular reasoning very clear.)
Are you saying there is a practical reason for doing so? I can’t imagine one for the average university student I run into, let alone less technical people. Maybe with oneself or someone technical?
Keeping in mind that we are measuring bits of evidence tells us that, to give percentages, we must establish a baseline prior probability: the probability we would assign before considering any of the reasons.
Mostly you should be fine; just have heuristics for the anomalies near 0 and 1. If one reason pushes the probability to .5 and a second pushes it to .6, then either the prior was already noticeably far from zero, or getting only the second reason would not have moved the prior noticeably either.
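A sketch of why that heuristic works, in the same bits-of-evidence terms (the .001 prior is an arbitrary stand-in for “close to zero”):

```python
import math

def logit_bits(p):
    return math.log2(p / (1 - p))

def prob_from_bits(bits):
    return 1 / (1 + 2**-bits)

# On its own, the second reason is worth logit(.6) - logit(.5) ~= 0.58 bits.
reason2_bits = logit_bits(0.6) - logit_bits(0.5)

# Applied to a prior near zero, those 0.58 bits barely move the probability...
prior = 0.001
print(prob_from_bits(logit_bits(prior) + reason2_bits))  # ~0.0015

# ...so if the two reasons really took you from the prior to .5 and then .6,
# either the prior was not that close to zero, or the first reason was huge
# and the second alone would have been unnoticeable.
```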
Oh! That’s clear, thanks!