I agree that “I don’t know” is a better answer, as there is no reason to talk in rationalist jargon in a case like this, especially when we haven’t even defined our terms. But could you explain to me why assigning a probability of 0.5 to each of the two opposite answers (assuming the question is clear and binary) does not make sense for expressing ignorance?
> why assigning a probability of 0.5 to each of the two opposite answers (assuming the question is clear and binary) does not make sense for expressing ignorance?
Because you lose the ability to distinguish between “I know the probabilities involved and they are 50% for X and 50% for Y” and “I don’t know”.
Look at the distributions of your probability estimates. For the “I don’t know” case it’s a uniform distribution on the 0-to-1 range. For the “I know it’s 50%” case it’s a narrow spike at 0.5. These are very different things.
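The distinction can be made concrete with Beta distributions over the unknown probability p (my sketch, not from the thread): Beta(1, 1) is the flat “I don’t know” case, while a Beta with large equal parameters is the spike at 0.5. Both have mean 0.5, but very different spreads:

```python
# Two beliefs about an unknown probability p, both with mean 0.5:
# Beta(1, 1) is uniform on (0, 1)      -- "I don't know".
# Beta(1000, 1000) is a spike at 0.5   -- "I know it's 50%".

def beta_mean(a, b):
    # Mean of a Beta(a, b) distribution.
    return a / (a + b)

def beta_std(a, b):
    # Standard deviation of a Beta(a, b) distribution.
    return (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5

beliefs = {"I don't know": (1, 1), "I know it's 50%": (1000, 1000)}
for name, (a, b) in beliefs.items():
    # Uniform: std ~ 0.289.  Spike: std ~ 0.011.  Same mean, very different spread.
    print(f"{name}: mean={beta_mean(a, b):.3f}, std={beta_std(a, b):.4f}")
```

The point estimate alone (the mean) cannot tell these two states of knowledge apart; the spread can.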
Ahhh, this is confusing me. I intuitively feel a 50-50 chance implies a uniform distribution, but what you are saying about the distribution being a spike at 0.5 makes total sense. Well, I guess I have a bit of studying to do...
Being a full-on Bayesian means not only having probability assignments for every proposition, but also having the conditional probabilities that will allow you to make appropriate updates to your probability assignments when new information comes in.
The difference between “The probability of X is definitely 0.5” and “The probability of X is somewhere between 0 and 1, and I have no idea at all where” lies in how you will adjust your estimates for Pr(X) as new information comes in. If your estimate is based on a lot of strong evidence, then your conditional probabilities for X given modest quantities of new evidence will still be close to 0.5. If your estimate is a mere seat-of-the-pants guess, then your conditional probabilities for X given modest quantities of new evidence will be all over the place.
Sometimes this is described in terms of your probability estimates for your probability estimates. That’s appropriate when, e.g., what you know about X is that it is governed by some sort of random process that makes X happen with a particular probability (a coin toss, say) but you are uncertain about the details of that random process (e.g., does something about the coin or how it’s tossed mean that Pr(heads) is far from 0.5?). But similar issues arise in different cases where there’s nothing going on that could reasonably be called a random process but your degree of knowledge is greater or less, and I’m not sure the “probabilities of probabilities” perspective is particularly helpful there.
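A miniature version of this point (my sketch, using a Beta-Bernoulli model, which is not something the thread specifies): give two agents the same point estimate of 0.5 but different amounts of evidence behind it, show both the same modest new data, and compare how far each one's estimate moves.

```python
def posterior_mean(prior_a, prior_b, heads, tails):
    # Beta(a, b) prior + coin-toss data -> Beta(a + heads, b + tails) posterior.
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# Both agents start with Pr(heads) = 0.5.
weak_prior = (1, 1)        # "somewhere between 0 and 1, no idea where"
strong_prior = (500, 500)  # "definitely about 0.5", backed by much evidence

# Modest new evidence: 8 heads in 10 tosses.
weak = posterior_mean(*weak_prior, 8, 2)      # 9/12 = 0.75: jumps a lot
strong = posterior_mean(*strong_prior, 8, 2)  # 508/1010 ~ 0.503: barely moves
print(weak, strong)
```

The seat-of-the-pants guesser is swung hard by ten tosses; the well-evidenced estimator stays near 0.5, exactly as described above.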
> “I know the probabilities involved and they are 50% for X and 50% for Y” and “I don’t know”.

So if we can distinguish between those, could we further distinguish between

> a uniform distribution on the 0 to 1 range

and “I don’t know”?
Let’s say a biased coin with unknown probability p of landing heads is tossed, where p is uniform on (0, 1), and “I don’t know” means you can’t predict better than random guessing. So saying p is 50% doesn’t matter, because it doesn’t beat a random guess.
But what if we tossed the coin twice, and I had you guess both outcomes before the tosses? If you get at least one guess correct, you get to keep your life. Assuming you want to play to keep your life, how would you play? The coin still has p uniform on (0, 1), but it seems like “I don’t know” doesn’t mean the same thing anymore, because you can play in a way that better predicts the outcome of keeping your life.
You would guess (H, T) or (T, H), but avoid guessing randomly, because random guessing can produce things like (H, H). That is really bad: if p is uniform on (0, 1), then a 90% chance of heads is just as likely as a 10% chance, and a 10% chance of heads is so bad for (H, H) that even the 90% case doesn’t make up for it.

If p is 90% or 10%, guessing (H, T) or (T, H) results in the same small probability of dying: 9%. But (H, H) results in, at best, a 1% chance of dying and, at worst, an 81% chance. Saying “I don’t know” in this scenario doesn’t feel the same as “I don’t know” in the first scenario. I am probably confused.
> but it seems like “I don’t know” doesn’t mean the same thing anymore, because you can play in a way that can better predict the outcome of keeping your life.
But you’ve changed things :-) In your situation you know a very important thing: that the probability p is the same for both throws. That is useful information which allows you to do some probability math (specifically compare 1 - p(1-p) and 1 - p^2).
But let’s say you don’t toss the same coin twice, but you toss two different coins. Does guessing (H,T) help now?
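A quick numerical check of both cases (my sketch, not from the thread: averaging the survival probabilities over a fine grid of p values as a stand-in for the integral over the uniform prior). With one coin tossed twice, the mixed guess (H, T) survives more often than (H, H); with two independent coins, the advantage disappears:

```python
# Survival probability, averaged over p ~ Uniform(0, 1), midpoint-rule grid.
N = 100_000
grid = [(i + 0.5) / N for i in range(N)]  # midpoints in (0, 1)

# One coin tossed twice, same probability of heads p on both tosses.
# Guess (H, T): you die only on (T, H), i.e. with probability (1 - p) * p.
mixed = sum(1 - (1 - p) * p for p in grid) / N      # exact value: 5/6
# Guess (H, H): you die only on (T, T), i.e. with probability (1 - p) ** 2.
doubled = sum(1 - (1 - p) ** 2 for p in grid) / N   # exact value: 2/3

print(f"one coin: (H,T) survives {mixed:.3f}, (H,H) survives {doubled:.3f}")

# Two independent coins with p1, p2 ~ Uniform(0, 1): E[p1] = E[p2] = 1/2,
# so (H, T) dies with E[(1 - p1) * p2] = 1/4 and (H, H) dies with
# E[(1 - p1) * (1 - p2)] = 1/4. Every guess survives with probability 3/4:
# the shared p between the two tosses was the information doing the work.
```

The same-coin assumption is what makes the mixed guess pay off; drop it and “I don’t know” is back to meaning you can’t do better than anything else.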
Now, just to make sure I got it, does this make sense: the question of god’s existence (assuming the term was perfectly defined) is a yes/no question, but you are conceptualising the probability that a yes or a no is true. That is why you are using a uniform distribution for a question with a binary answer. It is not representing the answer but your current confidence. Right?
Skipping a complicated discussion about many meanings of “probability”, yes.
Think about it this way. Someone gives you a box and says that if you press a button, the box will show you either a dragon head or a dragon tail. That’s all the information you have.
What’s the probability of the box showing you a head if you press the button? You don’t know. This means you need an estimate. If you’re forced to produce a single-number estimate (a “point estimate”), it will be 50%. However, if you can express this estimate as a distribution, it will be uniform from 0 to 1. Basically, you are very unsure about your estimate.
Now, let’s say you have had this box for a while and pressed the button a couple of thousand times. Your tally is 1017 heads and 983 tails. What is your point estimate now? More or less the same, rounding to 50%. But the distribution is very different now: you are much more confident about your estimate.
Your probability estimate is basically a forecast of what you think will happen when you press the button. Like any forecast, it has a confidence interval around it, which can be wide or narrow.
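A sketch of how the tally narrows the distribution (my numbers, modelling the belief as a uniform prior updated Beta-style, which is one standard way to formalize this): after 1017 heads and 983 tails the point estimate is still about 0.5, but the spread around it shrinks from roughly 0.29 to about 0.01.

```python
def beta_mean_std(a, b):
    # Mean and standard deviation of a Beta(a, b) belief about Pr(heads).
    mean = a / (a + b)
    std = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    return mean, std

# Before pressing the button: uniform belief, Beta(1, 1).
prior = beta_mean_std(1, 1)
# After 1017 heads and 983 tails: Beta(1 + 1017, 1 + 983).
posterior = beta_mean_std(1 + 1017, 1 + 983)
print(prior)      # mean 0.5, std ~ 0.289: wide forecast interval
print(posterior)  # mean ~ 0.508, std ~ 0.011: narrow forecast interval
```

The forecast barely changes; the confidence interval around it collapses, which is exactly the wide-versus-narrow distinction above.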
“I don’t know” is a perfectly valid answer. Sometimes it’s called Knightian uncertainty or Rumsfeldian “unknown unknowns”.

> I intuitively feel a 50-50 chance implies a uniform distribution.

No. Why would it? Well, imagine a bet on a fair coin flip. That’s a 50-50 chance, right? And yet there is no uniform distribution in sight.

You are obviously right! This is helpful :)

Thanks for the detailed explanation. It helps!

I understand now. Thanks!