Less Wrong has a number of participants who endorse the idea of assigning probability values to beliefs. Less Wrong also seems to have a number of participants who broadly fall into the “New Atheist” group, many of whom insist that there is an important semantic distinction to be made between “lack of belief in God” and “belief that God does not exist.”
I’m not sure how to translate this distinction into probabilistic terms, assuming it is possible to do so—it is a basic theorem in standard probability theory (e.g. starting from the Kolmogorov axioms) that P(X) + P(not(X)) = 1 for any event X. In particular, if you take “lack of belief in God” to mean that you assign a value very close to 0 for P(“God exists”), then you must assign a value very close to 1 for P(not(“God exists”)). I would have thought (perhaps naively) that not(“God exists”) and “God does not exist” are equivalent, and that what it means to say that you believe in some proposition X is that you assign it a probability that is close to 1 (though not exactly 1, if you’re following the advice to never assign probabilities of exactly 0 or 1 to anything). This would imply that a lack of belief in God implies a belief that God does not exist.
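For reference, the theorem is a one-line consequence of the axioms, since X and not(X) are disjoint and together exhaust the sample space; a sketch in LaTeX:

```latex
% Complement rule from the Kolmogorov axioms:
% X and \neg X are disjoint, and X \cup \neg X = \Omega (the whole sample space).
\begin{align*}
  P(X) + P(\neg X) &= P(X \cup \neg X) && \text{(finite additivity)} \\
                   &= P(\Omega) = 1    && \text{(normalization axiom)}
\end{align*}
```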
Am I misunderstanding something about translating these statements into probabilistic language? Or am I just wrong to think that there are people who simultaneously endorse both assigning probabilities to beliefs and the distinction between “lack of belief that God exists” and “belief that God does not exist”?
Shouldn’t a lack of belief in God imply:
P(not(“God exists”)) = 0.5
P(“God exists”) = 0.5
(I am completely ignoring the very important part of defining God in the sentence, as I take the question to be asking for a way to express ‘not knowing’ in probabilistic terms. This can be applied to any subject, really.)
That is indeed the chief problem here. I’m assuming you’re talking about the prior probability which we have before looking at the evidence.
No. Why would it?
“I don’t know” is a perfectly valid answer. Sometimes it’s called Knightian uncertainty or Rumsfeldian “unknown unknowns”.
I agree that “I don’t know” is a better answer, as there is no reason to talk in rationalist jargon in a case like this, especially when we haven’t even defined our terms. But could you explain to me why assigning probabilities of 0.5 to the two opposites (assuming the question is clear and binary) does not make sense for expressing ignorance?
Because you lose the capability to distinguish between “I know the probabilities involved and they are 50% for X and 50% for Y” and “I don’t know”.
Look at the distributions of your probability estimates. For the “I don’t know” case it’s a uniform distribution on the 0 to 1 range. For the “I know it’s 50%” it’s a narrow spike at 0.5. These are very different things.
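To make the contrast concrete, here is a minimal sketch (assuming numpy; modelling the “spike” as an arbitrarily tight normal around 0.5 is an illustrative choice, not anything canonical):

```python
import numpy as np

rng = np.random.default_rng(0)

# "I don't know": any value of the underlying chance is equally plausible.
dont_know = rng.uniform(0, 1, size=100_000)

# "I know it's 50%": the estimate sits in a narrow spike around 0.5.
know_50 = rng.normal(0.5, 0.01, size=100_000)

for name, est in [("I don't know", dont_know), ("I know it's 50%", know_50)]:
    print(f"{name}: mean = {est.mean():.2f}, spread (std) = {est.std():.3f}")

# Both samples are centred on 0.5, but the spreads differ by a factor of ~30:
# identical point estimates, very different states of knowledge.
```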
Ahhh, this is confusing me. I intuitively feel a 50-50 chance implies a uniform distribution. But what you are saying about the distribution being a spike at 0.5 makes total sense. Well, I guess I have a bit of studying to do...
Being a full-on Bayesian means not only having probability assignments for every proposition, but also having the conditional probabilities that will allow you to make appropriate updates to your probability assignments when new information comes in.
The difference between “The probability of X is definitely 0.5” and “The probability of X is somewhere between 0 and 1, and I have no idea at all where” lies in how you will adjust your estimates for Pr(X) as new information comes in. If your estimate is based on a lot of strong evidence, then your conditional probabilities for X given modest quantities of new evidence will still be close to 0.5. If your estimate is a mere seat-of-the-pants guess, then your conditional probabilities for X given modest quantities of new evidence will be all over the place.
Sometimes this is described in terms of your probability estimates for your probability estimates. That’s appropriate when, e.g., what you know about X is that it is governed by some sort of random process that makes X happen with a particular probability (a coin toss, say) but you are uncertain about the details of that random process (e.g., does something about the coin or how it’s tossed mean that Pr(heads) is far from 0.5?). But similar issues arise in different cases where there’s nothing going on that could reasonably be called a random process but your degree of knowledge is greater or less, and I’m not sure the “probabilities of probabilities” perspective is particularly helpful there.
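A small sketch of the updating point, under one common modelling assumption (the estimate is a Beta distribution over Pr(X), updated on binomial evidence; the Beta parameters and the batch of “new evidence” here are illustrative, not from the comment):

```python
def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of Pr(heads) for a Beta(prior_a, prior_b) prior,
    after observing the given tosses (Beta is conjugate to the binomial)."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

new_heads, new_tails = 8, 2  # a modest batch of new evidence

# Seat-of-the-pants guess: uniform prior Beta(1, 1), point estimate 0.5.
print(posterior_mean(1, 1, new_heads, new_tails))      # ~0.75: swings wildly

# Estimate backed by lots of prior data: Beta(500, 500), also point estimate 0.5.
print(posterior_mean(500, 500, new_heads, new_tails))  # ~0.503: barely moves
```

Both priors answer “0.5” before the new evidence arrives; only their responses to it differ, which is exactly the distinction being drawn.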
Thanks for the detailed explanation. It helps!
Well, imagine a bet on a fair coin flip. That’s a 50-50 chance, right? And yet there is no uniform distribution in sight.
“I know the probabilities involved and they are 50% for X and 50% for Y” and “I don’t know”.

So if we can distinguish between those two, could we further distinguish between a uniform distribution on the 0 to 1 range and “I don’t know”?
Let’s say a biased coin with unknown probability p of landing heads is tossed, where p is uniform on (0,1), and “I don’t know” means you can’t predict better than random guessing. So saying p is 50% doesn’t matter, because it doesn’t beat random.

But what if we tossed the coin twice, and I had you guess twice, before the tosses? If you get at least one guess correct, then you get to keep your life. Assuming you want to play to keep your life, how would you play? The coin still has p uniform on (0,1), but it seems like “I don’t know” doesn’t mean the same thing anymore, because you can play in a way that improves your chances of keeping your life.

You would guess (H,T) or (T,H), but avoid guessing randomly, because random guessing can produce things like (H,H). That is really bad because, if p is uniform on (0,1), then “probability of heads is 90%” is just as likely as “probability of heads is 10%”, and heads at 10% is terrible for (H,H), so bad that even the 90% case doesn’t really make up for it.

If p is 90% or 10%, guessing (H,T) or (T,H) would result in the same small probability of dying, 9%. But (H,H) would result in either a 1% or an 81% chance of dying. Saying “I don’t know” in this scenario doesn’t feel the same as “I don’t know” in the first scenario. I am probably confused.
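A quick check of that arithmetic (a Python sketch; 0.9 and 0.1 are the hypothetical biases from the comment above):

```python
# Dying probabilities in the shared-bias coin game, for two example biases.
for p in (0.9, 0.1):
    die_ht = (1 - p) * p     # guess (H,T): die only if toss 1 is T and toss 2 is H
    die_hh = (1 - p) ** 2    # guess (H,H): die only if both tosses are T
    print(f"p = {p}: P(die | guess HT) = {die_ht:.2f}, P(die | guess HH) = {die_hh:.2f}")

# p = 0.9 -> HT: 0.09, HH: 0.01;  p = 0.1 -> HT: 0.09, HH: 0.81
```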
But you’ve changed things :-) In your situation you know a very important thing: that the probability p is the same for both throws. That is useful information which allows you to do some probability math (specifically, compare the survival chances 1 - p(1-p) for guessing (H,T) and 1 - p^2 for guessing (T,T)).
But let’s say you don’t toss the same coin twice, but you toss two different coins. Does guessing (H,T) help now?
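Here is a small Monte Carlo sketch of both versions of the game (assumptions: each coin’s bias is drawn uniformly from (0, 1), and you survive if at least one guess is right):

```python
import random

def survival(guess, same_coin, trials=200_000):
    """Estimate the probability that at least one of the two guesses is correct."""
    survived = 0
    for _ in range(trials):
        p1 = random.random()                       # bias of the first coin
        p2 = p1 if same_coin else random.random()  # shared bias vs. an independent second coin
        toss1 = "H" if random.random() < p1 else "T"
        toss2 = "H" if random.random() < p2 else "T"
        survived += (guess[0] == toss1) or (guess[1] == toss2)
    return survived / trials

for same in (True, False):
    print("same coin" if same else "two coins",
          "-> HT:", round(survival("HT", same), 3),
          "HH:", round(survival("HH", same), 3))

# Same coin: HT survives ~0.833 (5/6), HH only ~0.667 (2/3).
# Independent coins: both come out ~0.75, so mixing your guesses no longer helps.
```

The mixed guess was only valuable because the two tosses shared the same unknown bias; with independent coins that correlation, and with it the advantage, disappears.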
I understand now. Thanks!
You are obviously right! This is helpful :)
Now, just to make sure I got it, does this make sense: the question of God’s existence (assuming the term was perfectly defined) is a yes/no question, but you are conceptualising the probability that a yes or a no is true. That is why you are using a uniform distribution in a question with a binary answer. It is not representing the answer but your current confidence. Right?
Skipping a complicated discussion about many meanings of “probability”, yes.
Think about it this way. Someone gives you a box and says that if you press a button, the box will show you either a dragon head or a dragon tail. That’s all the information you have.
What’s the probability of the box showing you a head if you press the button? You don’t know. This means you need an estimate. If you’re forced to produce a single-number estimate (a “point estimate”), it will be 50%. However, if you can produce this estimate as a distribution, it will be uniform from 0 to 1. Basically, you are very unsure about your estimate.
Now, let’s say you’ve had this box for a while and pressed the button a couple of thousand times. Your tally is 1017 heads and 983 tails. What is your point estimate now? More or less the same, rounding to 50%. But the distribution is very different now. You are much more confident about your estimate.
Your probability estimate is basically a forecast of what you think will happen when you press the button. Like with any forecast, there is a confidence interval around it. It can be wide or it can be narrow.
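A sketch of the box example in those terms (assuming scipy, and choosing to model the estimate as a Beta distribution, where the “no information” state is the uniform Beta(1, 1)):

```python
from scipy import stats

prior = stats.beta(1, 1)                   # before any presses: uniform over (0, 1)
posterior = stats.beta(1 + 1017, 1 + 983)  # after the 2000-press tally above

for name, dist in [("prior", prior), ("posterior", posterior)]:
    lo, hi = dist.interval(0.95)
    print(f"{name}: point estimate = {dist.mean():.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")

# prior:     point estimate 0.500, 95% interval (0.025, 0.975)
# posterior: point estimate ~0.508, 95% interval ~(0.487, 0.530)
# The forecast barely moves; the interval around it narrows enormously.
```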
You’re right about the probabilistic statements, with a potentially tangential elaboration. There are nonsense sentences (not contradictions like “A and not A”, but things that fail to parse, like “A and”), and it doesn’t make sense to assign probabilities to those. One might claim that “God exists” is a nonsense sentence in that way, but I think most New Atheists don’t take that approach.
The distinction that people are drawing is basically which framing should have the benefit of the doubt, since not believing a new statement is the default. This is much more important for social rationality / human psychology than it is for Bayesianism, where you just assign a prior and then start calculating likelihood ratios.
From my perspective, a belief needs to be about empirical facts to have a probability attached to it. I need to be able to clearly describe how the belief could be tested in principle.

In addition to beliefs about empirical facts, there are also beliefs like “Nobody loves me” that aren’t about specific empirical facts but that still matter a great deal.
I cannot speak for other atheists, but as far as I’m concerned I agree with you.
Since we have a hard time defining even a human being, I accept that “God” cannot be clearly defined in any model, but I accept that there are narrations that point to some being of divine nature, and I accept those as valid ‘references’ to God(s). To those, I give a very low probability, with very little Knightian uncertainty (meaning that I also give very little probability to future evidence that would raise this probability, and high probability to evidence that would lower it). For that account of the divine, I consider myself a full-fledged atheist.

There are other narrations, though, and I’ve heard of definitions that basically reduce to “the laws of physics”, to which I obviously give a very high probability, with very high meta-certainty.

There might be definitions or narrations that fall somewhere in the middle. On that account, I cannot say precisely what my probabilities are, and so it would be appropriate to say that I lack a belief in this kind of god, rather than holding a definite belief in its non-existence.
It seems to me that someone could quite consistently hold the following position:

“Atheist” means “lacking positive belief in any god or gods”. You can be an atheist without thinking the existence of gods is vanishingly improbable, or indeed without giving any thought at all to probabilities. I, as it happens, do prefer to think in probabilities when possible. Exactly what I think about the existence of God depends a great deal on how you define God, and it might be anywhere from “vanishingly unlikely” to “somewhat unlikely”, or in many cases “I can’t answer that question because it’s not clear enough what it means”. But, whatever way you pose the question, I don’t positively believe in any sort of god, and it’s therefore appropriate to call me an atheist.