Many things, if real, would have some nonzero chance of obtruding on your awareness, even if you haven’t looked for them. The fact that this hasn’t happened is evidence against their existence.
That presents an interesting chicken-and-egg problem, don’t you think?
I can’t consider the existence or non-existence of something without that something “obtruding on my awareness”, which automatically grants it evidence for existence. And I can never have this evidence against the existence of anything, because as soon as the thing enters my mind, poof! the evidence against disappears and the evidence for magically appears in its place.
Anyway, I know the point you’re trying to make. But taking it to absurd lengths leads to absurd results, which is generally not the desired outcome.
Sorry, I wasn’t clear. I didn’t mean “obtruding on your awareness” in the sense of having the idea of the thing occur to you. I meant that you encounter the thing in a way that is overwhelming evidence for its existence. Like, maybe you aren’t looking for goblins, but you might one day open the lid of your trashcan and see a goblin in there.
I am confused. So if you DON’T “encounter the thing in a way that is overwhelming evidence for its existence” then you have evidence against its existence?
That doesn’t seem reasonable to me.
Yes. Let
H = “Goblins exist.”
E = “I’ve seen a goblin in my trashcan under circumstances that make the existence of goblins overwhelmingly likely (in particular, the probability that I was hallucinating or dreaming is very low).”
Let us further suppose that the prior probability that I assign to the existence of goblins is very low.
Then P(H | E) > P(H). Hence, P(H | ~E) < P(H). Therefore, the fact that I haven’t seen a goblin in my trashcan is evidence against the existence of goblins.
Of course, it may be very weak evidence. It may not be evidence that I, as a computationally limited being, should take any time to weigh consciously. But it is still evidence.
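To make the jump from P(H | E) > P(H) to P(H | ~E) < P(H) concrete, here is a quick sanity check in Python; the three input probabilities are made up purely for illustration:

```python
# Toy check: if seeing a goblin would raise P(H), not seeing one must lower it.
p_h = 1e-6               # prior: goblins exist (invented number)
p_e_given_h = 1e-9       # P(convincing trashcan encounter | goblins exist)
p_e_given_not_h = 1e-12  # P(equally convincing hallucination | no goblins)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

p_h_given_e = p_e_given_h * p_h / p_e                  # Bayes on E
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # Bayes on ~E

print(p_h_given_e > p_h)      # True: the encounter would be evidence for goblins
print(p_h_given_not_e < p_h)  # True: so its absence is evidence against them
```

The two inequalities always come as a pair, whatever numbers you plug in, because P(H) is a weighted average of P(H | E) and P(H | ~E): if one side rises above the prior, the other must dip below it.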
As I said, I understand the point. To demonstrate my problem, replace goblins with tigers. I don’t think the fact that I haven’t seen a tiger in my trashcan is evidence against the existence of tigers.
In a world where tigers didn’t exist, I wouldn’t expect to see one in my trashcan. In a world where tigers did exist, I also wouldn’t expect to see a tiger in my trashcan, but I wouldn’t be quite as surprised if I did see one. My prior probability that tigers exist is very high, since I have lots of independent reasons to believe that they do exist. The conditional probability of observing no tiger in my trashcan is skewed very slightly towards the world where tigers do not exist, but not enough to affect a prior probability that is very close to 100% already. You could say the same for the goblin example, etc–my prior probability is close to zero, and although I’m more likely not to observe a goblin in my trashcan in the world where goblins don’t exist, I’m also not likely to see one in the world where goblins do exist. The prior probability is far more skewed than the conditional probability, so the evidence of not observing a goblin doesn’t affect my belief much.
The fact that you haven’t seen a tiger in your trashcan is, however, evidence that there is no tiger in your trashcan.
Edit: Which I think is more or less harmonious with your original post. It appears to me, however, that at some step in the discussion, there was a leap of levels from “absence of evidence for goblins in the trashcan is evidence of absence of goblins from the trashcan” to “absence of evidence for goblins in the trashcan is evidence for the complete nonexistence of goblins”.
For practical purposes, sure, this is a case where “absence of evidence is evidence of absence” is not a very useful refrain. The evidence is so weak that it’s a waste of time to think about it. P(I see a tiger in my trashcan|Tigers exist) is very small, and not much higher than P(I see [hallucinate] a tiger in my trashcan|Tigers don’t exist). A very small adjustment to P(Tigers exist), of which you already have very high confidence, isn’t worth keeping track of… unless maybe you’re systematically searching the world for tigers, by examining small regions one at a time, each no more likely to contain a tiger than your own trashcan. Then you really would want to keep track of that very small amount of evidence: if you round it down to no evidence at all, then even after searching the whole world, you’d still have no evidence about tigers!
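To see how those tiny updates add up over a systematic search, here is a sketch in Python, borrowing the one-in-a-billion and one-in-a-trillion figures from the numerical example elsewhere in this thread:

```python
import math

p_quiet_given_tigers = 1 - 1e-9      # region shows no tiger, though tigers exist
p_quiet_given_no_tigers = 1 - 1e-12  # region shows no tiger (and no hallucination)

log_lr = math.log(p_quiet_given_tigers / p_quiet_given_no_tigers)  # tiny, negative
prior_log_odds = math.log(0.999 / 0.001)  # start out 99.9% confident in tigers

def prob_from_log_odds(x):
    # numerically stable logistic function
    if x >= 0:
        return 1 / (1 + math.exp(-x))
    return math.exp(x) / (1 + math.exp(x))

for n in (1, 10**6, 10**9, 10**12):
    p = prob_from_log_odds(prior_log_odds + n * log_lr)
    print(f"{n:>15,} empty regions -> P(tigers exist) = {p:.6g}")
```

Rounding each region’s evidence down to exactly zero would leave the posterior pinned at 0.999 forever, no matter how much of the world you searched.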
It’s not fully accurate to say
Only provided you have looked, and looked in the right place.
but it might be a useful heuristic. “Be mindful of the strength of evidence, not just its presence” would be more precise, because looking in the right place does provide a much higher likelihood ratio than not looking at all.
I don’t think the fact that I haven’t seen a tiger in my trashcan is evidence against the existence of tigers.
Is it because you deny that P(H | E) > P(H) in this case? Or do you acknowledge that P(H | ~E) < P(H) is true in this case, but you don’t interpret it as meaning “the fact that I haven’t seen a tiger in my trashcan is evidence against the existence of tigers”?
If you deny that P(H | E) > P(H), this might be because your implicit prior knowledge already screens off E from H. Perhaps we should, following Jaynes, always keep track of your prior knowledge X. Then we should rewrite P(H | E) > P(H) as P(H | E & X) > P(H | X). But if your prior knowledge already includes, say, seeing tigers at the zoo, then the additional experience of seeing a tiger in your trashcan may not make tigers any more likely to exist. That is, you could have that P(H | E & X) = P(H | X).
In that case, if you’ve already seen tigers at the zoo, then their absence from your trashcan does not count as evidence against their existence.
In this case I don’t think P(H | ~E) < P(H) applies.
/me looks into the socks drawer, doesn’t find any tigers
/me adjusts downwards the probability of tigers existing
/me looks into the dishwasher, doesn’t find any tigers
/me further adjusts downwards the probability of tigers existing
/me looks into the fridge, doesn’t find any tigers
...
You get the idea.
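For what it’s worth, the screening-off claim can be checked by brute-force enumeration of a toy model. The simplifying assumption below is mine and is stronger than anything claimed above: zoo sightings happen only if tigers exist, so X settles H outright:

```python
from itertools import product

# Toy joint model over H (tigers exist), X (saw tigers at the zoo),
# E (tiger in my trashcan). All numbers invented; note P(X | ~H) = 0.
P_H = 0.999
P_X_GIVEN_H = {True: 0.99, False: 0.0}
P_E_GIVEN_H = {True: 1e-9, False: 1e-12}

def joint(h, x, e):
    p = P_H if h else 1 - P_H
    p *= P_X_GIVEN_H[h] if x else 1 - P_X_GIVEN_H[h]
    p *= P_E_GIVEN_H[h] if e else 1 - P_E_GIVEN_H[h]
    return p

def p_h_given(**fixed):
    # P(H = True | the fixed observations), by summing over the 8 worlds
    num = den = 0.0
    for h, x, e in product([True, False], repeat=3):
        world = {"h": h, "x": x, "e": e}
        if all(world[k] == v for k, v in fixed.items()):
            den += joint(h, x, e)
            if h:
                num += joint(h, x, e)
    return num / den

print(p_h_given(x=True))           # 1.0: X alone settles H
print(p_h_given(x=True, e=True))   # 1.0: the trashcan tiger adds nothing
print(p_h_given(x=True, e=False))  # 1.0: and its absence subtracts nothing
```

With a less extreme X the screening off is only approximate, which matches the hedged wording above that you *could* have P(H | E & X) = P(H | X).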
Sorry, I think that I was editing my comment after you replied. (I have no excuse. I think what happened was that I was going to make a quick typofix, but the edit grew longer, and by the end I’d forgotten that I had already submitted the comment.)
How do you react to my conjecture that your background knowledge screens off (or seems to) the experience of seeing a tiger in your trashcan from the hypothesis that tigers exist?
I don’t think screening off helps with the underlying problem.
Let’s recall where we started. I commented on the expression “absence of evidence is evidence of absence” by saying “Only provided you have looked, and looked in the right place.”
The first part should be fairly uncontroversial. If you don’t look you don’t get any new evidence, so there’s no reason to update your beliefs.
Now, the second part, “the right place”. In this thread Wes_W gives a numerical example that involves searching for tigers in houses and says that you need to search about 5 billion houses to drop your confidence to 90% -- and if you search a trillion houses and still don’t find a tiger, “then you’d be insane to still claim that tigers probably do exist.”
Well, let’s take this example as given but change one little thing. Let’s say I’m not looking for tigers—instead, I heard that there are two big rocks, Phobos and Deimos, and I’m looking for evidence of their existence.
I search a house and I don’t find them. I search 5 billion houses and I don’t find them. I search a trillion houses and still don’t find them. At this point would I be insane to believe Phobos and Deimos exist?
That is the issue of “looking in the right place”.
I agree that the “looking” part is important: Looking and not finding evidence is a different kind of “absence of evidence” than just not looking.
Well, let’s take this example as given but change one little thing. Let’s say I’m not looking for tigers—instead, I heard that there are two big rocks, Phobos and Deimos, and I’m looking for evidence of their existence.
I search a house and I don’t find them. I search 5 billion houses and I don’t find them. I search a trillion houses and still don’t find them. At this point would I be insane to believe Phobos and Deimos exist?
I think it would indeed be pretty silly to maintain that a) they exist and b) each house has an independent 10^-9 chance of containing them, after searching a trillion houses and finding neither. But if you didn’t place much credence in anything like b) in the first place, your confidence in a) may not be meaningfully altered.
If you already thought Phobos and Deimos were moons of Mars, then you would have extremely minimal evidence against their existence. But again, we can construct a Paradox of the Heap-type setup where you search the solar system, one household-volume at a time, and if all of them come up empty you should end up thinking Phobos and Deimos probably aren’t real, so each individual household-volume must be some degree of evidence.
My thought here—and perhaps we agree on this, in which case I’m happy to concede the point—is that the need to look in the right place is technically already covered by the relevant math, specifically by the different strengths of evidence. But for us puny humans who are doing this without explicit numerical estimates, and who aren’t well-calibrated to nine significant figures, it’s a good rule of thumb.
(This comment has been edited multiple times. My apologies for any confusion.)
where you search the solar system, one household-volume at a time
Well, you’d do better to search all of those volumes at once. Doing it one volume at a time has a significant chance of failing to find the moons even if they exist, since the moons move over time, and therefore failing to find them isn’t significant evidence of their nonexistence.
But that’s largely orthogonal to your point.
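A quick Monte Carlo sketch of that evasion effect, in an invented toy world where the target hops to a uniformly random cell between inspections:

```python
import random

def sweep_finds_target(n_cells, rng):
    # Inspect cell 0, then 1, then 2, ... while the target keeps moving.
    target = rng.randrange(n_cells)
    for cell in range(n_cells):
        if cell == target:
            return True
        target = rng.randrange(n_cells)  # target relocates before the next look
    return False

rng = random.Random(0)
trials = 100_000
hits = sum(sweep_finds_target(20, rng) for _ in range(trials))
print(f"detection rate of a full sequential sweep: {hits / trials:.3f}")
```

Searching all 20 cells at once would find the target with certainty, but the sweep misses roughly (19/20)^20 ≈ 36% of the time despite “covering” every cell, so a completed sweep is correspondingly weaker evidence of absence.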
Well, you’d do better to search all of those volumes at once.
Kinda hard to do, but more to the point, the assumption that a single search is sufficient (= nothing changes with time) may not be true.
In fact, if you want to update your beliefs with absence of evidence, then every time your glance sweeps across a volume of space which physically could hold a tiger you need to update your beliefs about non-existence of tigers.
And then you get into more trouble because if your beliefs in (non)existence of tigers are time-specific, as they should be, the evidence from the previous second might not be relevant to the (non)existence of tigers in the next second. You need specific assumptions about persistence of entities like tigers on certain time scales (e.g. tigers don’t persist on the time scale where the unit of time is a billion years).
(nods) Systems that don’t assign very low priors to such “evasive” events can easily wind up incorrigibly believing falsehoods, even if they process evidence properly.
your confidence in a) may not be meaningfully altered
Meaningfully? I thought we were counting infinitesimals :-D
If we are talking about “meaningfully altered” (or what I’d call “detectable”) then not finding a tiger in my rubbish bin does not meaningfully alter my beliefs and the absence of evidence is NOT evidence of absence.
the need to look in the right place is technically already covered by the relevant math
I am not sure of that. First, we’re concerned with statistics, not math (and I think this is a serious difference). Second, I haven’t thought this through, but I suspect a big issue here is what exactly your belief is. To give a quick example, when you don’t find a tiger in your garbage, is the tiger live and real or plush and a toy? When you’re unsure about the existence of something, your idea of what exactly that something is can be fuzzy and that affects what kind of evidence you’ll accept and where you will look for it.
Meaningfully? I thought we were counting infinitesimals :-D
As in “for most practical purposes, and with human computational abilities, this is no update at all”. I’m not sure we can usefully say this isn’t really evidence after all, or we run into Paradox of the Heap problems.
When you’re unsure about the existence of something, your idea of what exactly that something is can be fuzzy and that affects what kind of evidence you’ll accept and where you will look for it.
Let me give an example where I think “absence of evidence is evidence of absence” is applicable, even though I’m not sure anyone has ever looked in the right place: Bigfoot.
Bigfoot moves around. It is possible that all of our searches happen to have missed it, like the one-volume-at-a-time search mentioned above. We don’t really know much about Bigfoot, so it’s hard to be sure if we’ve been looking in the right place. Nor are we quite sure what we’re looking for. And any individual hike through the woods has a very, very small chance of encountering Bigfoot, even if it does exist, so any looking that has happened by accident won’t be especially rigorous.
Nevertheless, if Bigfoot DID exist, we would expect there to be some good photographs by now. No individual instance of not finding evidence for Bigfoot is particularly significant, but all of the searches combined failing to produce any good evidence for Bigfoot makes me reasonably confident that Bigfoot doesn’t exist, and every year of continued non-findings would drive the probability of Bigfoot down a little more, if I cared enough to keep track.
Similar reasoning is useful for, say, UFOs and the power of prayer. In both cases, it is plausible that none of our evidence is really “looking in the right place” (because aliens might have arbitrarily good evasion capabilities [although beware of Giant Cheesecake Fallacy], because s/he who demands a miracle shall not receive one and running a study on prayer is like demanding a miracle, etc), but the dearth of positive evidence is pretty important evidence of absence, and justifies low confidence in those claims until/unless some strong positive evidence shows up.
an example where I think “absence of evidence is evidence of absence” is applicable
Oh, of course there are situations where “absence of evidence is evidence of absence” is applicable.
For a very simple example, consider belief in my second head. The absence of evidence that I have a second head is for me excellent evidence that I do not, in fact, have a second head.
The discussion is really about whether AoE=EoA is universal.
The second half of the sentence was the reason I was bringing it up in this context. We’ve looked, kinda, and not very systematically, and maybe not in the right places, but haven’t found any evidence. Is it fair to call this evidence against paranormal claims?
It’s complicated; I don’t think this problem can be resolved in one or two sentences.
For example, there is a clear relationship to how specific the claim/belief is. Lack of evidence is more important for very specific and easily testable claims (“I can bend this very spoon in front of your eyes”) and less important for general claims (“some people can occasionally perform telekinesis”).
Oh, and there’s a lot of evidence for paranormal claims. It’s just that this evidence is contested. Some of it has been conclusively debunked, but not all.
Trying to not get sidetracked into that specific sub-discussion: should you be skeptical of any given paranormal claim (specific or general), if some people have tried but nobody has been able to produce clear evidence for it? “Clear evidence” here meaning “better evidence than we would expect if the claim is false”, per the Bayesian definition of evidence.
Should you be more or less skeptical than upon first hearing the claim, but before examining the evidence about it?
I think I’m not getting why you object to “AoE is EoA”, if appending ”...but sometimes it’s so weak that we humans can’t actually make use of it” doesn’t resolve the disagreement in much the same way that ”...but only provided you have looked, and looked in the right place” does.
“Clear evidence” here meaning “better evidence than we would expect if the claim is false”
I am not sure what that means. Example: I claim that this coin is biased. I do a hundred coin flips, it comes up heads 55 times. Is this “clear evidence”?
Should you be more or less skeptical than upon first hearing the claim, but before examining the evidence about it?
Oh-oh, that’s a question about how you should form your prior. The Bayesian approach is notoriously reticent about discussing this.
But you can think about it this way: invert the belief and make it “Everyone who claims to have paranormal powers is a fraud”. When another one is debunked, it’s positive evidence for your belief and you should update it. The more people get debunked, the stronger your belief gets.
Does it ever get strong enough for you to dismiss all claimed evidence of paranormal powers sight unseen? I don’t know—it depends on your prior and on how you updated. I expect different results with different people.
I am not sure what that means. Example: I claim that this coin is biased. I do a hundred coin flips, it comes up heads 55 times. Is this “clear evidence”?
Without crunching the numbers, my best guess is no, a fair coin is not very unlikely to come up heads 55 times out of 100. I would guess that no possible P(heads) would have a likelihood ratio much greater than 1 from that test. If one of the hypotheses is that the coin is unfair in a way that causes it to always get exactly 55 heads in 100 flips, that might be clear/strong evidence, but this would require a different mechanism than usually implied when discussing coin flips.
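Crunching the numbers anyway, for the curious: the binomial coefficient cancels out of the likelihood ratio, so only the bias-dependent part matters, and the best case for “biased” is p = 0.55:

```python
import math

def log_binom_pmf(k, n, p):
    # log P(k heads in n flips | P(heads) = p)
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

n, k = 100, 55
lr = math.exp(log_binom_pmf(k, n, k / n) - log_binom_pmf(k, n, 0.5))
print(f"P(55 heads | p = 0.55) / P(55 heads | p = 0.5) = {lr:.3f}")  # ~1.650
```

So even the most favorable biased-coin hypothesis gets a likelihood ratio of only about 1.65:1 over the fair coin, under the 2:1 guessed above.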
Does it ever get strong enough for you to dismiss all claimed evidence of paranormal powers sight unseen? I don’t know—it depends on your prior and on how you updated. I expect different results with different people.
I don’t know either. This is a rather different question from whether you’re getting evidence at all, though.
No need for best guesses—this is a standard problem in statistics. What it boils down to is that there is a specific distribution of the number of heads that 100 tosses of a fair coin would produce. You look at this distribution, note where 55 heads falls on it… and then? What is clear evidence? How high a probability number makes things “likely” or “unlikely”? It’s up to you to decide what level of certainty is acceptable to you.
The Bayesian approach, of course, sidesteps all this and just updates the belief. The downside is that the output you get is not a simple “likely” or “unlikely”, it’s a full distribution and it’s still up to you what to make out of it.
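For illustration, a minimal sketch of that Bayesian output, under the added assumption (mine, not stated above) of a uniform prior over the coin’s bias, which makes the posterior after 55 heads and 45 tails a Beta(56, 46) distribution:

```python
import math

a, b = 1 + 55, 1 + 45  # Beta posterior parameters after 55 heads, 45 tails

def beta_pdf(p):
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

step = 1e-5  # crude midpoint integration over p in (0.5, 1)
p_biased_toward_heads = sum(beta_pdf(0.5 + (i + 0.5) * step) * step
                            for i in range(int(0.5 / step)))

print(f"posterior mean bias: {a / (a + b):.3f}")           # ~0.549
print(f"P(p > 0.5 | data):  {p_biased_toward_heads:.3f}")  # ~0.84
```

An 84% posterior probability that the coin leans heads is exactly the kind of output that is neither “likely” nor “unlikely” on its own; what you make of it is still up to you.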
...whether you’re getting evidence at all
As I said, it’s complicated and, in particular, depends on the specifics of the belief you’re interested in.
I would expect it to be hard to get to high levels of certainty in beliefs of the type “It’s impossible to do X” unless there are e.g. obvious physical constraints.
No need for best guesses—this is a standard problem in statistics. What it boils down to is that there is a specific distribution of the number of heads that 100 tosses of a fair coin would produce. You look at this distribution, note where 55 heads falls on it… and then? What is clear evidence? How high a probability number makes things “likely” or “unlikely”? It’s up to you to decide what level of certainty is acceptable to you.
The Bayesian approach, of course, sidesteps all this and just updates the belief. The downside is that the output you get is not a simple “likely” or “unlikely”, it’s a full distribution and it’s still up to you what to make out of it.
Right, it’s definitely not a hard problem to calculate directly; I specifically chose not to do so, because I don’t think you need to run the numbers here to know roughly what they’ll look like. Specifically, this test shouldn’t yield even a 2:1 likelihood ratio for any specific P(heads):fair coin, and it’s only one standard deviation from the mean. Either way, it doesn’t give us much confidence that the coin isn’t fair.
Asking what is clear evidence sounds to me like asking what is hot water; it’s a quantitative thing which we describe with qualitative words. 55 heads is not very clear; 56 would be a little clearer; 100 heads is much clearer, but still not perfectly so.
Suppose the chance of finding a tiger somewhere in a given household, on a given day, is one in a billion.
Or so say the pro-tigerians. The tiger denialist faction, of course, claims that statistic is made-up, and tigers don’t actually exist. But one household in a trillion might hallucinate a tiger, on any given day.
Today, you search your entire house—the dishwasher AND the fridge AND the trashcan etc.
P(You find a tiger|tigers exist) = .000000001
P(You find a tiger|tigers don’t exist) = .000000000001
P(You don’t find a tiger|tigers exist) = .999999999
P(You don’t find a tiger|tigers don’t exist) = .999999999999
And suppose you are 99.9% confident that tigers exist—you think you could make statements like that a thousand times in a row, and be wrong only once. (Perhaps rattling off all the animals you know.) Your prior odds ratio is 999 to 1.
So you take your prior odds, (.999/.001) and multiply by the likelihood ratio, (.999999999/.999999999999), to get a posterior odds ratio of 998.999999002 to 1. This is, clearly, a VERY small adjustment.
What if you search more households: how many would you have to search, without finding a tiger, before you dropped just to 90% confidence in tigers, where you still think tigers exist but would not willingly bet your life on it? If I’ve done the math right, about five billion. There probably aren’t that many households in the world, so searching every house would be insufficient to get you down to just 90% confidence, much less 10% or whatever threshold you’d like to use for “tigers probably don’t exist”.
(And my one-in-a-billion figure is probably far too high, and so searching every household in the world should get you even less adjustment...)
But if you could search a trillion houses at those odds, and still never found a tiger—then you’d be insane to still claim that tigers probably do exist.
And if a trillion searches can produce such a shift, then each individual search can’t produce no evidence. Just very little.
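The arithmetic behind those figures, for anyone who wants to check them against the probabilities stated above:

```python
import math

prior_odds = 0.999 / 0.001                   # 99.9% confident tigers exist
lr_per_house = 0.999999999 / 0.999999999999  # P(no tiger | exist) / P(no tiger | don't)
target_odds = 0.9 / 0.1                      # where 90% confidence sits

# Each empty house multiplies the odds by lr_per_house (slightly below 1),
# so solve prior_odds * lr_per_house**n = target_odds for n:
n = math.log(target_odds / prior_odds) / math.log(lr_per_house)
print(f"empty houses needed to reach 90%: {n:,.0f}")  # ~4.7 billion

log10_odds = (math.log(prior_odds) + 1e12 * math.log(lr_per_house)) / math.log(10)
print(f"log10 of the posterior odds after a trillion empty houses: {log10_odds:,.0f}")
```

That confirms the rough five-billion figure, and after a trillion empty houses the posterior odds on tigers stand at about 10^-431 to 1.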
Bayes’ Theorem implies that you can take the prior odds of the hypothesis A, i.e. the ratio P(A)/P(a) of its probability to the probability of its negation a, and update that to take the evidence E into account by multiplying in the ratio of the probability of that evidence given A and given a: new odds = old odds * P(E|A)/P(E|a).
Play around with that until you see the truth of the claim you asked about. Note that P(a) = 1 − P(A).
I am confused. So if you DON’T “encounter the thing in a way that is overwhelming evidence for its existence” then you have evidence against its existence?
Under the technical definition of “evidence”, yes. In practice, it’s a question of how likely you would be to have seen one by now if they were real.