canonical dataset of observations [...] unreliable prior should gradually be diluted
Indeed, if you have enough observations then the prior eventually doesn’t matter. The difficulty is in the selection of the observations. Ideally you should include every potentially relevant observation—including, e.g., every time someone looks up at the sky and doesn’t see an alien spaceship, and every time anyone operates a radar or a radio telescope or whatever and sees nothing out of the ordinary.
In practice it’s simply impractical to incorporate every potentially relevant observation into our thinking. But that makes it awfully easy to have some bias in selection, and that can make a huge difference to the conclusions.
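To see how much selection can matter, here is a toy simulation (all rates are made up): unexplained events are rare, mundane observations are almost never reported, and puzzling ones always are. The frequency of unexplained events among the reports then wildly overstates their true frequency.

```python
import random

random.seed(0)

# Made-up ground truth: 1% of all aerial observations are genuinely unexplained.
events = ['unexplained' if random.random() < 0.01 else 'mundane'
          for _ in range(100_000)]

# Biased selection: unexplained sightings are always reported,
# mundane ones only 0.1% of the time.
reported = [e for e in events
            if e == 'unexplained' or random.random() < 0.001]

true_rate = events.count('unexplained') / len(events)
reported_rate = reported.count('unexplained') / len(reported)
print(f'true rate: {true_rate:.3f}, rate among reports: {reported_rate:.3f}')
```

Estimating the frequency of anomalies from the reported set alone, without modelling how reports were selected, gives an answer that is off by orders of magnitude.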
Yes, these circumstances induce bias, and that is unfortunate if one wants to say anything about frequency and the like.
Another somewhat simpler question is this: given n observations of something the observer thinks is a UAP, what is the probability that at least one of these observations originated from a UAP?
If P( observation | UAP ) is strictly greater than 0 for each of these observations, then I suspect P(UAP) will go towards 1, monotonically, as the number of observations increases.
Is this hunch roughly correct? And how do I express it mathematically?
I also touch on this question in the section ‘Future work’ in my article (http://myinnerouterworldsimulator.neocities.org/index.html), but I don’t have the answer.
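One way to write the hunch down: assuming the observations are conditionally independent given each hypothesis, and writing p for the prior P(UAP), Bayes’ theorem gives the posterior after n observations as

```latex
P(\mathrm{UAP} \mid o_1,\dots,o_n)
  = \frac{p\,\prod_{i=1}^{n} P(o_i \mid \mathrm{UAP})}
         {p\,\prod_{i=1}^{n} P(o_i \mid \mathrm{UAP})
          + (1-p)\,\prod_{i=1}^{n} P(o_i \mid \lnot\mathrm{UAP})}
```

Dividing numerator and denominator by the right-hand product shows that the posterior depends only on the prior and on the product of the likelihood ratios P(o_i | UAP) / P(o_i | ¬UAP), so its limiting behaviour is governed by those ratios, not by P(o_i | UAP) alone.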
If P( observation | UAP ) is strictly greater than 0 for each of these observations, then I suspect P(UAP) will go towards 1, monotonically, as the number of observations increases.
No. This violates the law of conservation of expected evidence. The relevant question is whether P( observation | UAP ) is bigger or smaller than P( observation | ~UAP ).
The problem, as I mentioned above, is that it’s hard to estimate P( observation | UAP ).
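A minimal numerical sketch of that point, with made-up likelihoods: the posterior moves only when the two conditional probabilities differ, no matter how many observations accumulate.

```python
# Minimal sketch with made-up numbers. The posterior after n identical,
# independent observations depends on the RATIO of the two conditional
# probabilities, not on P(obs|UAP) alone.
def posterior(prior, p_obs_given_uap, p_obs_given_not_uap, n):
    num = prior * p_obs_given_uap ** n
    return num / (num + (1 - prior) * p_obs_given_not_uap ** n)

prior = 0.01
print(posterior(prior, 0.10, 0.01, 50))  # ratio > 1: posterior approaches 1
print(posterior(prior, 0.10, 0.10, 50))  # ratio = 1: posterior stays at the prior
print(posterior(prior, 0.01, 0.10, 50))  # ratio < 1: posterior approaches 0
```

In the middle case P( observation | UAP ) = 0.10 is strictly positive, yet fifty observations leave the posterior exactly at the prior — which is the conservation-of-expected-evidence point.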
What if we have n observations for which P( observation | ~UAP ) has, through investigation, been found to be 0, while P( observation | UAP ), though hard to determine, can reasonably be said to be strictly greater than 0?
Then P(UAP) will go towards 1, monotonically, as the number of observations increases, right?
What if we have n observations for which P( observation | ~UAP ) has, through investigation, been found to be 0
Um, I don’t think you understand what it means for P( observation | ~UAP ) to equal 0. If P( observation | ~UAP ) were really 0, then a single such observation would be enough to conclude that P(UAP) is 1.
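A quick sketch of why (numbers are made up): with P( observation | ~UAP ) = 0, the (1 − prior) term in the denominator of Bayes’ theorem vanishes, so a single observation yields a posterior of exactly 1 regardless of how small the prior is.

```python
# Single Bayesian update. If the observation is impossible under ~UAP,
# the second term in the denominator is 0 and the posterior is exactly 1.
def update(prior, p_obs_given_uap, p_obs_given_not_uap):
    num = prior * p_obs_given_uap
    return num / (num + (1 - prior) * p_obs_given_not_uap)

print(update(1e-6, 0.001, 0.0))   # → 1.0, even from a one-in-a-million prior
print(update(1e-6, 0.001, 0.01))  # any nonzero P(obs|~UAP) changes this entirely
```

This is why a literal 0 is such a strong claim: in practice investigations can at best establish that P( observation | ~UAP ) is very small, not that it is exactly 0.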
So how should one interpret findings like this:
“We investigated n observations, and out of these there were k observations with sufficient observational data to rule out all known aerial phenomena as the cause.”
So that would imply that P(UAP) is pretty much 1?
What remains, then, is “merely” to determine what lies in this set ‘UAP’, as it could be pretty much anything.
So how should one interpret findings like this: “We investigated n observations, and out of these there were k observations with sufficient observational data to rule out all known aerial phenomena as the cause.”
If I take that statement at face value it means the observations were caused by some unknown phenomenon. Therefore, unknown phenomena of this type exist.