Bayesian approach: UFO vs. AI hypotheses
The goal of this post is not to prove or disprove the existence of so-called UFOs, or the feasibility of AI, but to study the limits of the Bayesian approach to complex problems.
Here we will test two hypotheses:
1) UFOs exist. For simplicity we will take the following form of this thesis: an unknown nonhuman intelligence exists on Earth and manifests itself through unknown laws of physics.
2) AI will be created. In the 21st century a computer program will be created that surpasses humans in every kind of intellectual activity by many orders of magnitude.
From a layman's point of view both hypotheses are bizarre, and so they belong to the reference class of “strange ideas”, most of which are false.
But both hypotheses have large communities which have accumulated a lot of evidence to support these ideas. (Here we can see “confirmation bias” at work.)
To begin with, we should point out the isomorphism of the two hypotheses: in both cases the question is the existence of nonhuman intelligence. The first claims that a nonhuman intelligence already exists on Earth; the second claims that a nonhuman intelligence will soon be created on Earth.
For a Bayesian estimate we need an a priori probability, which we then update with evidence.
Supporters of the AI hypothesis usually say that its a priori probability is quite high: if the human mind exists, then AI is possible, and in addition, humans typically manage to replicate the achievements of nature. Therefore, a priori, we can assume that the creation of AI is possible and even highly likely.
The situation with evidence in the field of AI is worse, because the creation of AI is a future event and direct empirical evidence is impossible. Moreover, the many failed attempts to create AI in the past are used as evidence against the possibility of its creation.
Therefore, success in “helping” disciplines is used as evidence for the possibility of AI: the performance of computers and its continued growth, progress in brain scanning, and the success of various computer programs at recognizing images and playing games. Such circumstantial evidence cannot be substituted directly into the formula for calculating the probability, so its weight will always involve taking something for granted.
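For reference, here is a minimal sketch of the updating machinery in question, in odds form (the prior and the likelihood ratios below are placeholder numbers, not estimates I am defending):

```python
# Minimal sketch of Bayesian updating in odds form; all numbers are illustrative.

def update_odds(prior_odds, likelihood_ratios):
    """Posterior odds = prior odds multiplied by the likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)
    of each independent piece of evidence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 0.001                                     # roughly a 1-in-1000 prior
posterior_odds = update_odds(prior_odds, [2.0, 1.5])   # two weak pieces of evidence
posterior_probability = posterior_odds / (1 + posterior_odds)
print(posterior_probability)                           # ~0.003
```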
In the case of UFOs the a priori case is less convincing, since the hypothesis claims not only that a nonhuman intelligence exists on Earth, but also that it uses unknown physical laws (for flying). This hypothesis is therefore more complex and so less probable. It is also unclear how a nonhuman intelligence could have evolved on Earth without outcompeting all other forms of life. This is where the alien theory of the origin of UFOs comes into play as an a priori hypothesis.
Proponents of the alien UFO hypothesis say that if human intelligence exists on Earth, then some kind of intelligence could also have appeared on other planets of our Galaxy long before us and could have come to our planet with more or less rational goals (exploration, play, etc.). In saying this they think they establish a high a priori probability for the UFO hypothesis. (This is not true, because they also have to assume that the aliens have very strange goal systems – for example, that they fly many light-years to drink cattle blood in the so-called cattle mutilation cases. This improbable goal system completely neutralizes the high probability of an alien origin of UFOs.)
We can note immediately that the a priori case for UFOs uses the same premise as the case for AI: namely, the possibility of nonhuman intelligence is justified by the existence of the human mind!
However, the UFO hypothesis requires the existence of new physical laws, whereas the AI hypothesis requires their absence (in the sense that creating AI requires that the brain can be described as an algorithmic computer, without any Penrose-style effects).
The history of science shows that the list of physical laws will never be complete: we keep discovering something new (dark energy, for example). On the other hand, there are no physical effects in our everyday environment that are inexplicable within the framework of known physical laws (except perhaps ball lightning). So, because of the need for new laws of physics, the a priori probability of the existence of UFOs is lower.
In terms of evidence, the UFO hypothesis stands in sharp contrast to the AI hypothesis. There are thousands of empirical reports of UFO sightings. However, the Bayesian weight (increase in credibility) of each piece of evidence is very small. That is, most of these reports are about equally likely to be true or false and so carry almost no information. Note that if we had 20 pieces of evidence each with a probability of truth greater than 50%, say 60%, then Bayes' formula would give very substantial total evidence of about 3000 to 1, since (0.6/0.4)^20 ≈ 3000; that is, it would strengthen the a priori hypothesis by a factor of about 3000.
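A quick sketch of that multiplication (treating each report as an independent piece of evidence with likelihood ratio 0.6/0.4, which is itself a strong assumption):

```python
# Cumulative weight of 20 independent reports, each favoring the hypothesis
# 60% to 40%; independence is assumed only for illustration.
per_report_ratio = 0.6 / 0.4        # 1.5 : 1 in favor per report
total_ratio = per_report_ratio ** 20
print(round(total_ratio))           # ~3325, i.e. roughly 3000 : 1
```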
Thus, the UFO hypothesis has a lower a priori probability but more empirical evidence (the truth of which we will not discuss here; see Don Berliner, “UFO Briefing Document: The Best Available Evidence”. My own position is that I am not a convinced UFO believer, but I allow that they could exist).
Discussions about AI always tend to turn into discussions about rationality, while most of the UFO community is a bastion of irrationality. In fact, both topics can be described in terms of Bayesian logic. The belief that some topics are inherently more rational than others is itself irrational.
The credibility of Bayesian evidence is not how close it is to equal probability of true or false. It’s how different the world would look with and without the hypothesis in question.
That is, we look at the world and ask, “How many bright-light-in-the-sky-moving-erratically events would we expect if UFOs were not real?” and compare that to the number of UFO sightings. If the numbers are very similar (they are), then UFO sightings are uncorrelated with UFO existence. Something like 100,000 events that have no relationship with UFO existence doesn't change the probability of UFOs existing at all.
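A rough sketch of that comparison (the counts and prior are placeholders; the only point is that a likelihood ratio near 1 leaves the posterior where the prior was):

```python
# If "strange light in the sky" reports are about equally likely whether or
# not UFOs exist, each report carries essentially no evidence.
# All numbers below are illustrative placeholders, not real statistics.
expected_reports_if_ufos_exist = 100_000   # misidentifications + hypothetical real events
expected_reports_if_no_ufos    = 100_000   # misidentifications alone

likelihood_ratio = expected_reports_if_ufos_exist / expected_reports_if_no_ufos

prior_odds = 0.001
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)    # 0.001 -- unchanged, no matter how many such reports pile up
```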
Now, a harder target. From the document:
The extremely common case of people privileging the hypothesis due to availability bias is almost wholly the explanation for why individuals “knowledgeable about the sky” see lights and think “UFOs”.
This is due largely to the author's lack of understanding of publication bias and false positive rates. If a radar controller fails to identify a signal as an actual aircraft or cloud about one time in ten thousand, and we have 100 million radar events a year, we'd expect about 10,000 radar UFO sightings a year, five times more than we apparently have.
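As a sanity check on that arithmetic (using the misidentification rate and event volume stated above, both of which are assumptions rather than measurements):

```python
# Expected number of false-positive radar "UFO" reports per year,
# given the assumed rate and volume stated in the comment above.
radar_events_per_year = 100_000_000
misidentification_rate = 1 / 10_000

expected_false_positives = radar_events_per_year * misidentification_rate
print(int(expected_false_positives))   # 10000 spurious radar reports per year
```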
Physical-trace cases reduce to burned shrubs and depressions in the dirt. Again: “how many burned-shrub-and-disturbed-dirt events should we expect if UFOs don't exist?”
The document’s case for UFOs is negligible. If this is the best available evidence, you should not believe in UFOs.
Yes, I mean exactly what you said about Bayesian probability: that the UFO evidence does not support the UFO-existence hypothesis any more than the UFO-nonexistence hypothesis.
The last time you posted an essay, multiple people, including me, suggested that you get people with better English skills to proofread your essays. (Edit: The spelling in this piece is better than in your earlier piece, but the grammar is not much better.) Don't be surprised if people aren't going to slog through your ideas if you aren't going to take that step.
Despite this, I've made some effort to read what you've written. It seems that you are essentially trying to compare the discussion about the possibility of strong AI with the discussion about the possibility of alien visitors to Earth. Is this correct?
I don't follow some of what you have written. For example, you say:
This is not how that theorem works. You can’t assume that each observation has a 50% chance of being an actual alien entity. I’m not sure if this is what you are saying. If it is then I’m very confused about how you are trying to use Bayes’s theorem.
Considering that the hard-core Bayesians here think that Bayesianism is the core of epistemology, the claim that something can be approached with Bayesian estimates is not a novel claim. Also, no one is claiming that topics are inherently more rational than others. It isn’t clear to me what that would mean. However, that doesn’t mean that some probability estimates aren’t more reasonable or rational than others. That’s not the same claim.
He's talking about the likelihood ratio P(sighting|aliens)/P(sighting|no aliens), which is a good measurement of the evidence gained from a sighting.
I gave up trying to read this, because grammatical errors and awkward sentence structure in this post seriously interfere with communication. You need a proofreader who speaks English natively.