A black swan is generally an event we knew was possible but assigned only a small share of the probability mass.
The flaw they expose is not actually an issue with rationality (or other forms of decision making) but with human compute and memory limits.
If your probability distribution for each trading day on a financial market is p=0.51 up, p=0.48 down, p=0.01 black swan, you may simply drop that long-tail term from your decision making. Only considering the highest-probability terms is an approximation, and it is arguably still “rational” since you are reasoning on math and evidence, but you will be surprised by the black swan.
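To make the approximation concrete, here is a minimal sketch in Python. The outcome magnitudes (+1% on an up day, -1% on a down day, -40% on a black swan day) are illustrative assumptions, not numbers from the original example:

```python
# Toy one-day return distribution; magnitudes are assumed for illustration.
outcomes = {
    "up":         (0.51,  0.01),   # (probability, return)
    "down":       (0.48, -0.01),
    "black_swan": (0.01, -0.40),
}

def expected_return(dist, min_prob=0.0):
    """Expected one-day return, ignoring branches below min_prob."""
    return sum(p * r for p, r in dist.values() if p >= min_prob)

full      = expected_return(outcomes)                 # -0.0037: negative EV
truncated = expected_return(outcomes, min_prob=0.05)  # +0.0003: looks positive

print(f"full distribution: {full:+.4f}")
print(f"tail dropped:      {truncated:+.4f}")
```

With these (assumed) magnitudes, dropping the tail term doesn’t just add noise; it flips the sign of the expected value, and therefore the decision.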
This leads naturally into the next logical offshoot. A human meat computer doesn’t have the memory or available compute to consider every low-probability long-tail event, but you could build an artificial system that does. That is part of the reason AI is so critically important and directly relevant to rationality.
Now a true black swan, one we didn’t even know was possible? Yeah, you are going to be surprised every time. If aliens start invading from another dimension, you need to be able to rapidly update your assumptions about how the universe works and respond accordingly. Rationality adapts well to this, versus alternatives like “the word of the government-sanctioned authority on a subject is truth.”
This is where overconfidence hurts. In the event of an ontology-breaking event like the invasion example, if you believe with p=1.0 that the laws of physics as discovered in the 20th century are absolute and complete, then what you are seeing in front of your eyes as you reload your shotgun, alien blood splattered everywhere, can’t be real. It has to be some other explanation. This kind of thinking is suboptimal.
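A one-line application of Bayes’ rule shows why p=1.0 is a trap. The likelihoods below are placeholder numbers; the point is only that a prior of exactly 1.0 returns 1.0 no matter what you observe, while a merely-very-confident prior updates hard:

```python
def posterior(prior, p_obs_if_h, p_obs_if_not_h):
    """P(H | observation) via Bayes' rule."""
    num = p_obs_if_h * prior
    return num / (num + p_obs_if_not_h * (1.0 - prior))

# H = "20th-century physics is absolute and complete"; the observation
# (alien blood on your boots) is far more likely if H is false.
print(posterior(1.0,   0.001, 0.9))   # 1.0   -- the p=1.0 prior cannot move
print(posterior(0.999, 0.001, 0.9))   # ~0.53 -- the confident-but-sane prior collapses
```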
Similarly, if you place the same confidence in theories built on decades of high-quality data, careful reasoning, and mathematical proof as in some random rumor you hear online, you will see nonexistent aliens everywhere. You were not weighting your information inputs by probability.
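One hedged sketch of what that weighting could look like: score each source by its likelihood ratio, i.e. how much more often the report shows up in worlds where the claim is true than in worlds where it is false. The reliability numbers here are invented for illustration:

```python
import math

def bayes_update(log_odds, p_report_if_true, p_report_if_false):
    """Shift log-odds by one report's log likelihood ratio."""
    return log_odds + math.log(p_report_if_true / p_report_if_false)

log_odds = math.log(1e-6)  # assumed prior odds of an alien invasion

# A random online rumor appears ~90% of the time if aliens are real and
# ~80% of the time if they are not (people post it anyway): ratio 1.125.
log_odds = bayes_update(log_odds, 0.9, 0.8)

posterior = 1.0 / (1.0 + math.exp(-log_odds))
print(f"{posterior:.2e}")  # ~1.1e-06: a weak source barely moves you
```

Decades of careful data would carry a likelihood ratio orders of magnitude larger, which is exactly the asymmetry the rumor-believer throws away.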
When it comes to rationality, the Black Swan Theory ( https://en.wikipedia.org/wiki/Black_swan_theory ) is an extremely useful test.
A truly rational paradigm should be built with anti-fragility in mind, especially towards Black Swan events that would challenge its axioms.
A Black Swan is better formulated as:
- Extreme Tail Event : its probability cannot be computed in the current paradigm. Its weight is p < Epsilon.
- Extreme Impact if it happens : Paradigm Revolution.
- Can be rationalised in hindsight, because there were hints. “Most” did not spot the pattern. Some may have.
If spotted a priori, one could call it a Dragon King: https://en.wikipedia.org/wiki/Dragon_king_theory
The Argument:
“Math + Evidence + Rationality + Limits makes it Rational to drop Long Tail for Decision Making”
is a prime example of a heuristic which falls into what Taleb calls “Blind Faith in Degenerate MetaProbabilities”.
It is likely based on an instance of {Absence of Evidence is Evidence of Absence : Ad Ignorantiam : Logical Fallacy}
The central argument of Anti-Fragility is that heuristics allocating some resources to Black Swan / Dragon King studies & contingency plans are infinitely more rational than “drop the long tail” heuristics.
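As a toy illustration of that claim, reuse the trading-day distribution from the parent comment with the same assumed magnitudes (+1% up, -1% down, -40% black swan) and add a hypothetical hedge costing 5 bps/day that caps any single-day loss at -5%. Compounding makes the difference stark:

```python
import math

# Assumed magnitudes, as in the parent comment's toy example.
DIST = [(0.51, 0.01), (0.48, -0.01), (0.01, -0.40)]

def daily_log_growth(dist, hedged=False):
    """Expected per-day log growth; the hypothetical hedge costs 5 bps/day
    and caps any single-day loss at -5%."""
    total = 0.0
    for p, ret in dist:
        if hedged:
            ret = max(ret, -0.05) - 0.0005
        total += p * math.log(1.0 + ret)
    return total

for hedged in (False, True):
    g = daily_log_growth(DIST, hedged)
    print(f"hedged={hedged}: per-day {g:+.5f}, "
          f"10-year multiple {math.exp(2520 * g):.2e}")
# hedged=False -> 10-year multiple ~4.8e-06 (effectively ruined)
# hedged=True  -> 10-year multiple ~1.5e-01 (battered but solvent)
```

Under these assumed numbers both books lose money (the toy market has negative true drift, as the first sketch showed), but the small steady premium keeps the hedged book four orders of magnitude ahead; that is what “allocating some resources to contingency plans” buys.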