Why does it surprise him that it didn’t? What is his evidence that the plague would have been expected to kill more people than it did?
He doesn’t have any evidence like that. He is merely pointing out that if we were to ask that question, among experts, we would get post-facto explanations which he would take with a heap of salt (because of the anthropic bias).
Taleb’s brand of rationality, which he calls empirical skepticism (as opposed to just empiricism or just skepticism), largely trumpets uncertainty. I think he sees this as a stance against Bayesians (because Bayesians will usually choose an artificially narrow hypothesis space when making formal Bayesian models, and as a result will usually get answers that are much more certain than is merited). He hasn’t yet spoken about Bayesians specifically, though—just “nerds” (statisticians who lack street smarts). When reading his stuff, though, I feel it converts well into Bayes. He is just saying that we shouldn’t allow our beliefs to converge faster than is merited.
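The “artificially narrow hypothesis space” point can be made concrete with a toy example (my construction, not Taleb’s): a Bayesian who rules out the true hypothesis in advance ends up near-certain of a wrong one, while the same update over a wider space stays honest.

```python
import numpy as np

# Toy illustration (mine, not Taleb's): a coin with true bias 0.9,
# updated over a hypothesis space that excludes the truth vs one that includes it.
rng = np.random.default_rng(0)
flips = rng.random(200) < 0.9        # true bias: 0.9
heads = flips.sum()

narrow = np.array([0.3, 0.4, 0.5, 0.6, 0.7])  # excludes the true value
wide = np.linspace(0.01, 0.99, 99)            # includes it

def posterior(hyps):
    # Uniform prior; binomial likelihood up to a constant.
    like = hyps ** heads * (1 - hyps) ** (len(flips) - heads)
    return like / like.sum()

for name, hyps in [("narrow", narrow), ("wide", wide)]:
    post = posterior(hyps)
    print(f"{name}: MAP = {hyps[post.argmax()]:.2f}, "
          f"posterior mass there = {post.max():.3f}")
```

The narrow model piles essentially all its posterior mass on 0.7 (the least-wrong option it was allowed to consider), reporting near-certainty about a value that is simply false; the wide model lands near 0.9. That is the sense in which a narrow model yields answers “much more certain than is merited.”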
People are overconfident far more often than underconfident.
So, his point with the black plague is really that we should answer “I don’t know” if we are asked such a question, and even if an expert gives a better-sounding answer, we should assume it’s an example of the narrative fallacy.
The point of his argument, if I understand correctly, is that we should expect a bubonic plague in the future to be more of an x-risk than it was in the past, because our past evidence is filtered by anthropic considerations. And because his argument isn’t in any way specific to the plague, he will expect x-risks in general to be more prevalent in the future.
However, I don’t understand how to quantify this. How much should I update towards the next bubonic plague being an x-risk? A little? A lot?
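One way to see why the question resists a clean answer: under a fully anthropic treatment, our survival carries no evidence at all, so the “update” is really a refusal to update downward on the risk. A toy sketch (the grid, the survival count, and the survivor-selection assumption are all mine, not Taleb’s or the commenters’):

```python
import numpy as np

# p = per-outbreak probability that a plague-like event is an extinction event.
p = np.linspace(0.0, 0.99, 100)    # hypothesis grid
prior = np.ones_like(p) / len(p)   # uniform prior over p

n_survived = 3  # outbreaks humanity has lived through (plague, 1918 flu, ...)

# Naive Bayesian: treat each survival as ordinary evidence against high p.
naive_post = prior * (1 - p) ** n_survived
naive_post /= naive_post.sum()

# Anthropic correction: observers only exist on survival branches, so
# P(we observe "survived" | p) = 1 for every p, and survival is no evidence.
anthropic_post = prior.copy()

print("naive posterior mean of p:     %.3f" % (p * naive_post).sum())
print("anthropic posterior mean of p: %.3f" % (p * anthropic_post).sum())
```

The gap between the two posterior means is the size of the anthropic correction, and it depends entirely on how strongly you think your observations are conditioned on survival—which is why “a little or a lot” has no model-free answer.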
The historical plague could have wiped out humanity, but for anthropic reasons. And also, the flu of 1918 could have wiped out humanity, but for anthropic reasons. And the flu virus created recently in the lab could have escaped and wiped out humanity, but didn’t, for anthropic reasons. And also I have in my garage the pestilent bacterium Draco invisibilis, and if it ever infects a human, we are all doomed; but it never has, for anthropic reasons...