I don’t recall him saying that he’s only a cultural Christian and doesn’t care whether any of it is actually true.
You take a certain epistemology for granted that Taleb doesn’t share.
Taleb follows a heuristic of not wanting to be wrong on issues where being wrong is costly, and of putting less energy into updating his beliefs on issues where being wrong is not costly.
He doesn’t care whether Christianity is true in the sense of caring about analysing the evidence for whether it’s true. He might care in the sense that he has an emotional attachment to it being true. If I lend you a book, I care about whether you give it back to me because I trust you to give it back. That’s a different kind of caring than the kind I have about pure matters of fact.
One of Taleb’s examples is that in the 19th century, someone who went to a doctor who treated him based on intellectual reasoning would probably have done worse than someone who went to a priest. Taleb is skeptical that you get very far with intellectual reasoning, and thinks that only empiricism has made medicine better than doing nothing.
We might have made some progress since then, but Taleb still thinks there are choices where the Christian ritual will be useful even if the ritual is built on bad assumptions, because following the ritual keeps people from acting out of hubris. It keeps people from thinking they understand enough to act based on understanding.
That’s also the issue with the New Atheists. They are too confident in their own knowledge and not skeptical enough. That lack of skepticism is in turn dangerous, because they believe that just because no study has shown genetically modified plants to be harmful, those plants are safe.
(thank you for helping me try to understand him on this point, by the way)
This seems coherent. But, to be honest, weak (which could mean I still don’t get it).
We also seem to have gotten back to the beginning, and the quote. Leaving aside for now the motivated stopping regarding religion, we have a combination of the Precautionary Principle, the logic of Chesterton’s Fence, and the difficulty of assessing risks on account of Black Swans.
… which would prescribe inaction on any question I can think of. It looks as if we’re not even allowed to calculate the probability of outcomes, because no matter how much information we think we have, there can always be black swans just outside our models.
Should we have ever started mass vaccination campaigns? Smallpox was costly, but it was a known, bounded cost that we had been living with for thousands of years, and although by everything we knew the risks looked obviously worth taking, relying on everything we know to make decisions is a manifestation of hubris. I have no reason to expect to be violently assaulted when I go out tonight, but of course I can’t possibly have taken all factors into consideration, so I should stay home, as that will be safer if I’m wrong. There’s no reason to think pursuing GMOs will be dangerous, but that’s only considering all we know, which can’t be enough to meet the burden of proof under the strong precautionary principle. There’s nowhere near enough evidence to even locate Christianity in hypothesis space, but that’s just intellectual reasoning… We see no reason not to bring down laws and customs against homosexuality, but how can we know there isn’t a catastrophic black swan hiding behind that Fence?
The phrase “no reason to think” should raise alarm bells. It can mean we’ve looked and haven’t found any, or that we haven’t looked.
There’s no reason to think that there’s a teapot-shaped asteroid resembling Russell’s teapot either.
And I’m pretty sure we haven’t looked for one, either. Yet it would be ludicrous to treat it as if it had a substantial probability of existing.
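The distinction matters because “no finding” is only evidence of absence when a search actually happened. A toy Bayesian calculation makes this concrete (all the numbers here are made-up assumptions for illustration):

```python
# Toy illustration of "no reason to think":
# "we looked and found nothing" updates our belief; "we haven't looked" does not.
# All probabilities below are made-up assumptions.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """P(hypothesis | observation) via Bayes' rule."""
    num = p_obs_given_h * prior
    return num / (num + p_obs_given_not_h * (1 - prior))

prior = 0.10  # assumed prior probability that some harm exists

# Case 1: we searched, and a real search would likely have
# found the harm if it existed, yet we found nothing.
p_after_search = posterior(prior, p_obs_given_h=0.2, p_obs_given_not_h=1.0)

# Case 2: we never looked. "No finding" was guaranteed either way,
# so the observation carries no information and the prior is unchanged.
p_after_no_search = posterior(prior, p_obs_given_h=1.0, p_obs_given_not_h=1.0)

print(p_after_search, p_after_no_search)  # the first is well below the prior
```

Searching and finding nothing pushes the probability of harm down; never searching leaves it exactly where it started, which is why the two readings of “no reason to think” should be kept apart.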
A priori, eating most things is a bad idea. Thus the burden is on GMO advocates to show their products are safe.
Note that probably all crops are “genetically modified” by less technologically advanced methods. I’m not sure if that disproves the criticism or shows that we should be cautious about eating anything.
We should be cautious about eating anything that doesn’t have a track record of being safe.
You changed your demand. If GM crops have fewer mutations than conventional crops, which are genetically modified by irradiation plus selection (and have a track record of being safe), this establishes that GM crops are safe, if you accept the claim that, say, the antifreeze protein we already eat in fish is safe. Requiring GM crops themselves to have a track record is a bigger requirement.
No, I’m saying we need some track record for each new crop, including the GMO ones, roughly in proportion to how different they are from existing crops.
Yes, this is different from merely “showing that GMO products are safe”. Because we also have the inside view.
I agree with this.
But then we look, and this turns into “we haven’t looked enough”. Which can be true, so maybe we go “can anyone think of something concrete that can go wrong with this?”, and ideally we will look into that, and try to calculate the expected utility.
But then it becomes “we can’t look enough—no matter how hard we try, it will always be possible that there’s something we missed”.
Which is also true. But if, just in case, we decide to act as if unknown unknowns are both certain and significant enough to override the known variables, then we start vetoing the development of things like antibiotics or the internet, and we stay Christians because “it can’t be proven wrong”.
HIV.
Its worst impact was and is in Sub-Saharan Africa where the “laws and customs against homosexuality” are fully in place.
The history here says the African epidemic was spread primarily heterosexually. There is also the confounder of differing levels of medical facilities in different countries.
That aside (which is not to say that Africa does not matter), in the US and Europe the impact was primarily in the gay community.
I recognise that this is a contentious area though, and would rather avoid a lengthy thread.
The point was just that we should be allowed to weigh expected positives against expected negatives. Yes, there can be invisible items in the “cons” column (and in the “pros” column too), and it may make sense to require extra weight in the “pros” column to account for this, but we shouldn’t be required to act as if the invisible “cons” definitely outweigh all “pros”.
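That weighing can be sketched as a toy expected-value calculation, where unknown unknowns raise the bar for acting rather than vetoing action outright (every number here is a made-up assumption for illustration):

```python
# Toy illustration: weigh known expected pros against known expected cons,
# with a hedge for invisible items in the "cons" column.
# All probabilities and utilities below are made up.

def expected_value(outcomes):
    """Sum of probability * utility over the known outcomes."""
    return sum(p * u for p, u in outcomes)

# Known outcomes of acting, as (probability, utility) pairs:
pros = [(0.9, 10.0)]    # likely benefit
cons = [(0.05, -20.0)]  # known but unlikely harm

known_ev = expected_value(pros) + expected_value(cons)  # 9.0 - 1.0 = 8.0

# Hedging for unknown unknowns: demand that the known expected value
# clear a positive margin, instead of merely exceeding zero.
unknown_unknowns_margin = 3.0
act = known_ev > unknown_unknowns_margin

print(known_ev, act)
```

Treating the invisible cons as certain and decisive would correspond to an infinite margin, under which no action ever clears the bar; a finite margin keeps the precaution while still letting strongly positive cases through.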
This suggests we actually need laws and customs against promiscuity. Or just better public education re STIs.