What does Nassim Taleb think about existential risks and existential-risk research? He sounds like the kind of person who would be interested in such things.
My guess is that he is worried about existential risks, but specifically about the Black Swan type: risks that can't be predicted or theorized about far in advance.
He liked Bostrom’s new institute dedicated to existential risks. He doesn’t think AI is a ruin-style risk: in his view it requires “risk vigilance” but isn’t yet a ruin-type risk, and he has said he would be willing to reconsider later.
He also has his own risk program, called the “Extreme Risk Initiative”.
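The “ruin-type” distinction Taleb draws is quantitative: a loss you can recover from is categorically different from irreversible ruin, because repeated exposure to even a tiny ruin probability compounds toward near-certain ruin over time. A minimal sketch of that compounding (my own illustration, not Taleb's code; the 0.1% per-period figure is arbitrary):

```python
# Illustrative sketch: why "ruin-type" risks differ from ordinary risks.
# A small per-period probability of irreversible ruin, faced repeatedly,
# drives the probability of long-run survival toward zero.

def survival_probability(p_ruin_per_period: float, periods: int) -> float:
    """Chance of never being ruined across `periods` independent exposures."""
    return (1.0 - p_ruin_per_period) ** periods

# Even a 0.1% chance of ruin per period compounds relentlessly:
for periods in (10, 100, 1000, 10000):
    print(periods, survival_probability(0.001, periods))
```

Recoverable losses average out; ruin does not, since there is no "next round" after it. That asymmetry is why Taleb reserves his strongest precautionary stance for risks he classifies as ruin-type.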