First, the most important ones: those we don’t know about yet, but would have a better chance of fighting using increased wisdom (from living hundreds of years or more), practically unlimited skilled labor, guaranteed reproducible decisions, or any combination of those, plus all the fruits of the scientific revolutions that would follow.
Second, the usual boring ones: runaway global warming, pathogens with kuru-like properties, a collapse of governance that shifts threats from endurable to existential, etc.
Third, the long-term necessity of conquering the stars, which sounds much easier if we send robots first and then follow as uploaded minds transmitted by photons.
Finally, and only if such concepts are actually valid (I’m not sure they are), reproducible AGI would help us become AGI+, which might be necessary to align ourselves as AGI++, and so on.
« What makes you estimate that AI may reduce x-risk? »
I don’t get the logic here. Once you agree there’s at least one x-risk AGI may reduce, isn’t that enough to answer both the OP and your last question? Maybe you meant: « What makes you estimate that AI would reduce x-risk more than EY estimates it would increase it? ». In that case, I don’t, but that’s just a consequence of EY’s estimate being maximally high.