It’s a [precautionary principle](https://en.m.wikipedia.org/wiki/Precautionary_principle#Criticisms), so the main flaw is the usual one: it fails to balance risks against benefits.
To take Wikipedia’s example, forbidding nuclear power plants based on concerns about low-probability, high-impact risks means continuing to rely on power plants that burn fossil fuels. In the same vein, future AGIs would most likely help with many existential risks (e.g. detecting rogue asteroids) and improve the economy enough that we no longer let a few million human children die from starvation each year.
Okay, how much risk is worth the benefit? Would you advocate for a comparison of expected gains and expected losses?
You mean: how to balance a low-probability x-risk against a high probability of saving a large number (though small fraction) of human children? Good point, that’s hard, but we don’t actually need this apples-to-oranges comparison: the point is that AGI may well decrease overall x-risk.
(I mentioned starving children because some people count large-scale impacts as x-risks, but on second thought that was probably a mistake.)
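For concreteness, here is a minimal sketch of the expected-gains-versus-expected-losses comparison asked about above. Every number in it is a made-up placeholder, not an estimate anyone in this thread has given; it mostly illustrates why the comparison is called apples-to-oranges.

```python
# Purely illustrative expected-value comparison.
# All figures below are hypothetical placeholders, not endorsed estimates.

p_doom = 0.01               # hypothetical probability that AGI causes an existential catastrophe
lives_at_stake = 8e9        # roughly today's world population

p_benefit = 0.9             # hypothetical probability that AGI ends childhood starvation
lives_saved_per_year = 3e6  # "a few million human children" per year, as in the comment above
years = 50                  # hypothetical time horizon

expected_loss = p_doom * lives_at_stake
expected_gain = p_benefit * lives_saved_per_year * years

print(f"expected loss: {expected_loss:,.0f} lives")
print(f"expected gain: {expected_gain:,.0f} lives")

# Even then the comparison stays apples-to-oranges: an existential
# catastrophe also forecloses all future generations, which a flat
# per-life count does not capture.
```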
Which x-risks do you think AI will reduce? I have heard arguments that it would improve our ability to respond to potential asteroid impacts. However, this reduction in x-risk seems very small compared to the x-risk that unaligned AGI poses. What makes you estimate that AI may reduce x-risk?
First, the most important ones: those we don’t know about yet, but would have a better chance of fighting using increased wisdom (from living hundreds of years or more), practically unlimited skilled labor, guaranteed reproducible decisions, or any combination of those plus all the fruits of the scientific revolutions that will follow.
Second, the usual boring ones: runaway global warming, pathogens with kuru-like properties, collapse of governance shifting threats from endurable to existential, etc.
Third, the long-term necessity of conquering the stars, which sounds much easier using robots first, then photons carrying our uploaded minds.
Finally, and only if such concepts are actually valid (I’m not sure they are), reproducible AGI would help us become AGI+, which might be necessary to align ourselves as AGI++, and so on.
I don’t get the logic here. Once you agree there’s at least one x-risk AGI may reduce, isn’t that enough to answer both the OP and your last question? Maybe you meant: « What makes you estimate that AI may reduce x-risk by more than EY’s estimate of how much it would increase it? » In that case I don’t, but that’s just a property of EY’s estimate being maximally high.