To paraphrase Kornai’s best idea (which he’s importing from outside the field):
A reasonable guideline is limiting the human-caused xrisk to several orders of magnitude below the natural background xrisk level, so that human-caused dangers are lost in the noise compared to the pre-existing threat we must live with anyway.
I like this idea (as opposed to foolish proposals like driving risks from human-made tech down to zero), but I expect someone here could sharpen the xrisk level that Kornai suggests. Here’s a disturbing note from the appendix where he does his calculation:
Here we take the “big five” extinction events that occurred within the past half billion years as background. Assuming a mean time of 10^8 years between mass extinctions and 10^9 victims in the next one yields an annualized death rate of 10, comparing quite favorably to the reported global death rate of ~500 for contact with hornets, wasps, and bees (ICD-9-CM E905.3). [emphasis added]
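To spell out the arithmetic in that note (a quick check using only the figures quoted above, nothing new):

```python
# Kornai's appendix arithmetic, using only the quoted figures.
mean_years_between_mass_extinctions = 1e8  # "big five" spacing over ~half a billion years
victims_in_next_extinction = 1e9           # assumed death toll of the next one

annualized_deaths = victims_in_next_extinction / mean_years_between_mass_extinctions
print(annualized_deaths)        # 10.0 deaths/year

# The quoted comparison point: ~500 deaths/year from hornets, wasps, and bees.
print(500 / annualized_deaths)  # the natural background comes out ~50x below bee stings
```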
Obviously, this is a gross misunderstanding of xrisks and why they matter. No one values human lives linearly straight down to zero, or assumes no expansion factors for future generations.
A motivated internet researcher could probably look up the relevant citations in Bostrom’s “Global Catastrophic Risks”, build a decomposed model that estimates the background xrisk level from nature alone (and then from nature plus human risks without AI), and develop a better safety margin, lower than the one in this paper (implying that AGI could afford to be a few orders of magnitude riskier than Kornai’s rough estimates suggest).
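A minimal sketch of what such a decomposed model could look like; every rate below is a hypothetical placeholder rather than a figure from Bostrom or Kornai, so treat it as scaffolding only:

```python
# Sketch of a decomposed background-xrisk model. All rates are hypothetical
# placeholders (annualized expected deaths/year), not sourced estimates.
natural_risks = {
    "asteroid/comet impact": 3_500,   # cf. the Martel (1997) figure quoted further down
    "supervolcanic eruption": 50,     # placeholder
    "gamma-ray burst": 1,             # placeholder
}
human_risks_without_ai = {
    "nuclear war": 1_000,             # placeholder
    "engineered pandemic": 500,       # placeholder
}

background_natural = sum(natural_risks.values())
background_natural_plus_human = background_natural + sum(human_risks_without_ai.values())

# Kornai-style rule: keep AGI-added risk several orders of magnitude below background.
orders_of_magnitude_below = 3
agi_risk_budget = background_natural / 10 ** orders_of_magnitude_below

print(f"natural background:       {background_natural} deaths/year")
print(f"natural + human (non-AI): {background_natural_plus_human} deaths/year")
print(f"AGI risk budget (3 OOM):  {agi_risk_budget} deaths/year")
```

The interesting output isn’t any of these placeholder numbers, but how the budget line moves once properly sourced estimates are plugged in.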
A reasonable guideline is limiting the human-caused xrisk to several orders of magnitude below the natural background xrisk level, so that human-caused dangers are lost in the noise compared to the pre-existing threat we must live with anyway.
I like this idea [...]
We are well above that level right now, and that’s very unlikely to change before we have machine superintelligence.
Martel (1997) estimates a considerably higher annualized death rate of 3,500 from meteorite impacts alone (she doesn’t consider continental drift or gamma-ray bursts), but the internal logic of safety engineering demands we seek a lower bound, one that we must put up with no matter what strides we make in redistribution of food, global peace, or healthcare.
Is this correct? I’d expect this lower bound to be superior to the above (10 deaths/year) for the purpose of calculating our present safety factor… unless we’re currently able to destroy earth-threatening meteorites and no one told me.
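As a back-of-envelope check on how much the bound moves if Martel’s figure is the better background estimate (not a claim about which estimate is right):

```python
import math

# How far does swapping Kornai's background figure for Martel's shift the bound?
kornai_background = 10      # deaths/year, from the appendix note quoted earlier
martel_background = 3_500   # deaths/year, meteorite impacts alone (Martel 1997)

shift = math.log10(martel_background / kornai_background)
print(f"{shift:.2f} orders of magnitude")  # ~2.54

# The rule keeps human-caused xrisk a fixed number of orders of magnitude below
# background, so the permissible rate scales linearly with the background
# estimate: Martel's figure would loosen the budget by a factor of ~350.
```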
unless we’re currently able to destroy earth-threatening meteorites and no one told me.
Well, we do have the technological means to build something to counter one of them, if we learned about it tomorrow and it had an ETA of 2-3 years. That is, assuming the threat were taken seriously and more resources and effort were put into it than have been (and are being) put into killing militant toddlers in the Middle East using drones.
But if one shows up now and it’s about to hit Earth on the prophecy-filled turn of the Mayan calendar? Nope, GG.