Some of those probabilities are wildly overconfident. Less than 10^-20 for badly done superintelligence and badly done somewhat-less-superintelligence wiping out humanity? Ordinary risks are “billions upon billions” of times more likely than existential risks? Maybe that one could work if every tornado that killed ten people were counted under “ordinary risks,” but it’s still overconfident. If he thinks things on the scale of “small nuclear war or bioterrorism” are billions of times more likely than existential risks, he’s way overconfident.
That was:
P(involuntary human extinction without replacement | badly done AGI type (a)) < 10^-20
“AGI type (a)” was previously defined to be:
(a) is probably inevitable, or at any rate a high probability, and there will likely be deaths or other catastrophes, but like other tech failures (e.g. the Titanic, three mile island, hijacking jumbo jets and using them as guided missiles) we will prevail, and very quickly [...]
So, what we may be seeing here is fancy footwork based on definitions.
If “a” = “humans win”, then P(humans lose | a) may indeed be very small.
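To make the definitional point concrete, here is a minimal sketch with made-up numbers (none of these are anyone’s actual estimates): even if P(extinction | a) really is below 10^-20, the unconditional risk can still be substantial, because the law of total probability routes almost all of it through the “not a” branch.

```python
# Minimal sketch with made-up numbers (hypothetical, not anyone's actual estimates).
# "a" is defined so that it nearly entails "humans prevail", so P(extinction | a)
# is tiny by construction -- but the unconditional risk is dominated by "not a".

p_a = 0.9                # hypothetical: P(outcome falls under definition (a))
p_ext_given_a = 1e-20    # the quoted figure: P(extinction | a)
p_ext_given_not_a = 0.1  # hypothetical: P(extinction | not a)

# Law of total probability:
p_ext = p_ext_given_a * p_a + p_ext_given_not_a * (1 - p_a)
print(p_ext)  # ~0.01 -- almost entirely from the "not a" branch
```

The tiny conditional figure tells you almost nothing about the overall risk unless P(not a) is also argued to be tiny, and that is exactly the work being done by defining (a) as “we will prevail.”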