Some of those probabilities are wildly overconfident. Less than 10^-20 for badly done superintelligence and badly done somewhat-less-superintelligence wiping out humanity?
That was:
P(involuntary human extinction without replacement | badly done AGI type (a)) < 10^-20
“AGI type (a)” was previously defined to be:
(a) is probably inevitable, or at any rate a high probability, and there will likely be deaths or other catastrophes, but like other tech failures (e.g. the Titanic, three mile island, hijacking jumbo jets and using them as guided missiles) we will prevail, and very quickly [...]
So, what we may be seeing here is fancy footwork based on definitions.
If “a” = “humans win”, then P(humans lose | a) may indeed be very small.
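To spell out the conditioning point: if outcome (a) is defined so that it already entails human survival, then the conditional probability of extinction given (a) is forced toward zero by construction, whatever the evidence says. A minimal sketch, writing E for extinction and A for outcome (a), and assuming the strict reading that A entails no extinction and P(A) > 0:

\[
  A \subseteq \lnot E
  \;\Longrightarrow\;
  P(E \mid A) \;=\; \frac{P(E \cap A)}{P(A)} \;=\; \frac{P(\varnothing)}{P(A)} \;=\; 0 .
\]

On that reading, a figure like 10^-20 mostly reflects the definition of (a); the substantive question is P(A) itself, i.e. how likely the “we will prevail” outcome is in the first place.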