‘Avoidable’ in the above toy numbers means purely that:
1 - avoidable doom directly caused by AI is in fact avoided if we destroy all (relevantly capable?) AI when testing for doom.
2 - avoidable doom directly caused by humans or nature is in fact avoided by AI technology we possess when testing for doom.
Still not sure I follow. “testing for doom” is done by experiencing the doom or non-doom-yet future at some point in time, right? And we can’t test under conditions that don’t actually obtain. Or do you have some other test that works on counterfactual (or future-unknown-maybe-factual) worlds?
Yeah, the test is just whether doom is experienced (and no, I have no way of testing counterfactual worlds, useful as that would be).