If I were 90% sure that humanity faces extinction as a result of badly done AI, but my confidence that averting that risk is possible were only 0.1%, while I estimated another existential risk to kill off humanity with 5% probability and had 1% confidence in averting it, shouldn’t I concentrate on the less probable but more solvable risk?
I don’t think so—assuming we are trying to maximise p(save all humans).
It appears that at least one of us is making a math mistake.
It’s not clear whether “confidence in averting” means P(avert disaster) or P(avert disaster|disaster).
Likewise. ETA: what I take as the default meaning of “confidence in averting” in this context is P(avert disaster|disaster otherwise impending).
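For concreteness, here is the comparison under that conditional reading, using the figures from the opening question (a sketch only; it assumes the goal is to maximise the expected gain in p(save all humans)):

\[
\begin{aligned}
\text{AI risk:} \quad & P(\text{disaster}) \times P(\text{avert}\mid\text{disaster}) = 0.90 \times 0.001 = 9 \times 10^{-4} \\
\text{other existential risk:} \quad & P(\text{disaster}) \times P(\text{avert}\mid\text{disaster}) = 0.05 \times 0.01 = 5 \times 10^{-4}
\end{aligned}
\]

On this reading, working on the AI risk offers the larger expected gain, so the answer to the opening question would be “no.” Under the unconditional reading, where “confidence in averting” already means P(avert disaster), the relevant comparison is just 0.1% versus 1%, and the less probable risk looks better, which is why the ambiguity matters.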