So, where did you get those numbers from? 10^-6? 10^-2? Why not, say, 1-10^-6 instead? Gut feeling again, and that's inevitable. You either name a number, or make decisions without the help of even this feeble model, choosing directly. And given what the people on this site know, their beliefs differ from yours.
I have one of the lowest estimates: 30% for not killing off 90% of the population by 2100. Most of it comes from Unfriendly AI. I estimate a 50% chance of AGI foom by 2070, or 70% by 2100 (I expect relatively low-hanging fruit, so the probability levels off as time goes on), assuming nothing else goes wrong with the world, and assign 3/4 of that to Unfriendly AI, given my understanding of how hard it is to hit the right answer among the many efficient world-eating possibilities, and given human irrationality, which makes it likely that whoever invents the first mind won't think through the consequences. That's already about 55% total extinction risk; add some more for biological (at least, human-inhabiting) weapons, such as an engineered pandemic (not total extinction, but easily 90%), and for whatever new goodies the future has to offer. It'll only get worse until it gets better. On second thought, I should lower my confidence in these explicit models; they feel too much like planning. Make that 50%.
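For what it's worth, a minimal sketch of how those numbers compose. The 0.35 for non-AI risks is an illustrative placeholder (above I only said "add some more"), and the final move to 50% is a gut adjustment, not something the arithmetic produces:

```python
# Sketch of the estimate above; numbers are the ones stated, except where noted.

p_foom_by_2100    = 0.70   # AGI foom by 2100, assuming nothing else derails the world
p_ufai_given_foom = 0.75   # share of foom scenarios that go Unfriendly

p_ufai = p_foom_by_2100 * p_ufai_given_foom       # 0.525 -- the "already ~55%" above

p_other = 0.35   # illustrative placeholder: engineered pandemics and future surprises

# Treating the two sources of risk as roughly independent:
p_catastrophe = 1 - (1 - p_ufai) * (1 - p_other)  # ~0.69, i.e. ~30% chance of avoiding it
print(f"UFAI risk: {p_ufai:.2f}, combined risk: {p_catastrophe:.2f}")
```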