The topic question is “Why is Toby Ord’s likelihood of human extinction due to AI so low?”
My response is that it isn't low; as estimates of human-extinction risks go, that likelihood is very high.
You ask for a comparison to MIRI, but link to EY's commentary; EY implies a likelihood of human extinction of, basically, 100%. From a Bayesian updating perspective (that is, in log-odds terms), 10% is closer to 50% than 100% is to 99%; Ord is basically in line with everybody else, and it is EY who is entirely off the charts. So the question of why Ord's number is so low is being raised in the context of a number that is itself genuinely, unusually high; the meaningful question isn't what differentiates Ord from EY, but what distinguishes EY from everybody else. And honestly, it wouldn't surprise me if EY also thought the risk was 10%, believed that a 10% risk justifies lying and saying it is 100%, and that this is the entirety of the discrepancy.
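To spell out what I mean by "closer" there, here is a rough sketch in log-odds terms (my own framing, not a calculation Ord or EY present anywhere):

$$\operatorname{logit}(p) = \ln\frac{p}{1-p}, \qquad \operatorname{logit}(0.5) = 0, \quad \operatorname{logit}(0.1) \approx -2.2, \quad \operatorname{logit}(0.99) \approx 4.6, \quad \operatorname{logit}(1) = +\infty.$$

The gap between 10% and 50% is about 2.2 nats of evidence; the gap between 99% and 100% is unbounded, since no finite amount of evidence updates you all the way to certainty. By that measure, 100% is the genuinely extreme estimate.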
As for potential reasons, there are any number of them: maybe superintelligence of the sort that would reliably be capable of wiping humanity out just isn't possible, and what we could create would only succeed 10% of the time; maybe it isn't possible on the hardware we'll have available in the next century; maybe there's a lot more convergence in what we might think of as morality-space than we currently have reason to expect; maybe there's a threshold of intelligence past which acausal negotiation is standard, and any given superintelligence will rule out particular kinds of actions. Not to mention the many possibilities where the people developing the superintelligence get it horribly wrong, but in a way that doesn't lead to human extinction. We're basically guessing at unknown unknowns.
From my perspective, intelligence is a lot more complicated than most people think, and the current batch of people are doing the intelligence-construction equivalent of trying to build a house by randomly nailing boards to other boards, then thinking they're onto something when they produce a thing that behaves like a roof, in that it keeps the rain off your head if you hold it just right; I think even a .01% risk of human extinction is giving AI development a lot of credit.
(Also, I think people greatly underestimate how difficult it will be, once they get the right framework to enable intelligence, to get that framework to produce anything useful, as opposed to a superstitious idiot / internet troll.)