FWIW, if you look at Rob Bensinger’s survey of people who work on long-term AI risk, the average P(AI doom) is closer to Ord’s estimate than to MIRI’s. So I’d say that Ord isn’t that different from most of the people he talks to.
You might enjoy these posts where people argue for particular values of P(AI doom), all of which are much lower than Eliezer’s:
- Paul Christiano interviewed by AI Impacts
- Rohin Shah and me on the FLI podcast (Ctrl-F “probability of AI-induced existential risk”)