Eliezer very specifically talks about AI systems that “go foom,” after which they are so much better at R&D than the rest of the world that they can very rapidly build molecular nanotechnology, and then build more stuff than the rest of the world put together.
This isn’t about offense vs. defense; that’s just >$300 trillion of output, conventionally measured. We’re not talking about random terrorists who find a way to cause harm, we are talking about the entire process of (what we used to call) economic growth now occurring inside a lab in fast motion.
I think he lays this all out pretty explicitly. And for what it’s worth I think that’s the correct implication of the other parts of Eliezer’s view. That is what would happen if you had a broadly human-level AI with nothing of the sort anywhere else. (Though I also agree that maybe there’d be a war or decisive first strike first, it’s a crazy world we’re talking about.)
And I think in many ways that’s quite close to what will happen. It just seems most likely to take years instead of months, to use huge amounts of compute (and therefore share proceeds with compute providers and a bunch of the rest of the economy), to result in “AI improvements” that look much more similar to conventional human R&D, and so on.