I don’t know that “the AI doomer argument” is a coherent thing. At least I haven’t seen an attempt to gather or summarize it in an authoritative way. In fact, it’s not really an argument (as far as I’ve seen), it’s somewhere between a vibe and a prediction.
For me, when I’m in a doomer mood, it’s easy to give a high probability to the idea that humanity will be extinct fairly soon (it may take centuries to fully die out, but the path will be fully irreversible in 10-50 years, if it isn’t already). Note that this was a common belief long before AI was a thing: nuclear war/winter, ecological collapse, pandemics, and the like are pretty scary, and humans are fragile.
My optimistic “argument” is really not better-formed. Humans are clever, and when they can no longer ignore a problem, they solve it. We might lose 90%+ of the current global population, and a whole lot of supply-chain and tech capability, but that’s really only a few doublings lost, maybe a millennium to recover, and maybe we’ll be smarter/luckier in the next cycle.
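As a rough back-of-the-envelope sketch (the ~250-year pre-industrial doubling time here is my own assumption, not a sourced figure): losing 90% of the population means recovery requires a factor-of-10 regrowth, which is only a handful of doublings:

$$\text{doublings needed} = \log_2\frac{N_{\text{before}}}{N_{\text{after}}} = \log_2 10 \approx 3.3, \qquad \text{recovery time} \approx 3.3 \times 250\,\text{yr} \approx 830\,\text{yr},$$

which is roughly consistent with the “maybe a millennium” figure above.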
From your perspective, what do you think the argument is, in terms of thesis and support?
There are a lot of detailed arguments for doom by misaligned AGI.
Coming to grips with them, and with the counterarguments embedded in actual proposals for aligning AGI and managing the political and economic fallout, is a herculean task. I feel it’s taken me about two years of spending the majority of my work time on it to even get my head mostly around the relevant arguments. Having done that, my p(doom) is still roughly 50%, with wide uncertainty for unknown unknowns still to be revealed or identified.
So if someone isn’t going to do that, I think the above summary is pretty accurate. Aligning AGI and managing the resulting shifts in the world are not easy, but they’re not impossible. Sometimes humans do amazing things. Sometimes they do amazingly stupid things. So again, roughly 50% from this much rougher method.