The argument chain you presented (Deep Learning → Consciousness → AI Armageddon) is a strawman. If you sincerely think that’s our position, you haven’t read enough. Read more, and you’ll be better received. If you don’t think that, stop being unfair about what we said, and you’ll be better received.
Last I checked, most of us were agnostic on the AI Consciousness question. If you think that's a key point in our Doom arguments, you haven't understood us; that step isn't necessarily required, and it's not a link in the chain of argument. Maybe AI can be dangerous, even existentially so, without "having qualia". But neither are we confident that AI necessarily won't be conscious. We're not sure how consciousness works in humans, but it seems to be an emergent property of brains, so why not artificial brains as well? We don't understand how the inscrutable matrices work either, so it seems like a possibility. Maybe gradient descent and evolution stumbled upon similar machinery for similar reasons. AI consciousness is mostly beside the point. Where it does come up is usually not in the AI Doom arguments, but in questions about what we ethically owe AIs, as moral patients.
Deep Learning is also not required for AI Doom. Doom is a disjunctive claim; there are multiple paths for getting there. The likely-looking path at this point would go through the frontier LLM paradigm, but that isn’t required for Doom. (However, it probably is required for most short timelines.)