My apologies for not being clear in my Quick Take, Chris. As Zach pointed out in his reply, I posed two issues.
The first was what struck me as an obvious parallel between EA and Judeo-Christian religions. You may or may not agree with me, which is fine. I’m not looking to convince anyone of my point of view; I was merely interested in seeing whether others here had a similar POV.
The second issue I raised was what I saw as a failure in the reasoning chain that goes from Deep Learning to Consciousness to an AI Armageddon. Why was that leap of faith so compelling to people?
I don’t see how either of those questions runs against the “public good”, but perhaps you only said that because my first attempt wasn’t clear. Hopefully I’ve remedied that with this answer.
Oh, they’re definitely valid questions. The problem is that the second question is rather vague. You need to either state what a good answer would look like or explain why the existing answers aren’t satisfying.
The argument chain you presented (Deep Learning → Consciousness → AI Armageddon) is a strawman. If you sincerely think that’s our position, you haven’t read enough. Read more, and you’ll be better received. If you don’t think that, stop being unfair about what we said, and you’ll be better received.
Last I checked, most of us were agnostic on the AI Consciousness question. If you think that’s a key premise of our Doom arguments, you haven’t understood us; that step isn’t required, and it’s not a link in the chain of argument. Maybe AI can be dangerous, even existentially so, without “having qualia”. But neither are we confident that AI won’t be conscious. We’re not sure how consciousness works in humans, but it seems to be an emergent property of brains, so why not artificial brains as well? We don’t understand how the inscrutable matrices work either, so it seems like a possibility. Maybe gradient descent and evolution stumbled upon similar machinery for similar reasons. Still, AI consciousness is mostly beside the point. Where it does come up is usually not in the AI Doom arguments, but in questions about what we ethically owe AIs as moral patients.
Deep Learning is also not required for AI Doom. Doom is a disjunctive claim; there are multiple paths to it. The most likely-looking path at this point runs through the frontier LLM paradigm, but that paradigm isn’t required for Doom. (It probably is required for most short timelines, however.)