The former. Aside from making fun of people who say things like “ah but DL is just X” or “AI can never really Y” for their blatant question-begging and goalpost-moving, the serious point there is that unless any of these ‘just’s or ‘really’s can pragmatically cash out as permanently-missing, fatal, unworkable-around capability gaps (and they’d better start cashing out soon!), they are not just philosophically dubious but completely irrelevant to AI safety questions. If qualia or consciousness are just epiphenomena and you can have human-level or superhuman-level capabilities like folding proteins or operating robot drone fleets without them, then we pragmatically do not care what qualia or consciousness are or what entities do or do not have them, and should drop those words and concepts from AI safety discussions entirely.
I agree it’s irrelevant, but I’ve never actually seen these terms used in the context of AI safety. They come up more in discussions of how we should treat powerful AIs. Are we supposed to give them rights? It’s a difficult question, one which requires us to rethink much of our moral code and which may shift it toward the utilitarian side. While it’s definitely not as important as AI safety, I can still see it causing upheavals in the future.