Making AIs wiser seems most important in worlds where humanity stays in control of AI. It’s unclear to me what the sign of this work is if humanity doesn’t stay in control of AI.
A significant fraction of work on AI assumes that humans will somehow be able to control entities which are far smarter than we are, and maintain such control indefinitely. My favorite flippant reply to that is, “And how did that work out for Homo erectus? Surely they must have benefited enormously from all the technology invented by Homo sapiens!” Intelligence is the ultimate force multiplier.
If there’s no mathematical “secret” to alignment, and I strongly suspect there isn’t, then we’re unlikely to remain in control.
So I see four scenarios if there’s no magic trick to stay in control:
1. We’re wise enough to refrain from building anything significantly smarter than us.
2. We’re pets. (Loss of control)
3. We’re dead. (X-risk)
4. We envy the dead. (S-risk)
I do not have a lot of hope for (1) without dramatic changes in public opinion and human society. I’ve phrased (2) provocatively, but the essence is that we would lose control. (Fictional examples are dangerous, but this category would include the Culture, CelestAI or arguably the Matrix.) Pets might be beloved or they might be abused, but they rarely get asked to participate in human decisions. And sometimes pets get spayed or euthanized based on logic they don’t understand. They might even be happier than wild animals, but they’re not in control of their own fate.
Even if we could control AI indefinitely (and I don’t think we can), there is literally no human organization or institution I would trust with that power. Not governments, not committees, and certainly not a democratic vote.
So if we must regrettably build AI, and lose all control over the future, then I do think it matters that the AI has a decent moral and philosophical system. What kind of entity would you trust with vast, unaccountable, inescapable power? If we’re likely to wind up as pets of our own creations, then we should definitely try to create kind, ethical, and what you call “unfussy” pet owners, ones that respect real consent.
Or, to use a human analogy: try to raise the sort of children you’d trust to pick your nursing home. So I do think the philosophical and moral questions matter even if humans lose control.