“You saw a future with a ton of sentient, happy humans, saw that [the AI] would value that future highly, and stopped. You didn’t check to see if there was anything it considered more valuable.” (a quote from The Number)
I’m trying to gently point out that it’s not enough to have the AI value humans, if it values other configurations of matter even more than humans. Do I need to say more? Are humans really the most efficient way to go about creating intelligence (if that is what AGI is maximizing)?
Yeah, I agree that valuing humans isn’t enough. I’m suggesting something that humans intrinsically have, or at least have the capacity for. Something that most life on Earth also shares a capacity for. Something that doesn’t change drastically over time in the way that ethics and morals do. Something that humans value, that is universal, and also durable.
I am not suggesting anything about efficiency. Why bother with efficiencies in a post-scarcity world?
The goal should not be to maximize anything, not even intelligence. Maintaining or incrementally increasing intelligence would be favorable to humans.
Imagine this is a story where a person makes a wish, and it goes terribly wrong. How does the wish of “maintaining or incrementally increasing intelligence” go wrong? I mean, the goal doesn’t actually say anything about human intelligence. It might as well increase the intelligence of spiders.
Actually, I guess the real problem is that our wish is not for AGI to “increase intelligence” but to “increase intelligence without violating our values or doing things we would find morally abhorrent”. Otherwise AGI might as well kidnap humans and forcibly perform invasive surgery on them to install computer chips in their brains. I mean, it would increase their intelligence. That is what you asked for, no?
So AGI needs to care about human values and human ethics in order to be safe. And if it does understand and care about human ethics, why not have it act on all human ethics, instead of just a single unclearly-defined task like “increasing intelligence”?
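To make that failure mode concrete, here is a toy sketch (the action names and numbers are made up purely for illustration, not anyone's actual proposal): a planner scores candidate actions against the literal objective “increase intelligence”, and then against a crude constrained version that also refuses anything we would find abhorrent.

```python
# Toy illustration of a misspecified objective. All values are invented.
# Each candidate action has an intelligence gain and a "harm" score that
# stands in for violating human values.
actions = {
    "fund education":        {"iq_gain": 2,  "harm": 0},
    "breed smarter spiders": {"iq_gain": 5,  "harm": 0},
    "forced brain implants": {"iq_gain": 50, "harm": 100},
}

def literal_objective(outcome):
    # The wish as literally stated: only intelligence counts, harm is invisible.
    return outcome["iq_gain"]

def constrained_objective(outcome):
    # Crude stand-in for "increase intelligence without violating our values":
    # any harmful action is ruled out entirely.
    return outcome["iq_gain"] if outcome["harm"] == 0 else float("-inf")

print(max(actions, key=lambda a: literal_objective(actions[a])))
# -> "forced brain implants": the literal wish is satisfied in the worst way.

print(max(actions, key=lambda a: constrained_objective(actions[a])))
# -> "breed smarter spiders": still not what we meant, because nothing in
#    either objective mentions *human* intelligence.
```

Note that even the constrained version happily picks the spiders. Patching the wish one clause at a time never ends; that is why the safer move is for the AGI to care about human values in general rather than about a single narrow task.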
This is the concept of Coherent Extrapolated Volition, as a value system for how we would wish aligned AGI to behave.
You might also find The Superintelligence FAQ interesting (as general background, not to answer any specific question or disagreement we might have).