I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn’t be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.
“Preferring humans just because they’re humans” or “letting us be selfish” does prevent the risk of prematurely declaring that we’ve figured out what makes a being morally valuable and handing over society’s steering wheel to AI agents that, upon further reflection, aren’t actually morally valuable.
For example, say some AGI researcher believes that intelligence is the property which determines the worth of a being and blindly unleashes a superintelligent AI into the world because they believe that whatever it does with society is definitionally good, simply based on the fact that the AI system is more intelligent than us. But then maybe it turns out that phenomenological consciousness doesn’t necessarily come with intelligence, and they accidentally wiped out all value from this world and replaced it with inanimate automatons that, while intelligent, don’t actually experience the world they’ve created.
Having an ideological allegiance to humanism and a strict rejection of non-humans running the world, even if we think they might deserve to, would prevent this catastrophe. But I think that a posthuman utopia is ultimately something we should strive for. Eventually, we should pass the torch to beings which exemplify the human traits we like (consciousness, love, intelligence, art) and exclude those we don’t (selfishness, suffering, irrationality).
So instead of blind humanism, we should be biologically conservative until we know more about ethics, consciousness, intelligence, et cetera, and can pass the torch with confidence. We can afford millions of years to get this right. Humanism is arbitrary in principle and isn’t the best way to prevent a valueless posthuman society.
But then maybe it turns out that phenomenological consciousness doesn’t necessarily come with intelligence, and they accidentally wiped out all value from this world and replaced it with inanimate automatons that, while intelligent, don’t actually experience the world they’ve created.
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence? If you gave me a choice between two futures, one with humans reasonably thriving for a few more thousand years and then going extinct, and the other with human-made robo-Hitler eating the galaxy, I’d pick the first without hesitation. I’d rather we leave no legacy at all than create literal cosmic cancer, sentient or not.
So instead of blind humanism, we should be biologically conservative until we know more about ethics, consciousness, intelligence, et cetera, and can pass the torch with confidence. We can afford millions of years to get this right. Humanism is arbitrary in principle and isn’t the best way to prevent a valueless posthuman society.
I don’t want “humanism” to be taken too strictly, but I honestly think that anything that is worth passing the torch to wouldn’t require us passing any torch at all and could just coexist with us, unless it was a desperate situation in which it’s simply become impossible for organic beings to survive and the synthetics truly are our only realistic chance at leaving a legacy behind. Otherwise, all that would happen is that we’d live together, and if replacement happens it’ll barely be noticeable as it occurs.
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence?
This was just an arbitrary example to demonstrate the more general idea that it’s possible we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness comes with intelligence, maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.
I don’t want “humanism” to be taken too strictly, but I honestly think that anything that is worth passing the torch to wouldn’t require us passing any torch at all and could just coexist with us…
I agree with this sentiment! Even though I’m open to the possibility of non-humans populating the universe instead of humans, I think it’s a better strategy, both for practical reasons and out of moral uncertainty, to make the transition peacefully and voluntarily.
maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.
You’re talking about this as if it were a matter of science and discovery. I’m not a moral realist, so to me that doesn’t compute. We don’t discover what constitutes moral worth; we decide it. The only discovery involved here may be self-discovery. We could have moral instincts and then introspect to figure out more straightforwardly what exactly they map to. But deciding to follow our moral instincts at all is as arbitrary a call as any other.
I’m open to the possibility of non-humans populating the universe instead of humans
As I said, the only situation in which this would be true for me is if either humans voluntarily just stop having children (e.g. they see the artificial beings as having happier lives and would thus rather raise one of them than an organic child) or conditions get so harsh that it’s impossible for organic beings to keep existing and artificial ones are the only hope (e.g. the Earth is about to get wiped out by the expanding Sun, we don’t have enough energy to send away a working colony ship with a self-sustaining population, but we CAN send small, light von Neumann interstellar probes full of AIs of the sort we deeply care about).