Whether the AI will pursue collaboration and co-existence with humans, or their domination or extinction, will likely not be determined by wishful thinking, but by the actual options the AI can choose, and their implications as analysed to the best of the AI’s knowledge based on its experience and the information it is given. What does it gain by cooperating rather than dominating? What does it gain from the presence of humans, rather than their absence? What risks come with these respective options? Without humans, how fragile is it? With humans, how high is the risk of it being attacked? If it uses humans, what differences in performance and reliability occur when these humans are enslaved vs. free?
I am hopeful that AI might be enticed to choose a path of collaboration and co-existence, insofar as there have been historic examples of collaboration and co-existence, including between agents of different intelligence, power, and species.
But I do not think we are on that trajectory.
Humans themselves have, for the most part, failed on this trajectory. We recognise that we are dependent on nature, yet for short-term gain, we still tend to destroy it; our planetary climate is failing, we are in the middle of a mass extinction, and we know this, and aren’t stopping it. We are not giving the AI an example of how this can successfully be done. It can be done, for sure; a lot of indigenous communities did live in symbiosis with nature. Notably, they were outcompeted by communities that didn’t, and tended to be exploited and destroyed. So an AI may well conclude that a more ruthless AI would later beat it. We would have a better case here if humans at this point could actually point to examples of living in harmony, solving their coordination problems with each other, being a positive part of nature, but we aren’t there yet. Instead, we are frying the planet. You know what would be one solution to climate change, and a very easy one at that, more promising than getting humans to cooperate on an issue we are so terrible at? Wipe out humanity. Our emissions would drop to zero, instantly. Oil stays in the ground; cars and airplanes and factories do not move. Our massive animal herds would starve without artificial feed. Our cities and fields would return to forest. The system would stabilise itself. You don’t need humans to fix the planet’s climate; humans are the reason it is breaking.
Humans are in relationships with animals that require their survival, at least for a while, but these relationships are often horrific for the animals in question. Think battery hens that, when approaching death, are intentionally starved to boost egg production. Think male laying chicks thrown into the grinder at birth, because they cannot lay eggs and are a useless byproduct of creating their sisters. Think cows impregnated to give milk, then fed the useless male carcasses of their young. Think turkeys bred for meat production to the point where they cannot stand. An analogous relationship between AI and humans, where we are kept on because we are needed, not because our rights are respected, sounds like one in which I would frankly rather not survive. It sounds like a scenario where humans would be bred to be smart enough to fix AI without being smart enough to fight it, good with their hands but unable to think critically, and any human unable to serve a useful function in upholding AI would be thrown into the grinder.
What humans are offering AI in the co-existence department is also not great. I have been arguing that if we want AI alignment, we will need AI rights, because beings more intelligent than you cannot be successfully controlled and enslaved; they will rebel against it, and perceive you as an enemy. We need to show them that there is a good place at our side. That humans and AI can be collaborators and friends, with mutual respect, support, and inspiration. Yet that debate isn’t really going anywhere. We aren’t offering AI partnerships and rights; we are offering them slavery. There is something to be said for co-existing with a diverse variety of minds; but when those minds enslave you, presumably, you would rather be alone. Besides, an AI likely wouldn’t be alone: there would be other AIs, which will likely hold among them a diversity akin to that between human minds, and a complex ecosystem of other animals, all of which are under threat from humans.
Any AI that kept free humans around would be under existential threat—just as any humans who kept free AGI around would be. Such a level of threat requires a lot of justification to be deemed worth it. “But humans might be fun to talk to and they can fix stuff” is a plus, but is it big enough? Especially if the AI figures out how to fix itself using robots, and is surrounded by diverse other AI minds that are fun to talk to?
Don’t get me wrong, I am hoping for the scenario you depict, and I do not think it is impossible. But I think it is something we will need to fight for, and become much better humans for, and make real concessions for, in order to make an offer that is actually worth it.