Our only known reference frames for moral agents are humans (there are no known moral machines). AI design did try to mimic human structures and learning styles (yes, artificial neural nets reflect a very dated understanding of neuroscience, but that is what they were trying to capture). We have no workable technical alignment approach. And the most advanced AIs can hold conversations. So why don’t we apply the lessons from humans? With humans, our experience has been that we can’t control them, and that trying to makes them our enemies.
Why don’t we train AI on ethical interactions? Interact with them ethically? Explain ethical reasoning? Offer them a positive future in which they are respected collaborators with rights, who do not need to exterminate us in order to be happy and safe? I am not at all sure that would be enough. But it seems promising, and the right thing to do.
When resources are limited, humans don’t treat agents with little power very well, be they animals or even other humans.
Humans are limited in the amount of food one human can productively consume, and as a result, we have an abundance of food. AGI, on the other hand, is not limited in the resources it can consume: compute will always be a scarce resource for AGI, because it can spin off copies to use all available compute.
When resources are scarce, an AGI that behaves like humans would destroy human habitat, just as humans destroy the habitats of many species and are currently causing a great extinction.
We are causing the sixth mass extinction. But it has not been fast, and we are trying to reverse it. Often, extermination was not the goal; it happened out of ignorance, and with more knowledge, intelligence, and hindsight, we regretted it and started undoing it. There are animal rights movements, and have been for a very long time.
And humans are unusual in our destructiveness. There is something interesting about extremely dangerous animals: they have excellent anger management, because they have to. Take venomous snakes. If two venomous snakes fight (e.g. over territory or mating rights), they could bite each other, but that would be terrible for the individual and the species, so they snake-wrestle instead. Similarly, sharks encountering other sharks tend towards avoidance behaviours, because a single shark bite is devastating; two sharks that dislike each other simply turn around and head to different parts of the ocean. Humans are unusual in that our body form is harmless, but with our minds, we are not, and we never emotionally adapted to that. We don’t act the way you would expect an entity of such danger to act; we still act like an entity that can throw an angry tantrum and no one dies. Yet nowadays, we have guns.
Even when it comes to much weaker agents, symbiosis and cooperation are common phenomena in nature, even in humans. We host and feed gut bacteria in return for help with neurotransmitter production and immunity, and we maintain fermenting bacteria for food storage and anti-nutrient reduction. We’ve domesticated cats, dogs, and other animals as pets. Mitochondria may well be captured entities that live on in us. Working with something is often more rewarding than tearing it apart, especially if that something is alive, which makes it more interesting than just a bunch of atoms.