With a grain of salt: for 2 million years, various species of Homo dotted Africa and eventually the world. Then humans became generally intelligent and promptly wiped all of them out. Even hiding on an island in the middle of an ocean and biologically specializing for life on small islands wasn't enough to survive.
Yeah, humans are very misaligned with less powerful animal species and have driven a lot of them to extinction, so I don't agree with the premise at all.
Humans have fairly significant alignment issues and have developed a number of fields of research to deal with them: game theory, psychology, moral philosophy, law, economics, some religions, defense analysis… and probably a few other key ones that don't come to mind.
Humans were made fairly well aligned with each other by our species' long history of cooperation, at least compared to many other species. Being able to coordinate in groups well enough to reliably establish shared language is already very impressive, and the fact that we still have misalignments between each other isn't shocking. The concern is that an AI could be as unaligned as an arbitrary animal, yet more alien than the most alien species, depending on its architecture.
Firstly: humans are by far the strongest natural general intelligences around, and most humans are aligned with humans. We wouldn't have much trouble building an AI that was aligned with itself.
Secondly: almost no humans have the ability to cause major damage to the world, even if they set out to. Nick Bostrom's Vulnerable World Hypothesis explores the idea that, as technology improves, this may not always remain true.
Why not? Or is alignment an issue with natural general intelligences?