The fact that other humans fought against it doesn’t change the central point that a very large fraction of humans could have a radically different effective morality. Moreover, if Germany hadn’t gone to war but had instead done the exact same thing to its internal minorities, most of the world likely would not have intervened.
If you don’t like this example so much, one can just look at changing attitudes on many issues. See for example Pinker’s book “The Better Angels of Our Nature,” where he documents extreme changes in historical attitudes about the ethics of violence. For example, war is considered much more of a negative now than it was a few centuries ago. Going to war to gain territory is essentially unthinkable today. Similarly, attitudes about animals have changed a lot. In the Middle Ages, forms of entertainment that were considered normal included not just bear baiting and similar activities but such crude behavior as lighting a cat on fire and seeing how long it took to die. Our moral attitudes are very much a product of our culture and how we are raised.
Most of the changes that brought us to where we are now seem to be a result of what works better in a complex society, and I therefore have difficulty accepting that a society in the highly advanced state it would be in by the time we had strong AI could be pushed to a non-productive doomsday set of values. So let’s make the argument more clear then: what set of values do you think the AI could push us to through persuasion that would be effectively what we consider a doomsday scenario, while also allowing the AI to more easily satisfy well-being?
I’m not sure why running a complex society needs to be a condition. If we all revert to being hunter-gatherers, it still satisfies the essential conditions.
That’s a problem even if it isn’t a doomsday scenario. Changes in animal welfare attitudes would probably make most of us unhappy, but a society where torturing cute animals to death is acceptable wouldn’t hurt the running of a complex society. Similarly, allowing infanticide would work fine (heck, for that one I can think of some pretty decent arguments why we should allow it). And while not doomsday scenarios, other scenarios that could suck a lot can also be constructed. For example, you could have a situation where we’re all stuck with 1950s gender roles. That would be really bad but wouldn’t destroy a complex society.
Hunter-gathering is not sustainable for a large-scale complex society. It is not a position we would favor at all, and I’m struggling to see why an AI would try to make us value that setup, or how you think a society with technology strong enough to make strong AI could be convinced to adopt it.
Views on killing animals are more flexible, as the reason humans object to it seems to come from a level of innate compassion for life itself. So I could see that value being more manipulable as a result. I don’t see what that has to do with a doomsday set of values, though.
1950s gender roles were abandoned because (1) women didn’t like them (in which case maximizing people’s well-being would suggest not having such gender roles) and (2) they were less productive for society, in that suppressing women limits the set of contributions to society.
I don’t think you’ve presented here a set of doomsday values that humans could be manipulated into holding by persuasion alone, or demonstrated why they would be a set of values the AI would prefer humans to have for maximization.