You make my point right there. World War 2. We went to war in defiance of the Nazis and refused to be assimilated. People in Germany didn’t even like what the Nazis were doing. And finally, the Nazis didn’t care about our outrage or the deaths in the resulting war. An AI trying to maximize well-being will care profoundly about that, by definition.
You seem to think that you are living in a magical, fair universe. Just because nothing really, really bad has happened to you/us yet doesn’t mean it can’t.
I don’t think I live in a fair universe at all. Regardless, acknowledging that we don’t live in a fair universe doesn’t support your claim that an AI would be able to radically change the values of all humans on earth through persuasion alone, without provoking outrage from others.
Humans can radically change the values of humans through weak social pressure alone.
I feel like I’ve already responded to this argument multiple times in other responses I’ve made. If you think there’s something I’ve overlooked in those responses, let me know, but this seems like a restatement of things I’ve already addressed. Likewise, if you disagree with something in one of those responses for a reason that hasn’t been presented yet, let me know.
The fact that other humans fought against it doesn’t change the central point that a very large fraction of humans could have a radically different effective morality. Moreover, if Germany hadn’t gone to war but had instead done the exact same thing to its internal minorities, most of the world likely would not have intervened.
If you don’t like this example so much, one can just look at changing attitudes on many issues. See, for example, Pinker’s book “The Better Angels of Our Nature,” where he documents extreme changes in historical attitudes about the ethics of violence. For example, war is considered much more of a negative now than it was a few centuries ago. Going to war to gain territory is essentially unthinkable today. Similarly, attitudes about animals have changed a lot. In the Middle Ages, forms of entertainment that were considered normal included not just bear baiting and similar practices but such crude behavior as lighting a cat on fire and seeing how long it took to die. Our moral attitudes are very much a product of our culture and how we are raised.
Most of our changes to where we are now seem to be the result of what works better in a complex society, and I therefore have difficulty accepting that a society in the highly advanced state it would be in by the time we had strong AI could be pushed to a non-productive doomsday set of values. So let’s make the argument clearer then: what set of values do you think the AI could push us to through persuasion that would effectively be what we consider a doomsday scenario, while also allowing the AI to more easily satisfy well-being?
I’m not sure why running a complex society needs to be a condition. If we all reverted to being hunter-gatherers, that would still satisfy the essential conditions.
That’s a problem even if it isn’t a doomsday scenario. Changes in animal welfare attitudes would probably make most of us unhappy, but a society where torturing cute animals to death is acceptable wouldn’t have any trouble running as a complex society. Similarly, allowing infanticide would work fine (heck, for that one I can think of some pretty decent arguments for why we should allow it). And while not doomsday scenarios, other scenarios that could suck a lot can also be constructed. For example, you could have a situation where we’re all stuck with 1950s gender roles. That would be really bad but wouldn’t destroy a complex society.
Hunter-gathering is not sustainable for a large-scale complex society. It is not a position we would favor at all, and I’m struggling to see why an AI would try to make us value that setup, or how you think a society with technology advanced enough to build strong AI could be convinced of it.
Views on killing animals are more flexible, as the reason humans object to it seems to come from a level of innate compassion for life itself. So I could see that value being more manipulable as a result. I don’t see what that has to do with a doomsday set of values, though.
1950s gender roles were abandoned because (1) women didn’t like them (in which case maximizing people’s well-being would suggest not having such gender roles) and (2) they were less productive for society, in that suppressing women limits the set of contributions to society.
I don’t think you’ve presented here a set of doomsday values that humans could be manipulated into holding by persuasion alone, or demonstrated why the AI would prefer humans to hold those values in order to maximize well-being.