And as for the others? Or are you saying the AI trying to maximize well-being would attempt, and succeed at, effectively wiping out everyone and then conditioning future generations to hold the desired, easily maximized values? If so, that behavior depends on the AI being very confident in its ability to pull it off, because otherwise the chance of failure and the cost of war would massively drop the expected value of human well-being. I also think you should make clear what you expect these values to end up being, i.e. the values the AI would try to shift humans toward so they are easier to maximize.
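To make the expected-value point concrete (a rough sketch with made-up numbers and notation, not anything claimed in the original): if the AI assigns probability $p$ to successfully carrying out such a takeover, the comparison is roughly

$$\mathbb{E}[\text{well-being}] = p \cdot V_{\text{success}} + (1 - p) \cdot V_{\text{failed war}},$$

and since $V_{\text{failed war}}$ is plausibly far below the status-quo level of well-being, even a moderate chance of failure can push this expected value below that of simply not attempting it.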