My tentative view is that one's preference between AI-driven and human-driven value drift comes down to an entity's capacity to experience joy and suffering.
In the context of AI safety, many humans who live mostly positive lives could be killed or made to suffer at the hands of a future superintelligent AI, while the AI's own capacity for pain and suffering is largely unknown. I'm worried that an AI will optimize for something that produces no subjective increase in wellbeing in the universe, at the cost of human happiness.