The only reasons that exist for taking any action at all are desires. Specifically, the desires of the being taking the action. Under any given conditions, a being will always take the action that best fulfills the most/strongest of its desires (given its beliefs). The question isn’t which action is right or wrong according to some universal bedrock of fairness, but rather which desires we want the being to have. We can shape many desires in humans (and presumably all of the desires of an AI), so we want to give it the desires that most help and least hurt humanity.
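To make that decision rule concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the agent’s desires are weights over outcome features, its beliefs map each action to an expected outcome, and it simply picks the action whose expected outcome best satisfies its weighted desires.

```python
# Hypothetical sketch of "act to best fulfill your strongest desires, given your beliefs".
# All desires, actions, and numbers below are invented for illustration.

# Desires: outcome features mapped to strengths (how much the agent wants them).
desires = {"humans_helped": 5.0, "resources_gained": 1.0, "humans_harmed": -10.0}

# Beliefs: for each available action, the outcome the agent expects it to produce.
beliefs = {
    "assist":  {"humans_helped": 3, "resources_gained": 0, "humans_harmed": 0},
    "exploit": {"humans_helped": 0, "resources_gained": 4, "humans_harmed": 2},
    "idle":    {"humans_helped": 0, "resources_gained": 0, "humans_harmed": 0},
}

def desire_fulfillment(outcome: dict) -> float:
    """Total weighted fulfillment of the agent's desires by an expected outcome."""
    return sum(desires.get(feature, 0.0) * amount for feature, amount in outcome.items())

# The agent always takes the action whose expected outcome scores highest.
action = max(beliefs, key=lambda a: desire_fulfillment(beliefs[a]))
print(action)  # "assist", under these particular weights
```

The point of the argument is that the selection rule itself is fixed; the only lever available to us is the contents of `desires`.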
You say this is passing the recursive buck. Unknown says it’s impossible for us to calculate what’s helpful or hurtful. I disagree in both cases. The desires we most want to encourage are those that tend to fulfill the desires of other beings (“helpful” desires). The desires we most want to discourage are those that tend to thwart the desires of other beings (“harmful” desires). It doesn’t have to be some grand confusing thing.
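Under the same toy model, the criterion is just as simple to state: score a candidate desire by how much, on average, acting on it fulfills or thwarts everyone else’s desires. A hedged sketch, with all effects and figures invented for illustration:

```python
# Hypothetical sketch of the "helpful vs. harmful desire" criterion.
# A desire is scored by its tendency to fulfill (+) or thwart (-) other beings' desires.
# Every figure here is invented for illustration.

# Expected effect on each other being's desire fulfillment when an agent
# acts on the given candidate desire.
effects_on_others = {
    "desire_to_cooperate": [+2.0, +1.5, +0.5],
    "desire_to_dominate":  [-3.0, -1.0, +0.5],
}

def desire_score(desire: str) -> float:
    """Average effect of acting on this desire on other beings' desires."""
    effects = effects_on_others[desire]
    return sum(effects) / len(effects)

for desire in effects_on_others:
    label = "helpful" if desire_score(desire) > 0 else "harmful"
    print(f"{desire}: {label} (score {desire_score(desire):+.2f})")
# desire_to_cooperate: helpful (+1.33); desire_to_dominate: harmful (-1.17)
```

The hard part in practice is estimating those effects, not stating the criterion; the criterion itself stays this simple.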