There’s really no paradox, nor any sharp moral dichotomy between human and machine reasoning. Of course the ends justify the means—to the extent that any moral agent can fully specify the ends.
But in any interestingly complex world, the combinatorial explosion of indirect consequences, and worse, the critically underspecified inputs to any such supposed moral calculation, mean that no system of reasoning can get far betting on specific long-term consequences. The moral agent must instead fall back on heuristics: hard-won wisdom built up through increasingly effective interaction with the relevant parts of its environment. In principle, this promotes a model of evolving values that grows more coherent over wider contexts, and that bears on an ever-larger scope of consequences.
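To make the intractability concrete, here is a minimal sketch in Python (all names and numbers are hypothetical, chosen for illustration): a consequence tree with branching factor b has b^d outcome paths at horizon d, so a bounded planner caps its lookahead and scores the frontier with a cheap, fallible heuristic rather than computing specific terminal consequences.

```python
# A minimal sketch (hypothetical names throughout): depth-limited
# consequence evaluation with a heuristic fallback, illustrating why
# exhaustive long-horizon moral calculation cannot work.

BRANCHING = 5   # assumed plausible actions per state
HORIZON = 20    # steps of indirect consequence we'd like to foresee

# Exhaustive evaluation must score BRANCHING ** HORIZON outcome paths:
print(f"outcome paths at full horizon: {BRANCHING ** HORIZON:,}")
# ~95 trillion -- infeasible, and the inputs were underspecified anyway.

def heuristic_value(state) -> float:
    """Stand-in for hard-won wisdom: a cheap, fallible estimate of how
    well a state coheres with the agent's evolving values."""
    return 0.0  # placeholder

def expand(state) -> list:
    """Stand-in for a consequence model: yields successor states."""
    return []   # placeholder

def evaluate(state, depth: int) -> float:
    """Look ahead only a few steps, then defer to the heuristic.
    The agent bets on near-term specifics plus learned judgment,
    not on fully specified long-term consequences."""
    successors = expand(state)
    if depth == 0 or not successors:
        return heuristic_value(state)
    return max(evaluate(s, depth - 1) for s in successors)
```

The depth cap and the heuristic sit exactly where the argument says they must: at the point where calculating specific consequences stops paying for itself.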