Fascinating, and scary, the extent to which we adhere to established models of moral reasoning despite the obvious inconsistencies. Someone here pointed out that the problem wasn’t sufficiently defined, but then proceeded to offer examples of the objective factors that would appear necessary for evaluating a consequentialist solution. Robin seized upon the “obvious” answer that any significant amount of discomfort, summed over such a vast population, would easily dominate the utilitarian disvalue of torturing a single individual, under any conceivable scaling factor. But I think he took the problem statement too literally; the discomfort of the dust mote was intended to be vanishingly small, even over a vast population, thus keeping the problem interesting rather than “obvious.”
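(To make the dominance claim concrete, a minimal sketch with symbols of my own choosing: let ε be the per-person disutility of a dust mote, N the size of the population, T the disutility of the torture, and k any finite scaling factor one might apply to privilege the single victim.)

\[
N \cdot \varepsilon \;>\; k \cdot T \qquad \text{for any fixed } \varepsilon > 0,\ k,\ T,\ \text{once } N > \frac{k \cdot T}{\varepsilon}.
\]

The aggregate dominates only if ε is treated as a fixed positive quantity; read the dust mote as vanishingly small, or as incommensurable with torture, and the “obvious” answer evaporates.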
But most interesting to me is that no one pointed out that, fundamentally, the assessed goodness of any act is a function of the values (effective, but not necessarily explicit) of the assessor, and that assessed morality is a function of group agreement on the “goodness” of an act, promoting the increasingly coherent values of the group over an increasing scope of expected consequences.
Now the values of any agent will necessarily be rooted in an evolutionary branch of reality, and this is the basis for increasing agreement as we move toward the common root. But this evolving agreement in principle on the direction of increasing morality should never be taken to point to any particular destination of goodness or morality in an objective sense, for that way lies the “repugnant conclusion” and other paradoxes of utilitarianism.
Obvious? Not at all, for while we can increasingly converge on principles of “what works” to promote our increasingly coherent values over increasing scope, our expressions of those values will increasingly diverge.