Once again we’ve highlighted the immaturity of present-day moral thinking—the kind that leads inevitably to Parfit’s Repugnant Conclusion. But any paradox is merely a matter of insufficient context; in the bigger picture all the pieces must fit.
Here we have people struggling over the relative moral weight of torture versus dust specks, without recognizing that there is no objective measure of morality, but only objective measures of agreement on moral values.
The issue at hand can be modeled coherently in terms of the relevant distances (whatever the dimensionality or particular distance metric) between the assessor’s preferred state and the assessor’s perception of the alternative states. Regardless of the particular (necessarily subjective) model and evaluation function, there must be some scalar distance between the two states within the assessor’s model, since a rational assessor can have only a single coherent model of reality and the alternative states are not identical. Furthermore, introducing a multiplier on the order of a googolplex overwhelms any possible scale in any realizable model, yielding an effective infinity and forcing one (if one’s reasoning is to be coherent) to view that state as dominant.
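A minimal sketch of that dominance claim, with purely illustrative numbers (the per-instance distances and variable names below are my own assumptions, not anything fixed by the argument; the comparison is done in log space because a googolplex cannot be represented directly):

```python
import math

# Hedged sketch: the per-instance "distances" below are illustrative
# assumptions on some subjective scale; any positive values yield the
# same conclusion.
d_torture = 1e6       # assumed distance for one person tortured 50 years
d_dust_speck = 1e-12  # assumed distance for one barely-noticed dust speck

# A googolplex, 10**(10**100), has ~10**100 digits and cannot be
# materialized as an integer, so compare in log10 space.
log10_N = 10.0 ** 100

log10_torture = math.log10(d_torture)              # log10 of torture distance
log10_specks = log10_N + math.log10(d_dust_speck)  # log10 of N * per-speck distance

# Any positive per-instance distance, however tiny, is overwhelmed by
# the multiplier: the speck total dominates by an absurd margin.
assert log10_specks > log10_torture
print(f"speck total exceeds torture by a factor of 10**{log10_specks - log10_torture:.3g}")
```

Whatever subjective model and metric the assessor adopts, the multiplier alone settles the comparison; that is the sense in which the multiplied state becomes effectively infinite and dominant.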
All of this (as presented by Eliezer) is perfectly rational, but it is merely a special case, inappropriate to decision-making within a complex, evolving context where actual consequences are effectively unpredictable.
If one faces a deep and wide chasm impeding desired trade with a neighboring tribe, should one rationally proceed to achieve the desired outcome, an optimal bridge?
Or should one focus not on perceived outcomes but on most effectively expressing one’s values-complex: i.e., valuing not the bridge but effective interaction (including trade), and proceeding to exploit the best-known principles promoting interaction, for example communications, air transport, replication rather than transport... and maybe even a bridge?
The underlying point is that within a complex evolutionary environment, specific outcomes can’t be reliably predicted. Therefore, to the extent that the system (within its environment of interaction) cannot be effectively modeled, an optimal strategy is one that discovers the preferred future through the exercise of increasingly scientific (instrumental) principles, in the service of an increasingly coherent model of one’s evolving values.
In the narrow case of a completely specified context, the two approaches yield the same answer. In the broader, more complex world we actually experience, they mark the difference between coherence and paradox.
The Repugnant Conclusion fails (as does all consequentialist ethics when extrapolated) because it presumes to model a moral scenario incorporating an objective point of view. Same problem here.