“Rather, it is essential to the concept of morality that it involves shared standards common to all fully reasonable agents.”
Richard,
If you’re going to define ‘fully reasonable’ to mean sharing your moral axioms, so that a superintelligent pencil maximizer with superhuman understanding of human ethics and philosophy is not a ‘reasonable agent,’ doesn’t this just shift the problem a level? Your morality_objectivenorms is only common to all agents with full reasonableness_RichardChappell, and you don’t seem to have any compelling reason for the latter (somewhat gerrymandered) account of reasonableness save that it’s yours/your culture’s/your species’.