Carl—“If you’re going to define ‘fully reasonable’ to mean sharing your moral axioms, so that a superintelligent pencil maximizer with superhuman understanding of human ethics and philosophy is not a ‘reasonable agent,’ doesn’t this just shift the problem a level? Your morality_objectivenorms is only common to all agents with full reasonableness_RichardChappell, and you don’t seem to have any compelling reason for the latter (somewhat gerrymandered) account of reasonableness save that it’s yours/your culture’s/your species’.”
I don’t mean to define ‘fully reasonable’ at all (though it is meant to be minimally ad hoc or gerrymandered). I take this normative notion as a conceptual primitive, and then hypothesize that it entails a certain set of moral norms. They’re probably not even my norms (in any way Eliezer could accommodate), since I’m presumably not fully reasonable myself. But they’re what I’m trying to aim for, even if I don’t always grasp them correctly.
This may sound mysterious and troublingly ungrounded to you. Yet you use terms like ‘superintelligent’ and ‘superhuman understanding’, which are no less normative than my ‘reasonable’. I think that reasonableness is a component of intelligence and (certainly) understanding, so I don’t see how these terms could properly apply to a pencil maximizer. Maybe you simply mean that it is a pencil maximizer that is instrumentally rational and perfectly proficient at Bayesian updating. But that’s not to say it’s intelligent. It might, for example, be a counterinductivist (didn’t someone mention anti-Occamists up-thread?), with completely wacky priors. I take it as a datum that this is simply unreasonable—there are other norms, besides conditionalization and instrumental rationality, which govern ‘intelligent’ or good thinking.
So I say there are brute, unanalysable facts about what’s reasonable. The buck’s gotta stop somewhere. I don’t see that any alternative theory does better than this one.