What does your advice “don’t extrapolate if you can possibly avoid it” imply in this case?
I distinguish “extrapolation” in the sense of extending an empirical regularity (as in Moore’s law) from inferring a logical consequence of a well-supported theory (as in the black hole prediction). This is really a difference of degree, not kind, but for human science the distinction is a good abstraction. For FAI, I’d say the implication is that an FAI’s morality-predicting component should be a working model of human brains in action.