A major problem in Friendly AI is how to extrapolate human morality into transhuman realms. I don’t know of any parametric approach to this problem that is without serious difficulties, but “nonparametric” doesn’t really seem to help either. What does your advice “don’t extrapolate if you can possibly avoid it” imply in this case? Pursue a non-AI path instead?
I think it implies that a Friendly sysop should not dream up a transhuman society and then try to reshape humanity into it, but rather let us evolve at our own pace, attending only to the things that are relevant at each point in time.