If my opinion on animal rights differs from my extrapolated opinion, I tend to think it’s because the person I want to be knows something that I do not. Perhaps he has a better understanding of what sentience is.
If my extrapolated opinion is just chaotic value drift, then my problem is that I'm extrapolating it wrong. On that view, you're pointing out a likely error mode of extrapolation.
I don't want my current values to be implemented, insofar as my current values are based on my current understanding. Then again, if I want my extrapolated values to be implemented, isn't that just another way of saying that my extrapolated values are my current values?