Yvian: “So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right.”
- well said. Modulo Eliezer’s lack of explicitness about his definition of “h-right”, I fail to see how the human perspective could be anything other than h-right. This post is just an applause light for the values that we currently like, and I think that is a bad sign.
If human values were so great, you wouldn’t have to artificially make them look better by saying things like
“So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right.”
@Z.M. Davis: So we are left with a difficult empirical question: to what extent do moral differences amongst humans wash out under CEV, and to what extent are different humans really in different moral reference frames?
- yes, this is a good point. And I fear that the answer depends on the details of the CEV algorithm.