I say “sort of” because I don’t find “h-right as a narrow cluster” obvious, but I don’t find it obviously wrong either.
I think part of the issue is that “narrow” might not have an obvious reference point. But it seems to me that there is a natural one: a single decision-making agent. That is, one might say “it’s narrow because the moral sense of all humans that have ever lived occupies a dot of measure 0 in the total space of all possible moral senses,” but that seems far less relevant to me than the question of whether the intersection of those moral senses is large enough to specify a meaningful agent. (Most likely there’s a more interesting aggregation procedure than intersection.)
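To make the intersection idea concrete, here is a toy sketch (my own illustration, not anything from the original discussion): represent each person’s moral sense as a set of endorsed principles, and ask whether what everyone shares is rich enough to constrain an agent. The names and principles are hypothetical.

```python
# Toy model: each "moral sense" is a set of endorsed principles.
# The naive aggregation is set intersection; the question is whether
# the result is substantive or nearly empty.
alice = {"don't murder", "keep promises", "eat meat"}
bob = {"don't murder", "keep promises", "vegetarianism"}
carol = {"don't murder", "keep promises", "property rights"}

shared = alice & bob & carol  # what survives intersection
print(sorted(shared))  # ["don't murder", "keep promises"]
```

The worry in the paragraph above maps onto this directly: as you add more (and more varied) people, the intersection can shrink toward the empty set, which is why a cleverer aggregation procedure than intersection is probably needed.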
Even if h-right isn’t a narrow cluster, I don’t think that would make the argument inconsistent; it could still work if different parts of humanity have genuinely different values, modeled as, say, h1-right, h2-right, etc. At that point I’m not sure the theory would be all that useful, though.
I do think it would make the part of the theory that wants to drop the “h” prefix and just talk about “right” useless.
As well, my (limited!) understanding of Eliezer’s broader position is that there is a particular cluster, which I’ll call h0-right, that is an attractor (the “if we knew more, thought faster, were more the people we wished we were, had grown up farther together” cluster), such that h2-right leads to h1-right leads to h0-right, h-2-right leads to h-1-right leads to h0-right, h2i-right leads to hi-right leads to h0-right, and so on. If such a cluster does exist, then it makes sense to identify it as a special cluster. Again, it’s non-obvious to me that such a cluster exists, and I haven’t read enough of the CEV paper / other work to see how this is reconciled with the orthogonality thesis; indeed, the word “orthogonality” doesn’t seem to appear in the 2004 writeup.
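The attractor claim can be sketched as a toy dynamical system (again, my own illustration, not Eliezer’s formalism): index the variants of h-right by integers, with h0-right at 0, and let one step of idealization (“knew more, thought faster, …”) move a variant one step toward the attractor.

```python
# Toy model: hn-right is the integer n; h0-right (n = 0) is the attractor.
def idealize(n: int) -> int:
    """One step of reflective idealization, moving toward h0-right."""
    if n > 0:
        return n - 1
    if n < 0:
        return n + 1
    return 0  # h0-right is a fixed point

def trajectory(n: int) -> list:
    """Follow repeated idealization until the attractor is reached."""
    steps = [n]
    while steps[-1] != 0:
        steps.append(idealize(steps[-1]))
    return steps

print(trajectory(2))   # [2, 1, 0]: h2-right -> h1-right -> h0-right
print(trajectory(-2))  # [-2, -1, 0]: h-2-right -> h-1-right -> h0-right
```

The substantive question in the paragraph above is precisely whether anything like this picture holds: whether every starting point actually converges, and converges to the *same* fixed point, rather than to several distinct ones or not at all.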