I think he means that if the pebblesorters came along, and studied humanity, they would come up with a narrow cluster which they would label “h-right” instead of their “p-right”, and that the cluster h-right is accessible to all scientifically-minded observers. It’s objective in the sense that “the number of exterior columns in the design of the Parthenon” is objective, but not in the sense that “15*2+8*2” is objective. The first is 46, but could have been something else in another universe; the second is 46, and can’t be something else in another universe.
But… it looks like he’s implying that “h-right” is special among “right”s in that it can’t be something else in another universe, but that looks wrong for simple reasons. It’s also not obvious to me that h-right is a narrow cluster.
But… it looks like he’s implying that “h-right” is special among “right”s in that it can’t be something else in another universe, but that looks wrong for simple reasons. It’s also not obvious to me that h-right is a narrow cluster.
It’s because you’re a human. You can’t divorce yourself from being human while thinking about morality.
It’s because you’re a human. You can’t divorce yourself from being human while thinking about morality.
It’s not clear to me that the first of those statements implies the second of those statements. As far as I can tell, I can divorce myself from being human while thinking about morality. Is there some sort of empirical test we can do to determine whether or not that’s correct?
As far as I can tell, I can divorce myself from being human while thinking about morality.
Seems to me that if you weren’t human, you wouldn’t care about morality (and instead care about paperclips or whatever). So even if you try to imagine yourself as some kind of neutral disembodied mind, the fact that this mind is interested in morality (instead of paperclips) shows that it’s a human in disguise. Otherwise it would be very difficult to locate morality in the vast set of “things a mind could consider valuable”, so there is almost zero probability that the neutral disembodied mind would spend even a few seconds thinking about it.
Seems to me that if you weren’t human, you wouldn’t care about morality (and instead care about paperclips or whatever).
That holds if you take “morality” to be “my peculiar preference for the letter v,” but it seems to me that a more natural meaning of “morality” is “things other people should do.” Any agent which interacts with other agents has a vested stake both in how windfalls are distributed and in the process used to determine how they are distributed, and so I’d like to talk about “fair” in a way that paperclippers, pebblesorters, and humans find interesting.
That is, how is it difficult to think about “my particular value system,” “value systems in general,” “my particular protocols for interaction,” and “protocols for interaction in general” as different things? Why, when Eliezer is so quick to taboo words and get to the heart of things in other areas, does he not do so here?
So even if you try to imagine yourself as some kind of neutral disembodied mind, the fact that this mind is interested in morality (instead of paperclips) shows that it’s a human in disguise.
But when modelling a paperclipper, the neutral disembodied mind isn’t interested in human morality; it’s interested in paperclips, and thinks of the desire for paperclips as the universal impulse. That is to say, I think I have more control over my interests than this thought experiment is presuming.
You’ve passed the recursive buck here.
Sort of? I’m not trying to explain morality, but label it, and I think that the word “should” makes a decent label for the cluster of things which make up the “morality” I was trying to point to. The other version I came up with was like thirty words long, and I figured that ‘should’ was a better choice than that.
I dare say that a disembodied, solipsistic mind wouldn’t need to think much about morality. But an embodied mind, in a society, competing for resources with other agents, interacting with them in painful and pleasant ways, would need something morality-like, some way of regulating interactions and assigning resources. “Social” isn’t some tiny speck in mindspace; it’s a large chunk.
It’s true that he can’t divorce himself from being human in a sense, but a few nitpicks:
1- In theory (although probably not in practice), Vaniver could imagine himself as another sort of hypothetically or actually possible moral being. Apes have morality, for example. You could counter with Eliezer’s definition of morality here, but his case for moral convergence is fairly poor.
2- Even a completely amoral being can “think about morality” in the sense of attempting to predict human actions and taking moral codes into account.
3- I know this is very pedantic, but I would contend there are possible universes in which the phrase “You can’t divorce yourself from being human while thinking about morality” does not apply. An Aristotelian universe in which creatures have purposes and inherently gain satisfaction from fulfilling their purpose would use an Aristotelian metaethics of purpose-fulfilment, and a Christian universe a metaethics of the Will of God; both would apply.
No, there’s not, which is rather the point. It’s like asking “what would it be like to move faster than the speed of light?” The very question is silly, and the results of taking it seriously aren’t going to be any less silly.
No, there’s not, which is rather the point. It’s like asking “what would it be like to move faster than the speed of light?” The very question is silly, and the results of taking it seriously aren’t going to be any less silly.
I still don’t think I’m understanding you. I can imagine a wide variety of ways in which it could be possible to move more quickly than c, and a number of empirical consequences the universe would have if it worked in any of those ways, and tests have shown that this universe does not behave in any of them.
(If you’re trying to demonstrate a principle by example, I would prefer you discuss the principle explicitly.)
Datapoint: I didn’t find Metaethics all that confusing, although I am not sure I agree with it.
It looks like he’s implying that “h-right” is special among “right”s in that it can’t be something else in another universe, but that looks wrong for simple reasons. It’s also not obvious to me that h-right is a narrow cluster.
I had this impression too, and have more or less the same sort-of-objection to it. I say “sort of” because I don’t find “h-right as a narrow cluster” obvious, but I don’t find it obviously wrong either. It feels like it should be a testable question but I’m not sure how one would go about testing it, given how crap humans are at self-reporting their values and beliefs.
On edit: Even if h-right isn’t a narrow cluster, I don’t think it would make the argument inconsistent; it could still work if different parts of humanity have genuinely different values modeled as, say, h1-right, h2-right, etc. At that point I’m not sure the theory would be all that useful, though.
I say “sort of” because I don’t find “h-right as a narrow cluster” obvious, but I don’t find it obviously wrong either.
I think part of the issue is that “narrow” might not have an obvious reference point. But it seems to me that there is a natural one: a single decision-making agent. That is, one might say “it’s narrow because the moral sense of all humans that have ever lived occupies a dot of measure 0 in the total space of all possible moral senses,” but that seems far less relevant to me than the question of whether the intersection of those moral senses is large enough to create a meaningful agent. (Most likely there’s a more interesting aggregation procedure than intersection.)
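(To make that concrete, here is a minimal toy sketch in Python of what I mean by intersection versus a slightly more interesting aggregation rule. The “moral senses” and the majority-vote rule are entirely made up for illustration; nothing here is meant as a real model of human values.)

```python
# Toy sketch: strict intersection versus a softer aggregation rule over
# hypothetical "moral senses". All data here is made up for illustration.
from collections import Counter

# Model each agent's moral sense (very crudely) as the set of norms it endorses.
moral_senses = {
    "h1": {"keep promises", "share windfalls", "punish defectors"},
    "h2": {"keep promises", "share windfalls", "forgive defectors"},
    "h3": {"keep promises", "punish defectors", "forgive defectors"},
}

# Strict intersection: only norms every agent endorses survive.
intersection = set.intersection(*moral_senses.values())

# A slightly more interesting rule: keep any norm endorsed by a majority.
counts = Counter(norm for sense in moral_senses.values() for norm in sense)
majority = {norm for norm, n in counts.items() if n > len(moral_senses) / 2}

print("intersection:", intersection)  # only {'keep promises'}
print("majority:", majority)          # everything endorsed by at least 2 of 3
```

Even in this toy setting the strict intersection comes out much narrower than the majority rule, which is the kind of gap the choice of aggregation procedure has to adjudicate.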
Even if h-right isn’t a narrow cluster, I don’t think it would make the argument inconsistent; it could still work if different parts of humanity have genuinely different values modeled as, say, h1-right, h2-right, etc. At that point I’m not sure the theory would be all that useful, though.
I do think it makes the part of the argument that wants to drop the “h” prefix, and just talk about “right”, useless.
As well, my (limited!) understanding of Eliezer’s broader position is that there is a particular cluster, which I’ll call h0-right, which is an attractor- the “if we knew more, thought faster, were more the people we wished we were, had grown up farther together” cluster- such that we can see h2-right leads to h1-right leads to h0-right, and h-2-right leads to h-1-right leads to h0-right, and h2i-right leads to hi-right leads to h0-right, and so on. If such a cluster does exist, then it makes sense to identify it as a special cluster. Again, it’s non-obvious to me that such a cluster exists, and I haven’t read enough of the CEV paper / other work to see how this is reconciled with the orthogonality thesis, and it appears that word doesn’t appear in the 2004 writeup.
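(As a purely illustrative toy model of what “attractor” would mean here, and not anything taken from the CEV writeup, the sketch below just iterates a made-up extrapolation step and watches different starting value-vectors converge to the same fixed point. The extrapolation function, the starting points, and the shared target are all invented for the example.)

```python
# Toy illustration of an attractor: different starting "value vectors" all
# converge to the same fixed point under a repeated extrapolation step.
# The step, the starting points, and the target are invented for the example.

def extrapolate(values, target=(0.5, 0.5, 0.5), rate=0.3):
    """One made-up step of 'knowing more / thinking faster': move each
    component of the value vector partway toward a shared target."""
    return tuple(v + rate * (t - v) for v, t in zip(values, target))

starting_points = {
    "h2":  (0.9, 0.1, 0.4),
    "h-2": (0.2, 0.8, 0.6),
    "h2i": (0.5, 0.9, 0.1),
}

for name, values in starting_points.items():
    for _ in range(50):          # iterate the extrapolation step
        values = extrapolate(values)
    print(name, [round(v, 3) for v in values])
# All three trajectories end up at (0.5, 0.5, 0.5): the "h0-right" attractor.
```

Of course, this toy model assumes the convergence it displays (every step moves toward a shared target), so it only illustrates what the claim would mean; whether actual human values behave like this is exactly the part that seems non-obvious to me.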