It’s because you’re a human. You can’t divorce yourself from being human while thinking about morality.
It’s not clear to me that the first of those statements implies the second of those statements. As far as I can tell, I can divorce myself from being human while thinking about morality. Is there some sort of empirical test we can do to determine whether or not that’s correct?
As far as I can tell, I can divorce myself from being human while thinking about morality.
Seems to me that if you weren’t human, you wouldn’t care about morality (and instead care about paperclips or whatever). So even if you try to imagine yourself as some kind of neutral disembodied mind, the fact that this mind is interested in morality (instead of paperclips) shows that it’s a human in disguise. Otherwise it would be very difficult to locate morality in the vast set of “things a mind could consider valuable”, so there is almost zero probability that the neutral disembodied mind would spend even a few seconds thinking about it.
Seems to me that if you weren’t human, you wouldn’t care about morality (and instead care about paperclips or whatever).
That’s true if you take “morality” to be “my peculiar preference for the letter v,” but it seems to me that a more natural meaning of “morality” is “things other people should do.” Any agent which interacts with other agents has a vested stake both in how windfalls are distributed and in the process used to determine how windfalls are distributed, and so I’d like to talk about “fair” in a way that paperclippers, pebblesorters, and humans all find interesting.
That is, how is it difficult to think about “my particular value system,” “value systems in general,” “my particular protocols for interaction,” and “protocols for interaction in general” as different things? Why, when Eliezer is so quick to taboo words and get to the heart of things in other areas, does he not do so here?
So even if you try to imagine yourself as some kind of neutral disembodied mind, the fact that this mind is interested in morality (instead of paperclips) shows that it’s a human in disguise.
But when modelling a paperclipper, the neutral disembodied mind isn’t interested in human morality; it’s interested in paperclips, and thinks of the desire for paperclips as the universal impulse. That is to say, I think I have more control over my interests than this thought experiment presumes.
You’ve passed the recursive buck here.
Sort of? I’m not trying to explain morality, but label it, and I think that the word “should” makes a decent label for the cluster of things which make up the “morality” I was trying to point to. The other version I came up with was like thirty words long, and I figured that ‘should’ was a better choice than that.
I dare say that a disembodied, solipsistic mind wouldn’t need to think much about morality. But an embodied mind, in a society, competing for resources with other agents, interacting with them in painful and pleasant ways, would need something morality-like, some way of regulating interactions and assigning resources. “Social” isn’t some tiny speck in mindspace; it’s a large chunk.
It’s true that he can’t divorce himself from being human in a sense, but a few nitpicks:
1- In theory (although probably not in practice), Vaniver could imagine himself as another sort of hypothetically or actually possible moral being. Apes have morality, for example. You could counter with Eliezer’s definition of morality here, but his case for moral convergence is fairly poor.
2- Even a completely amoral being can “think about morality” in the sense of attempting to predict human actions and taking moral codes into account.
3- I know this is very pedantic, but I would contend there are possible universes in which the phrase “You can’t divorce yourself from being human while thinking about morality” does not apply. An Aristotelian universe in which creatures have purposes and inherently gain satisfaction from fulfilling their purpose would use an Aristotelian metaethics of purpose-fulfilment, and a Christian universe a metaethics of the Will of God; both would apply.
No, there’s not, which is rather the point. It’s like asking “what would it be like to move faster than the speed of light?” The very question is silly, and the results of taking it seriously aren’t going to be any less silly.
I still don’t think I’m understanding you. I can imagine a wide variety of ways in which it could be possible to move more quickly than c, along with the empirical results we would see if the universe were any of those ways, and tests have shown that this universe does not behave in any of those ways.
(If you’re trying to demonstrate a principle by example, I would prefer you discuss the principle explicitly.)