All of this is why Eliezer’s morality sequence is wrong. Version 2 is basically right. The Baby-Eaters were not immoral but moral, according to a different morality. That is not subjectivism, because it is an objective fact that Baby-Eaters are what they are and are obligated by Baby-Eater morality, and that humans are humans and are obligated by human morality.
But Eliezer (and Bound-Up) do not admit this, nonsensically asserting that non-humans should be obligated by human morality.
To be honest, Eliezer made a slightly different argument:
1) humans share (because of evolution) a psychological unity that is not affected by regional or temporal distinctions;
2) this unity entails a set of values that is inescapable for every human being; its collective effect on human cognition and action is what we dub “morality”;
3) Clippy, Elves and Pebblesorters, being fundamentally different, share different sets of values that guide their actions and what they care about;
4) those sets are perfectly coherent and sound for those who entertain them, but we should nonetheless not call them “Clippy’s, Elves’ or Pebblesorters’ morality”, because words should be used in such a way as to maximize their usefulness in carving reality: since we cannot step outside our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term “morality” for those value sets, and should instead use “primality” or other words.
That’s it: you can debate any single point, but I think the difference is only formal. The underlying understanding, that “motivating set of values” is a two-place predicate, is the same; Yudkowsky simply preferred to use different words for the different partially applied predicates, on the grounds of points 1 and 4.
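The “two-place predicate” reading can be made concrete with a small sketch. Rightness takes both a value system and an action, and names like “morality” or “primality” label the predicate with its first argument already filled in. All names and example values below are illustrative inventions, not anything from the original posts:

```python
# A sketch of rightness as a two-place predicate: fixing the first
# argument (the value system) yields the one-place predicates the
# discussion calls "morality", "primality", etc.
from functools import partial

def is_right(value_system: set, action: str) -> bool:
    """Two-place predicate: is `action` endorsed by `value_system`?"""
    return action in value_system

# Hypothetical, drastically simplified value sets.
HUMAN_VALUES = {"protect children", "seek truth"}
PEBBLESORTER_VALUES = {"sort pebbles into prime heaps", "seek truth"}

# Yudkowsky's naming convention: a distinct word for each partial application.
morality = partial(is_right, HUMAN_VALUES)        # one-place predicate
primality = partial(is_right, PEBBLESORTER_VALUES)

print(morality("protect children"))                # True
print(primality("protect children"))               # False
print(primality("sort pebbles into prime heaps"))  # True
```

The disagreement in the thread is then about naming only: whether the unapplied two-place predicate deserves the word “morality”, or only the human partial application does.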
those sets are perfectly coherent and sound for those who entertain them, but we should nonetheless not call them “Clippy’s, Elves’ or Pebblesorters’ morality”, because words should be used in such a way as to maximize their usefulness in carving reality: since we cannot step outside our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term “morality” for those value sets, and should instead use “primality” or other words.
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me. And yo mama ain’t no Mama cause she ain’t my Mama!
Yudkowsky isn’t being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
And it’s not like the issue isn’t important, either: obviously the permissibility of imposing one’s values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reason that you are differently mothered, not unmothered.
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me.
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
Yudkowsky isn’t being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
On this we surely agree, I just find the new rule better than the old one. But this is the least important part of the whole discussion.
Obviously the permissibility of imposing one’s values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reason that you are differently mothered, not unmothered.
This is well explored in “Three Worlds Collide”. Yudkowsky’s vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I’m using your convention). When different worlds collide, it is moral for us to stop the Babyeaters from eating babies, and it is moral for the Superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact altogether.
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
That seems different to what you were saying before.
This is well explored in “Three Worlds Collide”. Yudkowsky’s vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I’m using your convention). When different worlds collide, it is moral for us to stop the Babyeaters from eating babies, and it is moral for the Superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact altogether.
There’s not much objectivity in that.
Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.
Maybe we should be abandoning the objectivity requirement as impossible. As I understand it, this is in fact core to Yudkowsky’s theory: an “objective” morality would be the tablet he refers to as something to ignore.
I’m not entirely on Yudkowsky’s side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is “What do I want?”. There is the prospect of coordination through shared moral wants, but there is the prospect of coordination through shared selfish wants as well. Ideas of “the good of society” or “objective ethical truth” are simply flawed concepts.
But I do think Yudkowsky has a good point both of you have been ignoring. His stone tablet analogy, if I remember correctly, sums it up.
“I think Eliezer is correct in showing that the only solution is avoiding contact altogether.”: Assumes that there is such a thing as an objective solution, if implicitly.
“The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.”: Passenger and cargo ships both have purposes within human morality. Alien moralities are likely to contradict each other.
“There’s not much objectivity in that.”: What if objectivity in the sense you describe is impossible?
“Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.”: If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
Maybe we should be abandoning the objectivity requirement as impossible.
Maybe we should also consider in parallel the question of whether objectivity is necessary. If objectivity is both necessary to morality and impossible, then nihilism results.
The basic, pragmatic argument for the objectivity or quasi-objectivity of ethics is that it is connected to practices of reward and punishment, which either happen or not.
As I understand it, this is in fact core to Yudkowsky’s theory: an “objective” morality would be the tablet he refers to as something to ignore.
I’m not entirely on Yudkowsky’s side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is “What do I want?”.
If you are serious about the unselfish bit, then surely it boils down to “what do they want” or “what do we want”.
What if objectivity in the sense you describe is impossible?
I don’t accept the Moral Void argument, for the reasons given. Do you have another?
If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
The idea that humans are uniquely motivated by human morality isn’t put forward as an answer to the amoralist challenge; it is put forward as a way of establishing something like moral objectivism.
“words should be used in such a way to maximize their usefulness in carving reality”
That does not mean that we should not use general words, but that we should have both general words and specific words. That is why it is right to speak of morality in general, and human morality in particular.
As I stated in other replies, it is not true that this disagreement is only about words. In general, when people disagree about how words should be used, that is because they disagree about what should be done. Because when you use words differently, you are likely to end up doing different things. And I gave concrete places where I disagree with Eliezer about what should be done, ways that correspond to how I disagree with him about morality.
In general I would describe the disagreement in the following way, although I agree that he would not accept this characterization: Eliezer believes that human values are intrinsically arbitrary. We just happen to value a certain set of things, and we might have happened to value some other random set. In whatever situation we found ourselves, we would have called those things “right,” and that would have been a name for the concrete values we had.
In contrast, I think that we value the things that are good for us. What is “good for us” is not arbitrary, but an objective fact about relationships between human nature and the world. Now there might well be other rational creatures and they might value other things. That will be because other things are good for them.
I agree that not everything in particular that people value is good for them. I say that everything that they value in a fundamental way is good for them. If you disagree, and think that some people value things that are bad for them in a fundamental way, how are they supposed to find out that those things are bad for them?
You are currently saying that the good is what people fundamentally value, and what people fundamentally value is good....for them. To escape vacuity, the second phrase would need to be cashed out as something like “side survival”.
But whose survival? If I fight for my tribe, I endanger my own survival; if I dodge the draft, I endanger my tribe’s.
Real-world ethics has a pretty clear answer: the group wins every time. Bravery beats cowardice, generosity beats meanness: these are human universals. If you reverse-engineer that observation back into a theoretical understanding, you get the idea that morality is something programmed into individuals by communities to promote the survival and thriving of communities.
But that is a rather different claim to The Good is the Good.
Clarification please. How do you avoid this supposed vacuity applying to basically all definitions? Taking a quick definition from a Google Search:
A: “I define a cat as a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws.”
B: “Yes, but is that a cat?”
Which could eventually lead back to A saying that:
A: “Yes you’ve said all these things, but it basically comes back to the claim a cat is a cat.”
Definitions are at best a record of usage. Usage can be broadened to include social practices such as reward and punishment. And the jails are full of people who commit theft (selfishness), rape (ditto), etc. And the medals and plaudits go to the brave (altruism), the generous (ditto), etc.
I’m not sure how you’re addressing what I said. What do you mean by escaping vacuity? I used “good for them” in that comment because you did, when you said that not everything people value is good for them. I agree with that, if you mean the particular values that people have, but not in regard to their fundamental values.
Saying that something is morally good means “doing this thing, after considering all the factors, is good for me,” and saying that it is morally bad means “doing this thing, after considering all the factors, is bad for me.” Of course something might be somewhat good, without being morally good, because it is good according to some factors, but not after considering all of them. And of course whether or not it will benefit your communities is one of the factors.
I’m going to assume you mean what you say and are not just arguing about definitions. In that case:
You would be an apologist for HP Lovecraft’s Azathoth, at best, if you lived in his universe. There’s no objective criterion you could give to explain why that wouldn’t be moral, unless you beg the question and bring in moral criteria to judge a possible ‘ground of morality.’ Yes, I’m saying Nyarlathotep should follow morality instead of the supposed dictates of his alien god. And that’s not a contradiction but a tautology.
While I’m on the subject, Aquinian theology is an ugly vulgarization of Aristotle’s, the latter being more naturally linked to HPL’s Azathoth or the divine pirates of Pastafarianism.
That is why it is right to speak of morality in general, and human morality in particular.
I prefer Eliezer’s way because it makes evident, when talking to someone who hasn’t read the Sequence, that there are different sets of self-consistent values, but it’s an agreement that people should have before starting to debate, and I personally would have no problem talking about different moralities.
Eliezer believes that human values are intrinsically arbitrary
But does he? Because that would be demonstrably false. Maybe arbitrary in the sense of “occupying a tiny space in the whole set of all possible values”, but since our morality is shaped by evolution, it will surely contain some historical accidents but also a lot of useful heuristics. No human can value drinking poison, for example.
What is “good for us” is not arbitrary, but an objective fact about relationships between human nature and the world
If you were to unpack “good”, would you insert other meanings besides “what helps our survival”?
“There are different sets of self-consistent values.” This is true, but I do not agree that all logically possible sets of self-consistent values represent moralities. For example, it would be logically possible for an animal to value nothing but killing itself; but this does not represent a morality, because such an animal cannot exist in reality in a stable manner. It cannot come into existence in a natural way (namely by evolution) at all, even if you might be able to produce one artificially. If you do produce one artificially, it will just kill itself and then it will not exist.
This is part of what I was saying about how when people use words differently they hope to accomplish different things. I speak of morality in general, not to mean “logically consistent set of values”, but a set that could reasonably exist in the real world with a real intelligent being. In other words, restricting morality to human values is an indirect way of promoting the position that human values are arbitrary.
As I said, I don’t think Eliezer would accept that characterization of his position, and you give one reason why he would not. But he has a more general view where only some sets of values are possible for merely accidental reasons, namely because it just happens that things cannot evolve in other ways. I would say the contrary—it is not an accident that the value of killing yourself cannot evolve, but this is because killing yourself is bad.
And this kind of explains how “good” has to be unpacked. Good would be what tends to cause tendencies towards itself. Survival is one example, but not the only one, even if everything else will at least have to be consistent with that value. So e.g. not only is survival valued by intelligent creatures in all realistic conditions, but so is knowledge. So knowledge and survival are both good for all intelligent creatures. But since different creatures will produce their knowledge and survival in different ways, different things will be good for them in relation to these ends.
They eat innocent, sentient beings who suffer and are terrified because of it. That’s wrong, no matter who does it.
It may not be un-baby-eater-ey, but it’s wrong.
Likewise, not eating babies is un-baby-eater-ey, no matter who does it. It might not be wrong, but it is un-baby-eater-ey.
We have two species who agree on the physical effects of certain actions. One species likes the effects of the action, and the other doesn’t. The difference between them is what they value.
“Right” just means “in harmony with this set of values.” Baby-eater-ey means “in harmony with this other set of values.”
There’s no contradiction in saying that something can be in harmony with one set of values and not in harmony with another set of values. Hence, there’s no contradiction in saying that eating babies is wrong, and is also baby-eater-ey. You can also note that the action is found compelling by one species and not compelling by another, and there is no contradiction in this, either.
What could “right” mean if we have “right according to these morals” AND “right according to these other, contradictory morals?”
I see one possibility: “right” is taken to mean “in harmony with any set of values.” Which, of course, makes it meaningless. Do you see another possibility?
I disagree that it is wrong for them to do that. And this is not just a disagreement about words: I disagree that Eliezer’s preferred outcome for the story is better than the other outcome.
“Right” is just another way of saying “good”, or anyway “reasonably judged to be good.” And good is the kind of thing which naturally results in desire. Note that I did not say it is “what is desired”, any more than you want to say that what someone values at a particular moment is necessarily right. I said it is what naturally results in desire. This definition is in fact very close to yours, except that I don’t make the whole universe revolve around human beings by saying that nothing is good except what is good for humans. And since different kinds of things naturally result in desire for different kinds of beings (e.g. humans and babyeaters), those different things are right for different kinds of beings.
That does not make “right” or “good” meaningless. It makes it relative to something. And this is an obvious fact about the meaning of the words; to speak of good is to speak of what is good for someone. This is not subjectivism, since it is an objective fact that some things are good for humans, and other things are good for other things.
Nor does this mean that right means “in harmony with any set of values.” It has to be in harmony with some real set of values, not an invented one, nor one that someone simply made up—for the same reasons that you do not allow human morals to be simply invented by a random individual.
Returning to the larger point, as I said, this is not just a disagreement about words, but about what is good. People maintaining your theory (like Eliezer) hope to optimize the universe for human values. I have no such hope, and I think it is a perverse idea in the first place.
“Right” is just another way of saying “good”, or anyway “reasonably judged to be good.”
No, moral rightness and wrongness have implications about rule following and rule breaking, reward and punishment, that moral goodness and badness don’t. Giving to charity is virtuous, but not giving to charity isn’t wrong and doesn’t deserve punishment.
Similarly, moral goodness and hedonic goodness are different.
I’m not sure what you’re saying. I would describe giving to charity as morally good without implying that not giving is morally evil.
I agree that moral goodness is different from hedonic goodness (which I assume means pleasure), but I would describe that by saying that pleasure is good in a certain way, but may or may not be good all things considered, while moral goodness means what is good all things considered.
You’re saying that “right” just means “in harmony with any set of values held by sentient beings?”
So, baby-eating is right for baby-eaters, wrong for humans, and all either of those statements means is that they are/aren’t consistent with the fundamental values of the two species?
That is most of it. But again, I insist that the disagreement is real. Because Eliezer would want to stomp out baby-eater values from the cosmos. I would not.
I do not support “letting a sentient being eat babies just because it wants to” in general. So for example if there is a human who wants to eat babies, I would prevent that. But that is because it is bad for humans to eat babies. In the case of the babyeaters, it is by stipulation good for them.
That stipulation itself, by the way, is not really a reasonable one. Some species do sometimes eat babies, and it is possible that such a species could develop reason. But it is likely that the very process of developing reason would impede the eating of babies, and eating babies would become unusual, much as cannibalism is unusual in human societies. And just as cannibalism is wrong for humans, eating babies would become wrong for that species. But Eliezer makes the stipulation because, as I said, he believes that human values are intrinsically arbitrary, from an absolute standpoint.
So there is a metaethical disagreement. You could put it this way: I think that reality is fundamentally good, and therefore actually existing species will have fundamentally good values. Eliezer thinks that reality is fundamentally indifferent, and therefore actually existing species will have fundamentally indifferent values.
But given the stipulation, yes I am serious. And no I would not accept those solutions, unless those solutions were acceptable to them anyway—which would prove my point that eating babies was not actually good for them, and not actually a true part of their values.
When you say reality is fundamentally “good,” doesn’t that translate (in your terms) to just a tautology?
Aren’t you just saying that the desires of sentient beings are fundamentally “the desires of sentient beings?”
It sounds like you’re saying that you personally value sentient beings fulfilling their fundamental desires. Do you also value a sentient being fulfilling its fundamental desire to eliminate sentient beings that value sentient beings that fulfill their fundamental desires?
That is, if it wants to kill you because you value that, are you cool with that?
What do you do, in general, when values clash? You have some members of a species who want to eat their innocent, thinking children, and you have some innocent, thinking children who don’t want to be eaten. On what grounds do you side with the eaters?
“When you say reality is fundamentally “good,” doesn’t that translate (in your terms) to just a tautology?” Sort of, but not quite.
“Aren’t you just saying that the desires of sentient beings are fundamentally ‘the desires of sentient beings’?” No.
First of all, the word “tautology” is vague. I know it is a tautology to say that red is red. But is it a tautology to say that two is an even number? That’s not clear. But if a tautology means that the subject and predicate mean the same thing, then saying that two is even is definitely not a tautology, because they don’t mean the same thing. And in that way, “reality is fundamentally good” is not a tautology, because “reality” does not have the same meaning as “good.”
Still, if you say that reality is fundamentally something, and you are right, there must be something similar to a tautology there. Because if there is nothing even like a tautology, you will be saying something false, as if you were to say that reality is fundamentally blue. That’s not a tautology at all, but it’s also false. But if what you say is true, then “being real” and “being that way” must be very deeply intertwined, and most likely even the meaning will be very close. Otherwise how would it turn out that reality is fundamentally that way?
I have remarked before that we get the idea of desire from certain feelings, but what makes us call it desire instead of a different feeling is not the subjective quality of the feeling, but the objective fact that when we feel that way, we tend to do a particular thing. E.g. when we are hungry, we tend to go and find food and eat it. So because we notice that we do that, we call that feeling a desire for food. Now this implies that the most important thing about the word “desire” is that it is a tendency to do something, not the fact that it is also a feeling.
So if we said, “everyone does what they desire to do,” it would mean something like “everyone does what they tend to do.” That is not a tautology, because you can occasionally do something that you do not generally tend to do, but it is very close to a tautology.
We get the idea of “good” from the fact that we are tending to do various things, and we assume that those various things must have something in common that explains why we are tending to do all of them. We call that common thing “good.”
Now you could say, “the common thing is that you desire all of those things.” But that is not the way the human mind is working here, whether it is right or wrong. We already know that we desire them all. We want to know “why” we desire them all. And we explain that by saying that they all have something that we call “goodness.” We know it explains our desires, but that does not mean we know anything else about it.
This is really the exact point where I disagree with Eliezer. I think he believes that the common thing is the desire, and there is no other explanation except for random facts in the world that are responsible for our individual desires and for desires generally common in the human species. I think that the natural intuition that there is another explanation is correct. Now you might want to ask, “then what is good, apart from ‘what explains our desires’”?
And I have already started to explain this in other comments, although I did not go into detail. I noted above that the most important thing about “desire” is that it is a tendency to do something. So likewise the most important thing about the word “good” is that it explains the tendency to do something. Now consider this fact about things: things tend to exist. And existing things tend to continue to exist. Why do they tend to do those things? In the first place, it is obvious why things tend to exist. Because they are real, and reality involves existence. And tending to continue to exist might be less obvious, but we can see that at least the particular reality of the thing is responsible for that tendency: why do rocks tend to continue to exist? Part of the reality of the rock (in this case its structure) is responsible for that tendency. It tends to continue to exist because of the reality it has.
In other words, the thing that explains why things tend to do things is reality itself. So reality is fundamentally good, that is, the explanation for why things tend to do the things they do is fundamentally their reality. Note that this last sentence is not a tautology, in that it has a distinct subject and predicate.
Richard Dawkins says that reality looks just as we would expect if it is fundamentally indifferent. And I am pretty sure Eliezer agrees with him about this. But in fact it does not look the way I would expect if it were fundamentally indifferent: I would expect in that situation that things would not have any tendencies at all, so all things would be random.
I will answer the things about my values in another comment.
“It sounds like you’re saying that you personally value sentient beings fulfilling their fundamental desires.” Yes.
“Do you also value a sentient being fulfilling its fundamental desire to eliminate sentient beings that value sentient beings that fulfill their fundamental desires?”
No sentient being has, or can have (at least in a normal way) that desire as a “fundamental desire.” It should be obvious why such a value cannot evolve, if you consider the matter physically. Considered from my point of view, it cannot evolve precisely because it is an evil desire.
Also, it is important here that we are speaking of “fundamental” desires, in that a particular sentient being sometimes has a particular desire for something bad, due to some kind of mistake or bad situation. (E.g. a murderer has the desire to kill someone, but that desire is not fundamental.)
“You have some members of a species who want to eat their innocent, thinking children, and you have some innocent, thinking children who don’t want to be eaten. On what grounds do you side with the eaters?”
As I said in another comment, the babyeater situation is contrived, and most likely it is impossible for those values to evolve in reality. But stipulating that they do, then the desires of the babies are not fundamental, because if the baby grows up and learns more about reality, it will say, “it would have been right to eat me.”
I am pretty sure that people even in the original context brought attention to the fact that there are a great many ways that we treat children in which they do not want to be treated, to which no one at all objects (e.g. no one objects if you prevent a child from running out into the street, even if it wants to. And that is because the desires are not fundamental.)
Your objection is really something like, “but that desire must be fundamental because everything has the fundamental desire not to be eaten.” Perhaps. But as I said, that simply means that the situation is contrived and false.
The situation can happen with an intelligent species and a non-intelligent species, and has happened on earth—e.g. people kill and eat other animals. And although I do not object to people doing this, and I think it is morally right, I do not take “sides,” because I would change the values neither of the people nor of the animals. Both desires are good, and the behavior on both sides is right (although technically we should not be speaking of right and wrong in respect to non-rational creatures.)
It probably could not happen with two intelligent species, if only for economic reasons.
I don’t know. I wonder if some extra visualization would help.
Would you help catch the children so that their parents could eat them? If they pleaded with you, would you really think “if you were to live, you would one day agree this was good, therefore it is good, even though you don’t currently believe it to be?”
Why say the important desire is the one the child will one day have, instead of the one that the adult used to have?
I would certainly be less interested in aliens obtaining what is good for them, than in humans obtaining what is good for them. However, that said, the basic response (given Eliezer’s stipulations), is yes, I would, and yes I would really think that.
The adult has not only changed his desire, he has changed his mind as well, and he has done that through a normal process of growing up. So (again given Eliezer’s stipulations), it is just as reasonable to believe the adults here as it is to believe human adults. It is not a question of talking about whose desire is important, but whose opinion is correct.
We get the idea of “good” from the fact that we are tending to do various things, and we assume that those various things must have something in common that explains why we are tending to do all of them. We call that common thing “good.”
…a word which means a number of things, which are capable of conflicting with each other. Moral good refers to things that are beneficial at the group level, but which individuals tend not to do without encouragement.
All of this is why Eliezer’s morality sequence is wrong. Version 2 is basically right. The Baby-Eaters were not immoral, but moral, but according to a different morals. That is not subjectivism, because it is an objective fact that Baby-Eaters are what they are, and are obligated by Baby-Eater morality, and humans are humans, and are obligated by human morality.
But Eliezer (and Bound-Up) do not admit this, nonsensically asserting that non-humans should be obligated by human morality.
To be honest, Eliezer made a slightly different argument:
1) humans share (because of evolution) a psychological unity that is not affected by regional or temporal distinctions;
2) this unity entails a set of values that is inescapable for every human being; we dub its collective effect on human cognition and action “morality”;
3) Clippy, Elves and Pebblesorters, being fundamentally different, share a different set of values that guide their actions and what they care about;
4) those values are perfectly coherent and sound for those who hold them, but we should not call them “Clippy’s, Elves’ or Pebblesorters’ morality”, because words should be used in such a way as to maximize their usefulness in carving reality: since we cannot step outside our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term “morality” for those values, and should instead use “primality” or other words.
That’s it: you can debate any single point, but I think the difference is only formal. The underlying understanding, that “motivating set of values” is a two-place predicate, is the same; Yudkowsky, though, preferred to use different words for different partially applied predicates, on the grounds of points 1 and 4.
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me. And yo mama ain’t no Mama cause she ain’t my Mama!
Yudkowsky isn’t being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
And it’s not like the issue isn’t important, either… obviously the permissibility of imposing one’s values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reasons that you are differently mothered, not unmothered.
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
On this we surely agree, I just find the new rule better than the old one. But this is the least important part of the whole discussion.
This is well explored in “Three Worlds Collide”. Yudkowsky’s vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I’m using your convention). When different worlds collide, it is moral for us to stop the babyeaters from eating babies, and it is moral for the superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact at all.
That seems different to what you were saying before.
There’s not much objectivity in that.
Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.
Maybe we should be abandoning the objectivity requirement as impossible. As I understand it, this is in fact core to Yudkowsky’s theory: an “objective” morality would be the tablet he refers to as something to ignore.
I’m not entirely on Yudkowsky’s side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is “What do I want?”. There is the prospect of coordination through shared moral wants, but there is the prospect of coordination through shared selfish wants as well. Ideas of “the good of society” or “objective ethical truth” are simply flawed concepts.
But I do think Yudkowsky has a good point both of you have been ignoring. His stone tablet analogy, if I remember correctly, sums it up.
“I think Eliezer is correct in showing that the only solution is avoiding contact at all.”: Assumes that there is such a thing as an objective solution, if implicitly.
“The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.”: Passenger and cargo ships both have purposes within human morality. Alien moralities are likely to contradict each other.
“There’s not much objectivity in that.”: What if objectivity in the sense you describe is impossible?
“Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.”: If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
Maybe we should also consider in parallel the question of whether objectivity is necessary. If objectivity is both necessary to morality and impossible, then nihilism results.
The basic, pragmatic argument for the objectivity or quasi-objectivity of ethics is that it is connected to practices of reward and punishment, which either happen or not.
The essential problem with the tablet is that it offers conclusions as a fait accompli, with no justification or argument. The point does not generalise against objective morality.
If you are serious about the unselfish bit, then surely it boils down to “what do they want” or “what do we want”.
I don’t accept the Moral Void argument, for the reasons given. Do you have another?
The idea that humans are uniquely motivated by human morality isn’t put forward as an answer to the amoralist challenge; it is put forward as a way of establishing something like moral objectivism.
“words should be used in such a way to maximize their usefulness in carving reality”
That does not mean that we should not use general words, but that we should have both general words and specific words. That is why it is right to speak of morality in general, and human morality in particular.
As I stated in other replies, it is not true that this disagreement is only about words. In general, when people disagree about how words should be used, that is because they disagree about what should be done. Because when you use words differently, you are likely to end up doing different things. And I gave concrete places where I disagree with Eliezer about what should be done, ways that correspond to how I disagree with him about morality.
In general I would describe the disagreement in the following way, although I agree that he would not accept this characterization: Eliezer believes that human values are intrinsically arbitrary. We just happen to value a certain set of things, and we might have happened to value some other random set. In whatever situation we found ourselves, we would have called those things “right,” and that would have been a name for the concrete values we had.
In contrast, I think that we value the things that are good for us. What is “good for us” is not arbitrary, but an objective fact about relationships between human nature and the world. Now there might well be other rational creatures and they might value other things. That will be because other things are good for them.
But not everything people value is actually good for them. You are retaining the problem of equating morality with values.
I agree that not everything in particular that people value is good for them. I say that everything that they value in a fundamental way is good for them. If you disagree, and think that some people value things that are bad for them in a fundamental way, how are they supposed to find out that those things are bad for them?
You are currently saying that the good is what people fundamentally value, and what people fundamentally value is good....for them. To escape vacuity, the second phrase would need to be cashed out as something like “side survival”.
But whose survival? If I fight for my tribe, I endanger my own survival; if I dodge the draft, I endanger my tribe’s.
Real world ethics has a pretty clear answer: the group wins every time. Bravery beats cowardice, generosity beats meanness… these are human universals. If you reverse-engineer that observation back into a theoretical understanding, you get the idea that morality is something programmed into individuals by communities to promote the survival and thriving of communities.
But that is a rather different claim to The Good is the Good.
Clarification please. How do you avoid this supposed vacuity applying to basically all definitions? Taking a quick definition from a Google Search: A: “I define a cat as a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws.” B: “Yes, but is that a cat?”
Which could eventually lead back to A saying that:
A: “Yes you’ve said all these things, but it basically comes back to the claim a cat is a cat.”
Definitions are at best a record of usage. Usage can be broadened to include social practices such as reward and punishment. And the jails are full of people who commit theft (selfishness), rape (ditto), etc. And the medals and plaudits go to the brave (altruism), the generous (ditto), etc.
I’m not sure how you’re addressing what I said. What do you mean by escaping vacuity? I used “good for them” in that comment because you did, when you said that not everything people value is good for them. I agree with that, if you mean the particular values that people have, but not in regard to their fundamental values.
Saying that something is morally good means “doing this thing, after considering all the factors, is good for me,” and saying that it is morally bad means “doing this thing, after considering all the factors, is bad for me.” Of course something might be somewhat good, without being morally good, because it is good according to some factors, but not after considering all of them. And of course whether or not it will benefit your communities is one of the factors.
I’m going to assume you mean what you say and are not just arguing about definitions. In that case:
You would be an apologist for HP Lovecraft’s Azathoth, at best, if you lived in his universe. There’s no objective criterion you could give to explain why that wouldn’t be moral, unless you beg the question and bring in moral criteria to judge a possible ‘ground of morality.’ Yes, I’m saying Nyarlathotep should follow morality instead of the supposed dictates of his alien god. And that’s not a contradiction but a tautology.
While I’m on the subject, Aquinian theology is an ugly vulgarization of Aristotle’s, the latter being more naturally linked to HPL’s Azathoth or the divine pirates of Pastafarianism.
I’m pretty sure this is not an attempt at discussion, but an attempt to be insulting, so I won’t discuss it.
I prefer Eliezer’s way because it makes evident, when talking to someone who hasn’t read the Sequence, that there are different set of self-consistent values, but it’s an agreement that people should have before starting to debate and I personally would have no problem in talking about different moralities.
But does he? Because that would be demonstrably false. Maybe arbitrary in the sense of “occupying a tiny space in the whole set of all possible values”, but since our morality is shaped by evolution, it will contain surely some historical accident but also a lot of useful heuristics.
No human can value drinking poison, for example.
If you were to unpack “good”, would you insert other meanings besides “what helps our survival”?
“There are different sets of self-consistent values.” This is true, but I do not agree that all logically possible sets of self-consistent values represent moralities. For example, it would be logically possible for an animal to value nothing but killing itself; but this does not represent a morality, because such an animal cannot exist in reality in a stable manner. It cannot come into existence in a natural way (namely by evolution) at all, even if you might be able to produce one artificially. If you do produce one artificially, it will just kill itself and then it will not exist.
This is part of what I was saying about how when people use words differently they hope to accomplish different things. I speak of morality in general, not to mean “logically consistent set of values”, but a set that could reasonably exist in the real word with a real intelligent being. In other words, restricting morality to human values is an indirect way of promoting the position that human values are arbitrary.
As I said, I don’t think Eliezer would accept that characterization of his position, and you give one reason why he would not. But he has a more general view where only some sets of values are possible for merely accidental reasons, namely because it just happens that things cannot evolve in other ways. I would say the contrary—it is not an accident that the value of killing yourself cannot evolve, but this is because killing yourself is bad.
And this kind of explains how “good” has to be unpacked. Good would be what tends to cause tendencies towards itself. Survival is one example, but not the only one, even if everything else will at least have to be consistent with that value. So e.g. not only is survival valued by intelligent creatures in all realistic conditions, but so is knowledge. So knowledge and survival are both good for all intelligent creatures. But since different creatures will produce their knowledge and survival in different ways, different things will be good for them in relation to these ends.
Any virulently self-reproducing meme would be another.
This would be a long discussion, but there’s some truth in that, and some falsehood.
They eat innocent, sentient beings who suffer and are terrified because of it. That’s wrong, no matter who does it.
It may not be un-baby-eater-ey, but it’s wrong.
Likewise, not eating babies is un-baby-eater-ey, no matter who does it. It might not be wrong, but it is un-baby-eater-ey.
We have two species who agree on the physical effects of certain actions. One species likes the effects of the action, and the other doesn’t. The difference between them is what they value.
“Right” just means “in harmony with this set of values.” Baby-eater-ey means “in harmony with this other set of values.”
There’s no contradiction in saying that something can be in harmony with one set of values and not in harmony with another set of values. Hence, there’s no contradiction in saying that eating babies is wrong, and is also baby-eater-ey. You can also note that the action is found compelling by one species and not compelling by another, and there is no contradiction in this, either.
What could “right” mean if we have “right according to these morals” AND “right according to these other, contradictory morals?”
I see one possibility: “right” is taken to mean “in harmony with any set of values.” Which, of course, makes it meaningless. Do you see another possibility?
I disagree that it is wrong for them to do that. And this is not just a disagreement about words: I disagree that Eliezer’s preferred outcome for the story is better than the other outcome.
“Right” is just another way of saying “good”, or anyway “reasonably judged to be good.” And good is the kind of thing which naturally results in desire. Note that I did not say it is “what is desired”, any more than you want to say that what someone values at a particular moment is necessarily right. I said it is what naturally results in desire. This definition is in fact very close to yours, except that I don’t make the whole universe revolve around human beings by saying that nothing is good except what is good for humans. And since different kinds of things naturally result in desire for different kinds of beings (e.g. humans and babyeaters), those different things are right for different kinds of beings.
That does not make “right” or “good” meaningless. It makes it relative to something. And this is an obvious fact about the meaning of the words; to speak of good is to speak of what is good for someone. This is not subjectivism, since it is an objective fact that some things are good for humans, and other things are good for other things.
Nor does this mean that right means “in harmony with any set of values.” It has to be in harmony with some real set of values, not an invented one, nor one that someone simply made up—for the same reasons that you do not allow human morals to be simply invented by a random individual.
Returning to the larger point, as I said, this is not just a disagreement about words, but about what is good. People maintaining your theory (like Eliezer) hope to optimize the universe for human values. I have no such hope, and I think it is a perverse idea in the first place.
No, moral rightness and wrongness have implications about rule following and rule breaking, reward and punishment, that moral goodness and badness don’t. Giving to charity is virtuous, but not giving to charity isn’t wrong and doesn’t deserve punishment.
Similarly, moral goodness and hedonic goodness are different.
I’m not sure what you’re saying. I would describe giving to charity as morally good without implying that not giving is morally evil.
I agree that moral goodness is different from hedonic goodness (which I assume means pleasure), but I would describe that by saying that pleasure is good in a certain way, but may or may not be good all things considered, while moral goodness means what is good all things considered.
I’m saying it’s a bad idea to collapse together the ideas of moral obligation, moral advisability and pleasure.
I agree.
I think I get it.
You’re saying that “right” just means “in harmony with any set of values held by sentient beings?”
So, baby-eating is right for baby-eaters, wrong for humans, and all either of those statements means is that they are/aren’t consistent with the fundamental values of the two species?
That is most of it. But again, I insist that the disagreement is real. Because Eliezer would want to stomp out baby-eater values from the cosmos. I would not.
Metaethically, I don’t see a disagreement between you and Eliezer. Ethically, I do.
Eliezer says he values babies not being eaten more than he values letting a sentient being eat babies just because it wants to.
You say you don’t, that’s all. Different values.
Are you serious, though? What if you had enough power to stop them from eating babies without having to kill them? Can we just give them fake babies?
I do not support “letting a sentient being eat babies just because it wants to” in general. So for example if there is a human who wants to eat babies, I would prevent that. But that is because it is bad for humans to eat babies. In the case of the babyeaters, it is by stipulation good for them.
That stipulation itself, by the way, is not really a reasonable one. Some species do sometimes eat babies, and it is possible that such a species could develop reason. But it is likely that the very process of developing reason would impede the eating of babies, and eating babies would become unusual, much as cannibalism is unusual in human societies. And just as cannibalism is wrong for humans, eating babies would become wrong for that species. But Eliezer makes the stipulation because, as I said, he believes that human values are intrinsically arbitrary, from an absolute standpoint.
So there is a metaethical disagreement. You could put it this way: I think that reality is fundamentally good, and therefore actually existing species will have fundamentally good values. Eliezer thinks that reality is fundamentally indifferent, and therefore actually existing species will have fundamentally indifferent values.
But given the stipulation, yes I am serious. And no I would not accept those solutions, unless those solutions were acceptable to them anyway—which would prove my point that eating babies was not actually good for them, and not actually a true part of their values.
When you say reality is fundamentally “good,” doesn’t that translate (in your terms) to just a tautology?
Aren’t you just saying that the desires of sentient beings are fundamentally “the desires of sentient beings?”
It sounds like you’re saying that you personally value sentient beings fulfilling their fundamental desires. Do you also value a sentient being fulfilling its fundamental desire to eliminate sentient beings that value sentient beings that fulfill their fundamental desires?
That is, if it wants to kill you because you value that, are you cool with that?
What do you do, in general, when values clash? You have some members of a species who want to eat their innocent, thinking children, and you have some innocent, thinking children who don’t want to be eaten. On what grounds do you side with the eaters?
“When you say reality is fundamentally ‘good,’ doesn’t that translate (in your terms) to just a tautology?” Sort of, but not quite.
“Aren’t you just saying that the desires of sentient beings are fundamentally ‘the desires of sentient beings’?” No.
First of all, the word “tautology” is vague. I know it is a tautology to say that red is red. But is it a tautology to say that two is an even number? That’s not clear. But if a tautology means that the subject and predicate mean the same thing, then saying that two is even is definitely not a tautology, because they don’t mean the same thing. And in that way, “reality is fundamentally good” is not a tautology, because “reality” does not have the same meaning as “good.”
Still, if you say that reality is fundamentally something, and you are right, there must be something similar to a tautology there. Because if there is nothing even like a tautology, you will be saying something false, as if you were to say that reality is fundamentally blue. That’s not a tautology at all, but it’s also false. But if what you say is true, then “being real” and “being that way” must be very deeply intertwined, and most likely even the meaning will be very close. Otherwise how would it turn out that reality is fundamentally that way?
I have remarked before that we get the idea of desire from certain feelings, but what makes us call it desire instead of a different feeling is not the subjective quality of the feeling, but the objective fact that when we feel that way, we tend to do a particular thing. E.g. when we are hungry, we tend to go and find food and eat it. So because we notice that we do that, we call that feeling a desire for food. Now this implies that the most important thing about the word “desire” is that it is a tendency to do something, not the fact that it is also a feeling.
So if we said, “everyone does what they desire to do,” it would mean something like “everyone does what they tend to do.” That is not a tautology, because you can occasionally do something that you do not generally tend to do, but it is very close to a tautology.
We get the idea of “good” from the fact that we are tending to do various things, and we assume that those various things must have something in common that explains why we are tending to do all of them. We call that common thing “good.”
Now you could say, “the common thing is that you desire all of those things.” But that is not the way the human mind is working here, whether it is right or wrong. We already know that we desire them all. We want to know “why” we desire them all. And we explain that by saying that they all have something that we call “goodness.” We know it explains our desires, but that does not mean we know anything else about it.
This is really the exact point where I disagree with Eliezer. I think he believes that the common thing is the desire, and there is no other explanation except for random facts in the world that are responsible for our individual desires and for desires generally common in the human species. I think that the natural intuition that there is another explanation is correct. Now you might want to ask, “then what is good, apart from ‘what explains our desires’”?
And I have already started to explain this in other comments, although I did not go into detail. I noted above that the most important thing about “desire” is that it is a tendency to do something. So likewise the most important thing about the word “good” is that it explains the tendency to do something. Now consider this fact about things: things tend to exist. And existing things tend to continue to exist. Why do they tend to do those things? In the first place, it is obvious why things tend to exist. Because they are real, and reality involves existence. And tending to continue to exist might be less obvious, but we can see that at least the particular reality of the thing is responsible for that tendency: why do rocks tend to continue to exist? Part of the reality of the rock (in this case its structure) is responsible for that tendency. It tends to continue to exist because of the reality it has.
In other words, the thing that explains why things tend to do things is reality itself. So reality is fundamentally good, that is, the explanation for why things tend to do the things they do is fundamentally their reality. Note that this last sentence is not a tautology, in that it has a distinct subject and predicate.
Richard Dawkins says that reality looks just as we would expect if it is fundamentally indifferent. And I am pretty sure Eliezer agrees with him about this. But in fact it does not look the way I would expect if it were fundamentally indifferent: I would expect in that situation that things would not have any tendencies at all, so all things would be random.
I will answer the things about my values in another comment.
“It sounds like you’re saying that you personally value sentient beings fulfilling their fundamental desires.” Yes.
“Do you also value a sentient being fulfilling its fundamental desire to eliminate sentient beings that value sentient beings that fulfill their fundamental desires?”
No sentient being has, or can have (at least in a normal way) that desire as a “fundamental desire.” It should be obvious why such a value cannot evolve, if you consider the matter physically. Considered from my point of view, it cannot evolve precisely because it is an evil desire.
Also, it is important here that we are speaking of “fundamental” desires, in that a particular sentient being sometimes has a particular desire for something bad, due to some kind of mistake or bad situation. (E.g. a murderer has the desire to kill someone, but that desire is not fundamental.)
“You have some members of a species who want to eat their innocent, thinking children, and you have some innocent, thinking children who don’t want to be eaten. On what grounds do you side with the eaters?”
As I said in another comment, the babyeater situation is contrived, and most likely it is impossible for those values to evolve in reality. But stipulating that they do, then the desires of the babies are not fundamental, because if the baby grows up and learns more about reality, it will say, “it would have been right to eat me.”
I am pretty sure that people even in the original context brought attention to the fact that there are a great many ways that we treat children in which they do not want to be treated, to which no one at all objects (e.g. no one objects if you prevent a child from running out into the street, even if it wants to. And that is because the desires are not fundamental.)
Your objection is really something like, “but that desire must be fundamental because everything has the fundamental desire not to be eaten.” Perhaps. But as I said, that simply means that the situation is contrived and false.
The situation can happen with an intelligent species and a non-intelligent species, and has happened on earth—e.g. people kill and eat other animals. And although I do not object to people doing this, and I think it is morally right, I do not take “sides,” because I would change the values neither of the people nor of the animals. Both desires are good, and the behavior on both sides is right (although technically we should not be speaking of right and wrong in respect to non-rational creatures.)
It probably could not happen with two intelligent species, if only for economic reasons.
I don’t know. I wonder if some extra visualization would help.
Would you help catch the children so that their parents could eat them? If they pleaded with you, would you really think “if you were to live, you would one day agree this was good, therefore it is good, even though you don’t currently believe it to be?”
Why say the important desire is the one the child will one day have, instead of the one that the adult used to have?
I would certainly be less interested in aliens obtaining what is good for them, than in humans obtaining what is good for them. However, that said, the basic response (given Eliezer’s stipulations), is yes, I would, and yes I would really think that.
The adult has not only changed his desire, he has changed his mind as well, and he has done that through a normal process of growing up. So (again given Eliezer’s stipulations), it is just as reasonable to believe the adults here as it is to believe human adults. It is not a question of talking about whose desire is important, but whose opinion is correct.
....a word which means a number of things, which are capable of conflicting with each other. Moral good refers to things that are beneficial at the group level, but which individuals tend not to do without encouragement.