Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.
No evidence is given for the central claim, that humans can converge, and are converging, towards a true morality we would all agree about if only we understood more true facts.
We’re told that people in the past disagreed with us about some moral questions, but we know more and so we changed our minds and we are right while they were wrong. But no direct evidence is given for us being more right. The only way to judge who’s right in a disagreement seems to be “the one who knows more relevant facts is more right” or “the one who more honestly and deeply considered the question”. This does not appear to be an objectively measurable criterion (to say the least).
The claim that ancients, like Roman soldiers, thought slavery was morally fine because they didn’t understand how much slaves suffer is frankly preposterous. Roman soldiers (and poor Roman citizens in general) were often enslaved, and some of them were later freed (or escaped from foreign captivity). Many Romans were freedmen or their descendants—some estimate that by the late Empire, almost all Roman citizens had at least some slave ancestors. And yet somehow these people, who both knew what slavery was like and were often in personal danger of it, did not think it immoral, while white Americans in no danger of enslavement campaigned for abolition.
I’m getting really sick of this claim that Eliezer says all humans would agree on some morality under extrapolation. That claim is how we get garbage like this. At no point do I recall Eliezer saying psychopaths would definitely become moral under extrapolation. He did speculate about them possibly accepting modification. But the paper linked here repeatedly talks about ways to deal with disagreements which persist under extrapolation:
In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. (emphasis added)
Coherence is not a simple question of a majority vote. Coherence will reflect the balance, concentration, and strength of individual volitions. A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity. The variables are quantitative, not qualitative.
(Naturally, Eugine Nier as “seer” downvoted all of my comments.)
The metaethics sequence does say IMNSHO that most humans’ extrapolated volitions (maybe 95%) would converge on a cluster of goals which include moral ones. It furthermore suggests that this would apply to the Romans if we chose the ‘right’ method of extrapolation, though here my understanding gets hazier. In any case, the preferences that we would loosely call ‘moral’ today, and that also survive some workable extrapolation, are what I seem to mean by “morality”.
One point about the ancient world: the Bhagavad Gita, produced by a warrior culture though seemingly not by the warrior caste, tells a story of the hero Arjuna refusing to fight until his friend Krishna convinces him. Arjuna doesn’t change his mind simply because of arguments about duty. In the climax, Krishna assumes his true form as a god of death with infinitely many heads and jaws, saying, ‘I will eat all of these people regardless of what you do. The only deed you can truly accomplish is to follow your warrior duty or dharma.’ This view seems plainly environment-dependent.
I’ve simplified it a bit for the sake of brevity and comprehension of the central idea, but yeah, it’s probably right to say that humans are all born with ABOUT the same morality equation. And also true that psychopaths’ equation is further away than most people’s.
I don’t think it being unfalsifiable is a problem. I think this is more of a definition than a derivation. Morality is a fuzzy concept that we have intuitions about, and we like to formalize these sorts of things into definitions. This can’t be disproven any more than the definition of a triangle can be disproven.
What needs to be done instead is to show that the definition is incoherent or that it doesn’t match our intuition.
Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it’s right or wrong is to look for evidence?
If it was incoherent or contradicted itself, it wouldn’t even need evidence to be disproven; we would already know it’s wrong. Have I avoided being wrong in that way?
(by the way, understanding slavery might be necessary, but not sufficient to get someone to be against it. They might also need to figure out that people are equal, too. Good point, I might need to add that note into the post).
Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it’s right or wrong is to look for evidence?
You do understand that debates about objective vs relative morality have been going on for millennia?
They might also need to figure out that people are equal, too
No, they don’t if they themselves are in danger of becoming slaves. Notably, a major source of slaves in the Ancient world was defeated armies. Slaves weren’t clearly different people (like the blacks were in America), anyone could become a slave if his luck turned out to be really bad.
Right. Someone could be against slavery for THEM personally without being against slavery in general if they didn’t realize that what was wrong for them was also wrong for others. That’s all I’m getting at, there.
Or do you mean that they should have opposed slavery for everybody as a sort of game theory move to reduce their chance of ever becoming a slave?
“You do understand that debates about objective vs relative morality have been going on for millennia?”
What I’m getting at here is that most moral theories are so bad you don’t even need to talk about evidence. You can show them to be wrong just because they’re incoherent or self-contradictory.
It’s a pretty low standard, but I’m asking if this theory is at least coherent and consistent enough that you have to look at evidence to know if it’s wrong, instead of just pointing at its self-defeating nature to show it’s wrong. If so, yay, it might be the best I’ve ever seen. :)
Someone could be against slavery for THEM personally without being against slavery in general if they didn’t realize that what was wrong for them was also wrong for others.
Huh? I’m against going to jail personally without being against the idea of jail in general. In any case, wasn’t your original argument that ancient Greeks and Romans just didn’t understand what it means to be a slave? That clearly does not hold.
most moral theories are so bad you don’t even need to talk about evidence. You can show them to be wrong just because they’re incoherent or self-contradictory.
Do you mean descriptive or prescriptive moral theories? If descriptive, humans are incoherent and self-contradictory.
Which moral theories do you have in mind? A few examples will help.
Mmm, that’s not quite the right abstraction. You’re probably against innocents going to jail in general, no?
Whereas some Roman might not care, as long as it’s no one they care about.
All I’m getting at is that the Romans didn’t think certain things were wrong, but if they were shown in a sufficiently deep way everything we know, they would be moved by it, whereas if we were shown everything they know, we would not find it persuasive of their position. Neither would they, after they had seen what we’ve seen.
I’m talking metaethics, what makes something moral, what it means for something to be moral. Failed ones include divine command theory, the “whatever contributes to human flourishing” idea, whatever makes people happy, whatever matches some platonic ideals out there somehow, whatever leads to selfish interest, etc.
if they were shown in a sufficiently deep way everything we know, they would be moved by it
That doesn’t seem obvious to me at all.
Let’s try it on gay marriage. Romans certainly knew and practiced homosexuality, same for marriage. What knowledge exactly do you want to convey to them to persuade them that gay marriage is a good thing?
I’m talking metaethics, what makes something moral
So, prescriptive. I am not sure in which way you consider the theories “failed”. In the sense that they have not risen to the status of physics, i.e. being able to empirically prove all their claims? That doesn’t look to be a viable criterion. In the sense of not having taken over the world? I don’t know, the divine command theory is (or, at least, has been) pretty good at that. You probably wouldn’t want a single theory to take over the world, anyway.
What knowledge exactly do you want to convey to them to persuade them that gay marriage is a good thing?
Kind of a weird example, but I’ll assume we’re talking about the Praetorian Guard. The Romans seem to have had very little respect for women and for being penetrated. So right off the bat, having them read a lot of women’s minds might change their views. (I’m not sure if I want to classify that as knowledge, though.) They likely also have false beliefs not only about women but about the gods and stable societies. None of this seems like a cure-all, but it does seem extremely promising.
I think hairyfigment is of the belief that the Romans (and in the most coherent version of his claim you would have to say both male and female Romans) were under misconceptions about the nature of male and female minds, and believes that “a sufficiently deep way” would mean correcting all these misconceptions.
My view is that we really can’t say that as things stand. We’d have to know a lot more about the Roman beliefs about the male and female minds, and compare them against what we know to be accurate about male and female minds.
I was trying to say with my second paragraph that we specifically cannot be sure about that. My first paragraph was simply my best effort at interpreting what I think hairyfigment thinks, not a statement of what I believe to be true.
From my vague recollections I think the idea is worth looking up one way or the other. After all, a massive portion of modern culture is under the impression there are no gender differences and there are other instances of clear major misconceptions I actually can attest to throughout history. But I don’t have any idea with the Romans.
After all, a massive portion of modern culture is under the impression there are no gender differences
That’s the stupid portion of modern culture, and I’m not sure they actually, um, practice that belief. Here’s a quick suggestion: make competitive sports sex-blind :-/
Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it’s right or wrong is to look for evidence?
Yes, I think it is coherent.
Ideological Turing test: I think your theory is this: there is some set of values, which we shall call Morals. All humans have somewhat different sets of lower-case morals. When people make moral mistakes, they can be corrected by learning or internalizing some relevant truths (which may of course be different in each case). These truths can convince even actual humans to change their moral values for the better (as opposed to values changing only over generations), as long as these humans honestly and thoroughly consider and internalize the truths. Over historical time, humans have approached closer to true Morals, and we can hope to come yet closer, because we generally collect more and more truths over time.
the way to find out if it’s right or wrong is to look for evidence?
If you mean you don’t have any evidence for your theory yet, then how or why did you come by this theory? What facts are you trying to explain or predict with it?
Remember that by default, theories with no evidence for them (and no unexplained facts we’re looking for a theory about) shouldn’t even rise to the level of conscious consideration. It’s far, far more likely that if a theory like that comes to mind, it’s due to motivated reasoning. For example, wanting to claim your morality is better by some objective measure than that of other people, like slavers.
by the way, understanding slavery might be necessary, but not sufficient to get someone to be against it. They might also need to figure out that people are equal, too.
That’s begging the question. Believing that “people are equal” is precisely the moral belief that you hold and ancient Romans didn’t. Not holding slaves is merely one of many results of having that belief; it’s not a separate moral belief.
But why should Romans come to believe that people are equal? What sort of factual knowledge could lead someone to such a belief, despite the usually accepted idea that should cannot be derived from is?
This is an explanation of Yudkowsky’s idea from the metaethics sequence. I’m just trying to make it accessible in language and length with lots of concept handles and examples.
Technically, you could believe that people are equally allowed to be enslaved. All people equal + it’s wrong to make me a slave = it’s wrong to make anyone a slave.
“All men are created equal” emerges from two or more basic principles people are born with. You might say: “Look, you have value, yah? And your loved ones? Would they stop having value if you forgot about them? No? They have value whether or not you know them? How did you conclude they have value? Could that have happened with other people, too? Would you then think they had value? Would they stop having value if you didn’t know them? No? Well, you don’t know them; do they have value?”
You take “people I care about have value” (born with it) and combine it with “be consistent” (also born with), and you get “everyone has value.”
That’s the idea in principle, anyway. You take some things people are all born with, and they combine to make the moral insights people can figure out and teach each other, just like we do with math.
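If it helps, the bare skeleton of that argument can be written out formally. To be clear, this is my own sketch, not anything from the sequence: `Person`, `Valuable`, and `Loved` are stand-in names I’m inventing, and premise 2 is the contestable step, since it cashes out “value doesn’t depend on being known by me” as “anyone could stand in the loved-one slot.”

```lean
-- Toy formalization of the universalization argument; all names invented.
variable {Person : Type}

-- h1 : the innate premise, "people someone loves have value"
-- h2 : the consistency step, idealized here as "anyone could be loved"
example (Valuable Loved : Person → Prop)
    (h1 : ∀ p, Loved p → Valuable p)
    (h2 : ∀ p, Loved p) :
    ∀ p, Valuable p :=
  fun p => h1 p (h2 p)
```

Reject h2 and the conclusion doesn’t follow, which is exactly where an objector would push back.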
Technically, you could believe that people are equally allowed to be enslaved.
In a sense, the ancient Romans did believe this. Anyone who ended up in the same situation—either taken as a war captive or unable to pay their debts—was liable to be sold as a slave. So what makes you think your position is objectively better than theirs?
“All men are created equal” emerges from two or more basic principles people are born with. You might say: “Look, you have value, yah? And your loved ones? Would they stop having value if you forgot about them? No? They have value whether or not you know them? How did you conclude they have value? Could that have happened with other people, too? Would you then think they had value? Would they stop having value if you didn’t know them? No? Well, you don’t know them; do they have value?”
This assumes without argument that “value” is something people intrinsically have or can have. If instead you view value as value-to-someone, i.e. I value my loved ones, but someone else might not value them, then there is no problem.
And it turns out that yes, most people did not have an intuition that anyone has intrinsic value just by virtue of being human. Most people throughout history assigned value only to ingroup members, to the rich and powerful, and to personally valued individuals. The idea that people are intrinsically valuable is historically very new, still in the minority today globally, and for both these reasons doesn’t seem like an idea everyone should naturally arrive at if they only try to universalize their intuitions a bit.
Technically, you could believe that people are equally allowed to be enslaved. All people equal + it’s wrong to make me a slave = it’s wrong to make anyone a slave.
Would this be an accurate summary of what you think the meta-ethics sequence says? I feel that you captured the important bits but I also feel that we disagree on some aspects:
values that motivate actions (the set of concepts that agents care about) are two-place computations, one place for a class of beings (and possibly other parameters locating them) and the other for individual beings.
If V(Humans, Alice) =/= V(Humans, ·), that doesn’t make morality subjective; rather, it indicates that Alice is behaving immorally (see the sketch below).
V(Humans, ·) (= morality) exists objectively insofar as it is a computation instantiated by a class of agents at some point in time, but it is not a property of the world independent of the existence of any agents calculating it.
Morality is there because of evolution, and it happens to be a complicated and somewhat unexplored landscape, which means that it’s also fragile and possibly no one has a hold of its entirety.
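To make the two-place reading concrete, here is a toy sketch; every name and number in it is invented for illustration, and it’s my gloss rather than anything from the sequence:

```python
# Toy model of V(class, individual). All values here are made up.

# Species-level value weights: the idealized computation V(species, ·).
SPECIES_VALUES = {
    "Humans": {"feed starving children": 10, "keep slaves": -10},
    "Babyeaters": {"feed starving children": 0, "eat babies": 10},
}

# Individual deviations: how one being's instantiation differs from the ideal.
INDIVIDUAL_ERRORS = {"Alice": {"keep slaves": 5}}

def V(species, individual=None):
    """V(species) is the species computation; V(species, individual) is that
    computation as (imperfectly) instantiated in one particular being."""
    values = dict(SPECIES_VALUES[species])
    if individual is not None:
        for act, delta in INDIVIDUAL_ERRORS.get(individual, {}).items():
            values[act] = values.get(act, 0) + delta
    return values

# V("Humans", "Alice") != V("Humans") doesn't make morality subjective;
# on this reading it just marks Alice's computation as deviating, i.e. immoral.
assert V("Humans", "Alice") != V("Humans")
```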
Except that something is moral whether any being cares about morality or not, just like something is prime regardless of whether or not anyone cares about primality.
It’s not that morality is there because of evolution, but that beings who CARE about morality are there because of evolution.
I’m not sure what you mean by fragile morality, but since you’ve gotten pretty much everything right, I suspect you’ve got the right idea, there, too.
Except that something is moral whether any being cares about morality or not, just like something is prime regardless of whether or not anyone cares about primality.
And what happens when you plug in MrMind’s claim that there are multiple species-specific moralities? Doesn’t that mean that every action is both moral and immoral from multiple perspectives?
I think we’ve ceased to argue about anything but definitions.
Cut out “morality” and get:
Different species have different sets of values they respond to. Every action is valued according to some such sets of values and not valued or negatively valued by other sets of values.
You can call any set of values “a” morality if you want, but I think that ceases to refer to what we’re talking about when we say something is moral whether anybody values it or not.
I’m not advocating the idea that morality is value, I am examining the implications of what other people have said.
You wrote an article purporting to explain the Yudkowskian theory of morality, and, indeed, the one true theory of morality, since the two are the same.
Hypothetically, making a few comments about value, and nothing but value, doesn’t do what is advertised on the label. The reader would need to know how value relates back to morality.
And in fact you supplied the rather definitional sounding statement that Morality is Values.
If you base an argument on a definition, don’t be surprised if people argue about it. The alternative, where someone can stipulate a definition but no one can challenge it, is a game that will always be won by the first to move.
Except that something is moral whether any being cares about morality or not, just like something is prime regardless of whether or not anyone cares about primality.
And what happens when you plug in MrMind’s claim that there are multiple species-specific moralities? Doesn’t that mean that every action is both moral and immoral from multiple perspectives?
Unpacking “should” as “morally obligated to” is potentially helpful insofar as you can give separate accounts of “moral” and “obligatory”.
The elves are not moral. Not just because I, and humans like me happen to disagree with them, no, certainly not. The elves aren’t even trying to be moral. They don’t even claim to be moral. They don’t care about morality. They care about “The Christmas Spirit,” which is about eggnog and stuff
That doesn’t generalise to the point that non-humans have no morality. You have made things too easy on yourself by having the elves concede that the Christmas Spirit isn’t morality. You need to put forward some criteria for morality and show that the Christmas Spirit doesn’t fulfil them. (One of the odd things about the Yudkowskian theory is that he doesn’t feel the need to show that human values are the best match to some pretheoretic notion of morality; he instead jumps straight to the conclusion.)
The hard case would be some dwarves, say, who have a behavioural code different from our own, and who haven’t conceded that they are amoral.
Maybe they have a custom whereby any dwarf who hits a rich seam of ore has to raise a cry to let other dwarves have a share, and any dwarf who doesn’t do this is criticised and shunned. If their code of conduct passes the duck test (it is regarded as obligatory, involves praise and blame, and so on), why isn’t that a moral system?
This is so weird to them that they’d probably just think of it as…ehh, what? Just weird. They couldn’t care less. Why on earth would they give food to millions of starving children? What possible reason…who even cares?
If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point… morality means what you should care about, not what you happen to do.
Morality needs to be motivating, and rubber stamping your existing values as moral achieves that, but being motivating is not sufficient. A theory of morality also needs to be able to answer the Open Question objection, meaning in this case, the objection that it is not obvious that you should value something just because you do.
So, to say the elves have their own “morality,” is not quite right. The elves have their own set of things that they care about instead of morality
That is arguing from the point that morality is a label for whatever humans care about, not toward it.
This helps us see the other problem, when people say that “different people at different times in history have been okay with different things, who can say who’s really right?”
There are many ways of refuting relativism, and most don’t involve the claim that humans are uniquely moral.
Morality is a fixed thing. Frozen, if you will. It doesn’t change.
It is human value, or it is fixed. Choose one. Humans have valued many different things. One of the problems with the rubber-stamping approach is that things the audience will see as immoral, such as slavery and the subjugation of women, have been part of human value.
Rather, humans change. Humans either do or don’t do the moral thing. If they do something else, that doesn’t change morality, but rather, it just means that that human is doing an immoral thing.
If that is true, then you need to stop saying that morality is human values, and start saying morality is human values at time T. And justify the selection of time, etc. And even at that, you won’t support your other claims, because what you need to prove is that morality is unique, that only one thing can fulfil the role.
Rather, humans happen to care about moral things. If they start to care about different things, like slavery, that doesn’t make slavery moral, it just means that humans have stopped caring about moral things.
If it is possible for human values to diverge from morality, then something else must define morality, because human values can’t diverge from human values. So you are not using a stipulative definition here, although you are when you argue that elves can’t be moral. Here, you and Yudkowsky have noticed that your theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral (slavery, torture, whatever), then there’s no fixed standard of morality. The label “moral” has been placed on a moving target. (Standard relativism usually has this problem synchronously, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)
So, when humans disagree about what’s moral, there’s a definite answer.
There is from many perspectives, but given that human values can differ, you get no definite answer by defining morality as human value. You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic: God’s commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don’t think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labeling values as moral, like the original theory.
How do we find that moral answer, then? Unfortunately, there is no simple answer
Why doesn’t that constitute an admission that you don’t actually have a theory of morality?
You see, we don’t know all the pieces of morality, not so we can write them down on paper. And even if we knew all the pieces, we’d still have to weigh which ones are worth how much compared to each other.
On the assumption that all human value gets thrown into the equation, it certainly would be complex. But not everyone has that problem, since people have criteria for some things being moral and others not being, which simplify the equation and allow you to answer the questions you were struggling with above. You know, you don’t have to pursue assumptions to their illogical conclusions.
Humans all care about the same set of things (in the sense I’ve been talking about). Does this seem contradictory? After all, we all know humans do not agree about what’s right and wrong; they clearly do not all care about the same things.
On the face of it, it’s contradictory. There may be something else that smooths out the contradictions, such as the Moral Equation, but that needs justification of its own.
Well, they do. Humans are born with the same Morality Equation in their brains, with them since birth.
Is that a fact? It’s eminently naturalistic, but the flip side to that is that it is, therefore, empirically refutable. If an individual’s Morality Equation is just how their moral intuition works, then the evidence indicates that intuitions can vary enough to start a war or two. So the Morality Equation appears not to be conveniently the same in everybody.
How then all their disagreements? There are three ways for humans to disagree about morals, even though they’re all born with the same morality equation in their heads (1: don’t do it; 2: don’t do it right; 3: don’t want to do it)
What does it mean to do it wrong, if the moral equation is just a label for black-box intuitive reasoning? If you had an external standard, as utilitarians and others do, then you could determine whose use of intuition is right according to it. But in the absence of an external standard, you could have a situation where both parties intuit differently, and both swear they are taking all factors into account. Given such a stalemate, how do you tell who is right? It would be convenient if the only variations in the output of the Morality Equation were caused by variations in the input, but you cannot assume something is true just because it would be convenient.
If the Moral Equation is something ideal and abstract, why can’t aliens partake? That model of ethics is just what’s needed to explain how you can have multiple varieties of object-level morality that actually all are morality: different values fed into the same equation produce different results, so object-level morality varies although the underlying principle is the same.
Okay. By saying “If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point… morality means what you should care about, not what you happen to do.”
it seems you have not understood the idea. Were there any parts of the post that seemed unclear that you think I might make clearer?
Because the whole point is that to say something is moral = you should do it = it is valued according to the morality equation.
For an Elf to agree something is moral is also to agree that they should do it. When I say they agree it’s moral and don’t care, that also means they agree they should do it and don’t care.
Something being Christmas Spirit-ey = you spiritould do it. Humans might agree that something is Christmas Spirit-ey, and agree that they spiritould do it; they just don’t care about what they spiritould do, they only care about what they should do.
moral is to Christmas spiritey what “should” is to (make up a word like) “spiritould”
Obligatory is just a kind of “should.” Elves agree that some things are obligatory, and don’t care, they care about what’s ochristmastory.
Likewise, to say that today’s morality equation is the “best” is to say that today’s morality equation is the equation which is most like today’s morality equation. Tautology.
Best = most good, and good = valued by the morality equation.
it seems you have not understood the idea. Were there any parts of the post that seemed unclear that you think I might make clearer?
Almost everything. You explain morality by putting forward one theory. Under those circumstances, most people would expect to see some critique of other theories, and explanation of why your theory is the One True Theory. You don’t do the first, and it is not clear that you are even trying to do the second.
Because the whole point is that to say something is moral = you should do it = it is valued according to the morality equation.
And to say that only humans have morality. But if there is something the Elves should do, then morality applies to them, contradicting that claim.
For an Elf to agree something is moral is also to agree that they should do it. When I say they agree it’s moral and don’t care, that also means they agree they should do it and don’t care.
That doesn’t help. For one thing, humans don’t exactly want to be moral... their moral fibre has to be buttressed by various punishments and rewards. For another, “should” and “want to” are not synonyms, but “moral” and “what you should do” are. So if there is something the Elves should do, at that point you have established that morality applies to the Elves, and the fact that they don’t want to do it is a side-issue. (And of course they could tweak their own motivations by constructing punishments and rewards.)
Something being Christmas Spirit-ey = you spiritould do it. Humans might agree that something is Christmas Spirit-ey, and agree that they spiritould do it; they just don’t care about what they spiritould do, they only care about what they should do.
OK. Now you seem to be saying, without quite making it explicit of course, that morality is by definition unique to humans, because the word “moral” just labels what motivates humans, in the way that “Earth” or “Terra” labels the planet where humans live. That claim isn’t completely incomprehensible, it’s just strange and arbitrary, and what is considerably stranger is the way you feel no need to defend it against alternative theories—the main alternative being that morality is multiply instantiable, that other civilisations could have their own versions, in the way they could have their own versions of houses or money.
You state it as though it is obvious, yet it has gone unnoticed for thousands of years.
Suppose I were to announce that dark matter is angels’ tears. Doesn’t it need some expansion? That’s how your claim reads; that’s the outside view.
Obligatory is just a kind of “should.” Elves agree that some things are obligatory, and don’t care, they care about what’s ochristmastory.
Obligatory is a kind of “should” that shouldn’t be overridden by other considerations. (A failure to do what is obligatory is possible, of course, but it is important to remember that it is seen as a lapse, as something wrong, not a valid choice.) Yet the Elves are overriding it, casting doubt on whether they have actually understood the concept of “obligatory”.
Likewise, to say that today’s morality equation is the “best” is to say that today’s morality equation is the equation which is most like today’s morality equation. Tautology.
Since anyone can say that at any time, that breaks the meaning of “best”, which is supposed to pick out something unique. That would be a reductio ad absurdum of your own theory.
Every possible creature, and every process of physics SHOULD do XYZ. But practically nothing is moved by that fact.
This sentence means: It is highly valued in the morality equation for XYZ to be the state of affairs, independently of who/what causes it to be so.
Likewise, everything Spiritould do ABC, but only Elves are moved by that fact.
These are objective equations which apply to everything. To say should, spiritould, clipperould, etc., is just to say about different things that they are valued by this equation or that one. It’s an objective truth that they are valued by this equation or that one.
It’s just that humans are not moved by almost any of the possible equations. They ARE moved by the morality equation.
Humans and Elves should AND spiritould do whatever. They are both equally obligated and ochristmasated. But one species finds one of those facts moving and not the other, and the other finds the other moving and not the one.
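Maybe a toy model makes this concrete (all the names and scores below are mine, made up purely for illustration):

```python
# Toy model: many fixed "shouldness equations," each an objective function.

def should(action):       # the morality equation
    return {"give food to starving children": 10, "eat babies": -10}.get(action, 0)

def spiritould(action):   # the Christmas Spirit equation
    return {"share eggnog": 10}.get(action, 0)

# Each equation's outputs are objective facts, whoever computes them.
# What varies by species is which equation is hooked up to motivation.
MOTIVATION = {"human": should, "elf": spiritould}

def moved_by(species, action):
    return MOTIVATION[species](action) > 0

# Both facts hold objectively; each species only acts on one of them:
assert should("give food to starving children") == 10   # true for elves too
assert not moved_by("elf", "give food to starving children")
assert moved_by("elf", "share eggnog")
```

The equations themselves are the same for everybody; the only species-relative fact is which equation is wired up to motivation.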
It is not a clear expression of something that can be seen to work.
Version 1.
I am obligated to both do and not do any number of acts by any number of shouldness-equations
If that is the case, anything resembling objectivism is out of the window. If I am obligated to do X, and I do X, then my action is right. If I am obligated not to do X, and I do X, my action is wrong. If I am both obligated and not obligated to do X, then my action is somehow both right and wrong; that is, it has no definite moral status.
But that’s not quite what you were saying.
Version 2.
There are lots of different kinds of morality, but I am only obligated by human morality.
That would work, but it’s not what you mean. You are explicitly embracing...
Version 3.
There are lots of different kinds of morality, but I am only motivated by human morality
There’s only one word of difference between that and version 2, which is the substitution of “motivated” for “obligated”. As we saw under version 1, it’s the existence of multiple conflicting obligations which stymies ethical objectivism. And motivation can’t fix that problem, because it is a different thing to obligation. In fact it is orthogonal, because:
You can be motivated to do what you are not obligated to do.
You can be obligated to do what you are not motivated to do.
Or both.
Or neither.
Because of that, version 3 implies version 1, and has the same problem.
If you are interested, I might recommend trying to write up what you think this idea is, and see if you find any holes in your understanding that way. I’m not sure how to make it any clearer right now, but, for what it’s worth, you have my word that you have not understood the idea.
We are not disagreeing about something we both understand; you are disagreeing with a series of ideas you think I hold, and I am trying to explain the original idea in a way that you find understandable and, apparently, not yet succeeding.
If you are interested, I might recommend trying to write up what you think this idea is, and see if you find any holes in your understanding that way.
I believe I just did something like that. Of course, I attributed the holes to the theory not working. If you want me to attribute them to my not having understood you, you need to put forward a version that works.
All of this is why Eliezer’s morality sequence is wrong. Version 2 is basically right. The Baby-Eaters were not immoral, but moral, according to a different morality. That is not subjectivism, because it is an objective fact that Baby-Eaters are what they are, and are obligated by Baby-Eater morality, and humans are humans, and are obligated by human morality.
But Eliezer (and Bound-Up) do not admit this, nonsensically asserting that non-humans should be obligated by human morality.
To be honest, Eliezer made a slightly different argument: 1) humans share (because of evolution) a psychological unity that is not affected by regional or temporal distinctions; 2) this unity entails a set of values that is inescapable for every human being, and its collective effect on human cognition and actions is what we dub “morality”; 3) Clippy, Elves and Pebblesorters, being fundamentally different, share a different set of values that guide their actions and what they care about; 4) those are perfectly coherent and sound for those who entertain them, but we should not call them “Clippy’s, Elves’ or Pebblesorters’ morality”, because words should be used in such a way as to maximize their usefulness in carving reality: since we cannot go outside our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term “morality” for those and instead use “primality” or other words. That’s it: you can debate any single point, but I think the difference is only formal. The underlying understanding, that a “motivating set of values” is a two-place predicate, is the same; Yudkowsky just preferred to use different words for different partially applied predicates, on the grounds of points 1 and 4.
those are perfectly coherent and sound for those who entertain them, but we should not call them “Clippy’s, Elves’ or Pebblesorters’ morality”, because words should be used in such a way as to maximize their usefulness in carving reality: since we cannot go outside our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term “morality” for those and instead use “primality” or other words.
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me. And yo mama ain’t no Mama cause she ain’t my Mama!
Yudkowsky isn’t being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
And it’s not like the issue isn’t important, either: obviously the permissibility of imposing one’s values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reason that you are differently mothered, not unmothered.
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me.
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
Yudkowsky isn’t being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
On this we surely agree, I just find the new rule better than the old one. But this is the least important part of the whole discussion.
obviously the permissibility of imposing one’s values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reason that you are differently mothered, not unmothered.
This is well explored in “Three Worlds Collide”. Yudkowsky’s vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I’m using your convention). When different worlds collide, it is moral for us to stop the babyeaters from eating babies, and it is moral for the superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact at all.
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
That seems different to what you were saying before.
This is well explored in “Three Worlds Collide”. Yudkowsky’s vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I’m using your convention). When different worlds collide, it is moral for us to stop the babyeaters from eating babies, and it is moral for the superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact at all.
There’s not much objectivity in that.
Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.
Maybe we should be abandoning the objectivity requirement as impossible. As I understand it this is in fact core to Yudkowsky’s theory- an “objective” morality would be the tablet he refers to as something to ignore.
I’m not entirely on Yudkowsky’s side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is “What do I want?”. There is the prospect of coordination through shared moral wants, but there is the prospect of coordination through shared selfish wants as well. Ideas of “the good of society” or “objective ethical truth” are simply flawed concepts.
But I do think Yudkowsky has a good point both of you have been ignoring. His stone tablet analogy, if I remember correctly, sums it up.
“I think Eliezer is correct in showing that the only solution is avoiding contact at all.”: Assumes that there is such a thing as an objective solution, if implicitly.
“The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.”: Passenger and cargo ships both have purposes within human morality. Alien moralities are likely to contradict each other.
“There’s not much objectivity in that.”: What if objectivity in the sense you describe is impossible?
“Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.”: If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
Maybe we should be abandoning the objectivity requirement as impossible.
Maybe we should also consider in parallel the question of whether objectivity is necessary. If objectivity is both necessary to morality and impossible, then nihilism results.
The basic, pragmatic argument for the objectivity or quasi-objectivity of ethics is that it is connected to practices of reward and punishment, which either happen or not.
As I understand it this is in fact core to Yudkowsky’s theory- an “objective” morality would be the tablet he refers to as something to ignore.
I’m not entirely on Yudkowsky’s side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is “What do I want?”.
If you are serious about the unselfish bit, then surely it boils down to “what do they want” or “what do we want”.
What if objectivity in the sense you describe is impossible?
I don’t accept the Moral Void argument, for the reasons given. Do you have another?
If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
The idea that humans are uniquely motivated by human morality isn’t put forward as an answer to the amoralist challenge; it is put forward as a way of establishing something like moral objectivism.
“words should be used in such a way to maximize their usefulness in carving reality”
That does not mean that we should not use general words, but that we should have both general words and specific words. That is why it is right to speak of morality in general, and human morality in particular.
As I stated in other replies, it is not true that this disagreement is only about words. In general, when people disagree about how words should be used, that is because they disagree about what should be done. Because when you use words differently, you are likely to end up doing different things. And I gave concrete places where I disagree with Eliezer about what should be done, ways that correspond to how I disagree with him about morality.
In general I would describe the disagreement in the following way, although I agree that he would not accept this characterization: Eliezer believes that human values are intrinsically arbitrary. We just happen to value a certain set of things, and we might have happened to value some other random set. In whatever situation we found ourselves, we would have called those things “right,” and that would have been a name for the concrete values we had.
In contrast, I think that we value the things that are good for us. What is “good for us” is not arbitrary, but an objective fact about relationships between human nature and the world. Now there might well be other rational creatures and they might value other things. That will be because other things are good for them.
I agree that not everything in particular that people value is good for them. I say that everything that they value in a fundamental way is good for them. If you disagree, and think that some people value things that are bad for them in a fundamental way, how are they supposed to find out that those things are bad for them?
You are currently saying that the good is what people fundamentally value, and what people fundamentally value is good... for them. To escape vacuity, the second phrase would need to be cashed out as something like “aids survival”.
But whose survival? If I fight for my tribe, I endanger my own survival; if I dodge the draft, I endanger my tribe’s.
Real-world ethics has a pretty clear answer: the group wins every time. Bravery beats cowardice, generosity beats meanness... these are human universals. If you reverse-engineer that observation back into a theoretical understanding, you get the idea that morality is something programmed into individuals by communities to promote the survival and thriving of communities.
But that is a rather different claim to The Good is the Good.
Clarification please. How do you avoid this supposed vacuity applying to basically all definitions? Taking a quick definition from a Google Search:
A: “I define a cat as a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws.”
B: “Yes, but is that a cat?”
Which could eventually lead back to A saying that:
A: “Yes you’ve said all these things, but it basically comes back to the claim a cat is a cat.”
Definitions are at best a record of usage. Usage can be broadened to include social practices such as reward and punishment. And the jails are full of people who commit theft (selfishness), rape (ditto), etc. And the medals and plaudits go to the brave (altruism), the generous (ditto), etc.
I’m not sure how you’re addressing what I said. What do you mean by escaping vacuity? I used “good for them” in that comment because you did, when you said that not everything people value is good for them. I agree with that, if you mean the particular values that people have, but not in regard to their fundamental values.
Saying that something is morally good means “doing this thing, after considering all the factors, is good for me,” and saying that it is morally bad means “doing this thing, after considering all the factors, is bad for me.” Of course something might be somewhat good, without being morally good, because it is good according to some factors, but not after considering all of them. And of course whether or not it will benefit your communities is one of the factors.
I’m going to assume you mean what you say and are not just arguing about definitions. In that case:
You would be an apologist for HP Lovecraft’s Azathoth, at best, if you lived in his universe. There’s no objective criterion you could give to explain why that wouldn’t be moral, unless you beg the question and bring in moral criteria to judge a possible ‘ground of morality.’ Yes, I’m saying Nyarlathotep should follow morality instead of the supposed dictates of his alien god. And that’s not a contradiction but a tautology.
While I’m on the subject, Aquinian theology is an ugly vulgarization of Aristotle’s, the latter being more naturally linked to HPL’s Azathoth or the divine pirates of Pastafarianism.
That is why it is right to speak of morality in general, and human morality in particular.
I prefer Eliezer’s way because it makes evident, when talking to someone who hasn’t read the Sequence, that there are different set of self-consistent values, but it’s an agreement that people should have before starting to debate and I personally would have no problem in talking about different moralities.
Eliezer believes that human values are intrinsically arbitrary
But does he? Because that would be demonstrably false. Maybe arbitrary in the sense of “occupying a tiny space in the whole set of all possible values”, but since our morality is shaped by evolution, it will surely contain some historical accidents but also a lot of useful heuristics. No human can value drinking poison, for example.
What is “good for us” is not arbitrary, but an objective fact about relationships between human nature and the world
If you were to unpack “good”, would you insert other meanings besides “what helps our survival”?
“There are different sets of self-consistent values.” This is true, but I do not agree that all logically possible sets of self-consistent values represent moralities. For example, it would be logically possible for an animal to value nothing but killing itself; but this does not represent a morality, because such an animal cannot exist in reality in a stable manner. It cannot come into existence in a natural way (namely by evolution) at all, even if you might be able to produce one artificially. If you do produce one artificially, it will just kill itself and then it will not exist.
This is part of what I was saying about how when people use words differently they hope to accomplish different things. I speak of morality in general, not to mean “logically consistent set of values”, but a set that could reasonably exist in the real word with a real intelligent being. In other words, restricting morality to human values is an indirect way of promoting the position that human values are arbitrary.
As I said, I don’t think Eliezer would accept that characterization of his position, and you give one reason why he would not. But he has a more general view where only some sets of values are possible for merely accidental reasons, namely because it just happens that things cannot evolve in other ways. I would say the contrary—it is not an accident that the value of killing yourself cannot evolve, but this is because killing yourself is bad.
And this kind of explains how “good” has to be unpacked. Good would be what tends to cause tendencies towards itself. Survival is one example, but not the only one, even if everything else will at least have to be consistent with that value. So e.g. not only is survival valued by intelligent creatures in all realistic conditions, but so is knowledge. So knowledge and survival are both good for all intelligent creatures. But since different creatures will produce their knowledge and survival in different ways, different things will be good for them in relation to these ends.
They eat innocent, sentient beings who suffer and are terrified because of it. That’s wrong, no matter who does it.
It may not be un-baby-eater-ey, but it’s wrong.
Likewise, not eating babies is un-baby-eater-ey, no matter who does it. It might not be wrong, but it is un-baby-eater-ey.
We have two species who agree on the physical effects of certain actions. One species likes the effects of the action, and the other doesn’t. The difference between them is what they value.
“Right” just means “in harmony with this set of values.” Baby-eater-ey means “in harmony with this other set of values.”
There’s no contradiction in saying that something can be in harmony with one set of values and not in harmony with another set of values. Hence, there’s no contradiction in saying that eating babies is wrong, and is also baby-eater-ey. You can also note that the action is found compelling by one species and not compelling by another, and there is no contradiction in this, either.
What could “right” mean if we have “right according to these morals” AND “right according to these other, contradictory morals?”
I see one possibility: “right” is taken to mean ” in harmony with any set of values.” Which, of course, makes it meaningless. Do you see another possibility?
I disagree that it is wrong for them to do that. And this is not just a disagreement about words: I disagree that Eliezer’s preferred outcome for the story is better than the other outcome.
“Right” is just another way of saying “good”, or anyway “reasonably judged to be good.” And good is the kind of thing which naturally results in desire. Note that I did not say it is “what is desired” any more than you want to say that what someone values at a particular moment is necessarily right. I said it is what naturally results in desire. This definition is in fact very close to yours, except that I don’t make the whole universe revolve around human beings by saying that nothing is good except what is good for humans. And since different kinds of things naturally result in desire for different kinds of beings (e.g. humans and babyeaters), those different things are right for different kinds of beings.
That does not make “right” or “good” meaningless. It makes it relative to something. And this is an obvious fact about the meaning of the words; to speak of good is to speak of what is good for someone. This is not subjectivism, since it is an objective fact that some things are good for humans, and other things are good for other things.
Nor does this mean that right means “in harmony with any set of values.” It has to be in harmony with some real set of values, not an invented one, nor one that someone simply made up—for the same reasons that you do not allow human morals to be simply invented by a random individual.
Returning to the larger point, as I said, this is not just a disagreement about words, but about what is good. People maintaining your theory (like Eliezer) hope to optimize the universe for human values. I have no such hope, and I think it is a perverse idea in the first place.
“Right” is just another way of saying “good”, or anyway “reasonably judged to be good.”
No, moral rightness and wrongness have implications about rule-following and rule-breaking, reward and punishment, that moral goodness and badness don’t. Giving to charity is virtuous, but not giving to charity isn’t wrong and doesn’t deserve punishment.
Similarly, moral goodness and hedonic goodness are different.
I’m not sure what you’re saying. I would describe giving to charity as morally good without implying that not giving is morally evil.
I agree that moral goodness is different from hedonic goodness (which I assume means pleasure), but I would describe that by saying that pleasure is good in a certain way, but may or may not be good all things considered, while moral goodness means what is good all things considered.
You’re saying that “right” just means “in harmony with any set of values held by sentient beings?”
So, baby-eating is right for baby-eaters, wrong for humans, and all either of those statements means is that they are/aren’t consistent with the fundamental values of the two species?
That is most of it. But again, I insist that the disagreement is real. Because Eliezer would want to stomp out baby-eater values from the cosmos. I would not.
I do not support “letting a sentient being eat babies just because it wants to” in general. So for example if there is a human who wants to eat babies, I would prevent that. But that is because it is bad for humans to eat babies. In the case of the babyeaters, it is by stipulation good for them.
That stipulation itself, by the way, is not really a reasonable one. Some species do sometimes eat babies, and it is possible that such a species could develop reason. But it is likely that the very process of developing reason would impede the eating of babies, and eating babies would become unusual, much as cannibalism is unusual in human societies. And just as cannibalism is wrong for humans, eating babies would become wrong for that species. But Eliezer makes the stipulation because, as I said, he believes that human values are intrinsically arbitrary, from an absolute standpoint.
So there is a metaethical disagreement. You could put it this way: I think that reality is fundamentally good, and therefore actually existing species will have fundamentally good values. Eliezer thinks that reality is fundamentally indifferent, and therefore actually existing species will have fundamentally indifferent values.
But given the stipulation, yes I am serious. And no I would not accept those solutions, unless those solutions were acceptable to them anyway—which would prove my point that eating babies was not actually good for them, and not actually a true part of their values.
When you say reality is fundamentally “good,” doesn’t that translate (in your terms) to just a tautology?
Aren’t you just saying that the desires of sentient beings are fundamentally “the desires of sentient beings?”
It sounds like you’re saying that you personally value sentient beings fulfilling their fundamental desires. Do you also value a sentient being fulfilling its fundamental desire to eliminate sentient beings that value sentient beings that fulfill their fundamental desires?
That is, if it wants to kill you because you value that, are you cool with that?
What do you do, in general, when values clash? You have some members of a species who want to eat their innocent, thinking children, and you have some innocent, thinking children who don’t want to be eaten. On what grounds do you side with the eaters?
“When you say reality is fundamentally “good,” doesn’t that translate (in your terms) to just a tautology?” Sort of, but not quite.
“Aren’t you just saying that the desires of sentient beings are fundamentally ‘the desires of sentient beings’?” No.
First of all, the word “tautology” is vague. I know it is a tautology to say that red is red. But is it a tautology to say that two is an even number? That’s not clear. But if a tautology means that the subject and predicate mean the same thing, then saying that two is even is definitely not a tautology, because they don’t mean the same thing. And in that way, “reality is fundamentally good” is not a tautology, because “reality” does not have the same meaning as “good.”
Still, if you say that reality is fundamentally something, and you are right, there must be something similar to a tautology there. Because if there is nothing even like a tautology, you will be saying something false, as if you were to say that reality is fundamentally blue. That’s not a tautology at all, but it’s also false. But if what you say is true, then “being real” and “being that way” must be very deeply intertwined, and most likely even the meaning will be very close. Otherwise how would it turn out that reality is fundamentally that way?
I have remarked before that we get the idea of desire from certain feelings, but what makes us call it desire instead of a different feeling is not the subjective quality of the feeling, but the objective fact that when we feel that way, we tend to do a particular thing. E.g. when we are hungry, we tend to go and find food and eat it. So because we notice that we do that, we call that feeling a desire for food. Now this implies that the most important thing about the word “desire” is that it is a tendency to do something, not the fact that it is also a feeling.
So if we said, “everyone does what they desire to do,” it would mean something like “everyone does what they tend to do.” That is not a tautology, because you can occasionally do something that you do not generally tend to do, but it is very close to a tautology.
We get the idea of “good” from the fact that we are tending to do various things, and we assume that those various things must have something in common that explains why we are tending to do all of them. We call that common thing “good.”
Now you could say, “the common thing is that you desire all of those things.” But that is not the way the human mind works here, whether it is right or wrong. We already know that we desire them all. We want to know “why” we desire them all. And we explain that by saying that they all have something that we call “goodness.” We know it explains our desires, but that does not mean we know anything else about it.
This is really the exact point where I disagree with Eliezer. I think he believes that the common thing is the desire, and there is no other explanation except for random facts in the world that are responsible for our individual desires and for desires generally common in the human species. I think that the natural intuition that there is another explanation is correct. Now you might want to ask, “then what is good, apart from ‘what explains our desires’”?
And I have already started to explain this in other comments, although I did not go into detail. I noted above that the most important thing about “desire” is that it is a tendency to do something. So likewise the most important thing about the word “good” is that it explains the tendency to do something. Now consider this fact about things: things tend to exist. And existing things tend to continue to exist. Why do they tend to do those things? In the first place, it is obvious why things tend to exist. Because they are real, and reality involves existence. And tending to continue to exist might be less obvious, but we can see that at least the particular reality of the thing is responsible for that tendency: why do rocks tend to continue to exist? Part of the reality of the rock (in this case its structure) is responsible for that tendency. It tends to continue to exist because of the reality it has.
In other words, the thing that explains why things tend to do things is reality itself. So reality is fundamentally good, that is, the explanation for why things tend to do the things they do is fundamentally their reality. Note that this last sentence is not a tautology, in that it has a distinct subject and predicate.
Richard Dawkins says that reality looks just as we would expect if it is fundamentally indifferent. And I am pretty sure Eliezer agrees with him about this. But in fact it does not look the way I would expect if it were fundamentally indifferent: I would expect in that situation that things would not have any tendencies at all, so all things would be random.
I will answer the things about my values in another comment.
“It sounds like you’re saying that you personally value sentient beings fulfilling their fundamental desires.” Yes.
“Do you also value a sentient being fulfilling its fundamental desire to eliminate sentient beings that value sentient beings that fulfill their fundamental desires?”
No sentient being has, or can have (at least in a normal way) that desire as a “fundamental desire.” It should be obvious why such a value cannot evolve, if you consider the matter physically. Considered from my point of view, it cannot evolve precisely because it is an evil desire.
Also, it is important here that we are speaking of “fundamental” desires, in that a particular sentient being sometimes has a particular desire for something bad, due to some kind of mistake or bad situation. (E.g. a murderer has the desire to kill someone, but that desire is not fundamental.)
“You have some members of a species who want to eat their innocent, thinking children, and you have some innocent, thinking children who don’t want to be eaten. On what grounds do you side with the eaters?”
As I said in another comment, the babyeater situation is contrived, and most likely it is impossible for those values to evolve in reality. But stipulating that they do, then the desires of the babies are not fundamental, because if the baby grows up and learns more about reality, it will say, “it would have been right to eat me.”
I am pretty sure that people even in the original context brought attention to the fact that there are a great many ways that we treat children in which they do not want to be treated, to which no one at all objects (e.g. no one objects if you prevent a child from running out into the street, even if it wants to. And that is because the desires are not fundamental.)
Your objection is really something like, “but that desire must be fundamental because everything has the fundamental desire not to be eaten.” Perhaps. But as I said, that simply means that the situation is contrived and false.
The situation can happen with an intelligent species and a non-intelligent species, and has happened on earth—e.g. people kill and eat other animals. And although I do not object to people doing this, and I think it is morally right, I do not take “sides,” because I would change the values neither of the people nor of the animals. Both desires are good, and the behavior on both sides is right (although technically we should not be speaking of right and wrong in respect to non-rational creatures.)
It probably could not happen with two intelligent species, if only for economic reasons.
I don’t know. I wonder if some extra visualization would help.
Would you help catch the children so that their parents could eat them? If they pleaded with you, would you really think “if you were to live, you would one day agree this was good, therefore it is good, even though you don’t currently believe it to be?”
Why say the important desire is the one the child will one day have, instead of the one that the adult used to have?
I would certainly be less interested in aliens obtaining what is good for them, than in humans obtaining what is good for them. However, that said, the basic response (given Eliezer’s stipulations), is yes, I would, and yes I would really think that.
The adult has not only changed his desire, he has changed his mind as well, and he has done that through a normal process of growing up. So (again given Eliezer’s stipulations), it is just as reasonable to believe the adults here as it is to believe human adults. It is not a question of talking about whose desire is important, but whose opinion is correct.
We get the idea of “good” from the fact that we are tending to do various things, and we assume that those various things must have something in common that explains why we are tending to do all of them. We call that common thing “good.”
....a word which means a number of things, which are capable of conflicting with each other. Moral good refers to things that are beneficial at the group level, but which individuals tend not to do without encouragement.
I think it is perfectly obvious that this usage of “should” and so on is wrong. A paperclipper believes that it should make paperclips, and it means exactly the same thing by “should” that I do when I say I should not murder.
And when I say it is obvious, I mean it is obvious in the same way that it is obvious that you are using the word “hat” wrong if you use it for a coat.
I think you’re using “should” to mean “feels compelled to do.”
Yes, a paperclipper feels compelled to make paperclips, and a human feels compelled to make sentient beings happy.
But when we say “should,” we don’t just mean “whatever anyone feels compelled to do.” We say “you might drug me to make me want to kill people, but I still shouldn’t do it.”
“Should” does not refer to compelling feelings, but rather to a certain set of states of beings that we value. To say we “still shouldn’t kill people,” means it “still isn’t in harmony with happy sentient beings (plus a million other values) to kill people.”
A paperclipper wouldn’t disagree that killing people isn’t in harmony with happy sentient beings (along with a million other values), it just wouldn’t care. In other words, it wouldn’t disagree that it shouldn’t kill people, it just doesn’t care about “should;” it cares about “clipperould.”
Likewise, we wouldn’t disagree that keeping people around instead of making them into paperclips is not in harmony with maximizing paperclips, we just wouldn’t care. We know we clipperould turn people into paperclips, we just don’t care about clipperould, we care about should.
No, I am not using “should” to mean “feels...” anything (in other words, feelings have nothing to do with it.) But you are right about compulsion. The word “ought” is, in theory, just the past tense of “owe”, and what is owed is something that needs to be paid. Saying that you ought to do something, just means that you need to do it. And should is the same; that you should do it just means that there is a need for it. And need is just necessity. So it does all have to do with compulsion.
But it is not compulsion of feelings, but of a goal. And to that degree, your idea is actually correct. But you are wrong to say that the specific goal sought affects the meaning of the word. “I should do it” means that I need to do it to attain my goal. It does not say what that goal is.
The truth is that humans have an inherent instinct towards seeing “Good” as an objective thing, that corresponds to no reality. This includes an instinct towards doing what, thanks to both instinct and culture, humans see as “good”.
But although I am not a total supporter of Yudkowsky’s moral theory, he is right in that humans want to do good regardless of some “tablet in the sky”. Those who define terms try to resolve the problem of ethical questions by bypassing this instinct and referencing instead what humans actually want to do. This is contradictory to human instinct, hence the philosophical force of the Open Question argument, but it is the only way to have a coherent moral system.
The alternative, as far as I can tell, would be that ANY coherent formulation of morality whatsoever could be countered with “Is it good?”.
The truth is that humans have an inherent instinct towards seeing “Good” as an objective thing, that corresponds to no reality. This includes an instinct towards doing what, thanks to both instinct and culture, humans see as “good”.
True but not very interesting. The interesting question is whether the operations of intuitive black boxes can be improved on.
But although I am not a total supporter of Yudkowsky’s moral theory, he is right in that humans want to do good regardless of some “tablet in the sky”.
The tablet argument is entirely misleading.
Those who define terms try to resolve the problem of ethical questions by bypassing this instinct and referencing instead what humans actually want to do. This is contradictory to human instinct, hence the philosophical force of the Open Question argument, but it is the only way to have a coherent moral system.
I don’t see what you mean by that. If the function of the ethical black box can be identified, then it can be improved on, in the way that physics improves on folk physics.
Those who define terms try to resolve the problem of ethical questions by bypassing this instinct and referencing instead what humans actually want to do. This is contradictory to human instinct, hence the philosophical force of the Open Question argument, but it is the only way to have a coherent moral system.
The alternative, as far as I can tell, would be that ANY coherent formulation of morality whatsoever could be countered with “Is it good?”.
“ANY coherent formulation of morality whatsoever could be countered with ‘Is it good?’”
Exactly, if you think morality is different from goodness. That is why I said “morally right” just means “what it is good for me to do.”
That is not the same as what I want at the moment. Humans have an inherent instinct towards seeing good as objective rather than as “what I want” for the same reason that we have an instinct towards seeing dogs and cats as objectively distinct, instead of just saying “dog is what I call dog, and cat is what I call cat, and if I decide to start calling them all dogs, that will be fine too.”
Saying that good is just what I happen to want is just the same as saying that dog is whatever I happen to call dog. And both positions are equally ridiculous.
Exactly, if you think morality is different from goodness. That is why I said “morally right” just means “what it is good for me to do.”
Moral goodness is clearly different from, e.g., hedonic goodness. Enjoying killing doesn’t mean you should kill.
Humans have an inherent instinct towards seeing good as objective rather than as “what I want”
It might be the case that humans have a mistaken view of the objectivity of morality, but it doesn’t follow from that that morality = hedonism. You can’t infer the correctness of one of N > 2 theories from the wrongness of another.
we have an instinct towards seeing dogs and cats as objectively distinct, instead of just saying “dog is what I call dog, and cat is what I call cat, and if I decide to start calling them all dogs,
It is possible to misuse the terms “dog” and “cat”, so the theory of semantics you are appealing to as the only possible alternative to fully objective semantics is wrong as well. Hint: intersubjectivity, convention.
Saying that good is just what I happen to want is just the same as saying that dog is whatever I happen to call dog. And both positions are equally ridiculous.
I don’t know why you are bringing up hedonism. It is bad to kill even if you enjoy it; so if morally good means what it is good to do, as I say, it will be morally bad to kill even if it is pleasant to someone.
The fully intersubjective but non-objective theory of meaning that you are suggesting is also false, since if everyone all at once agrees to call all dogs and cats “dogs”, that will not mean that suddenly there is no objective difference between the things that used to be called dogs and the things that used to be called cats.
The correct theory is this:
“Dog” means something that has what is in common to the things that are normally called dogs. Notice that this incorporates inter-subjectivity and convention, since “things that are normally called dogs” means normally called that by normal people. But it also includes an objective element, namely “what is in common.”
Now someone could say, “Well, what those things have in common is that people normally call them dogs. They don’t have anything else in common. So this theory reduces to the same thing: dogs are what people call dogs.”
But they would be wrong, since obviously there are plenty of other things that dogs have in common, and where they differ from cats, which do not depend on anyone calling them anything.
The correct theory of goodness is analogous:
“Good” means something that has what is in common to the things that are normally called good. Again, this incorporates the element of convention, in “normally called good,” but it also includes an objective element, in “what is in common.”
As before, someone might say that actually they have nothing in common except the name. But again that would be wrong.
More plausibly, though, someone might say that actually what they have in common is that people desire them. And in a sense this is Eliezer’s view. But this is also wrong. Let me explain why.
One difficulty is that people are rarely wrong about whether something is a dog, but they are often wrong about whether something is good. This makes no difference to the fact that the words have meanings, but it makes it easier to see what is “normally called a dog” than “normally called good.” If someone calls something good because they are mistaken about it in some way, for example, then you cannot include that as one of the things that has what is in common, just as if someone mistakenly calls a cat a dog in some case, you cannot include that cat in determining what dogs have in common.
Just as it is not too difficult to see that dogs have some objective features that distinguish them from cats, good things have an objective feature that distinguishes them from bad things: good things tend to result in things desiring them, and bad things tend to result in things avoiding them. Now that tendency is not complete and perfect, especially because of people making mistakes. So occasionally someone desires something bad, or avoids something good. But the general tendency is for good things to result in desire, and bad things to result in avoidance.
Now if you think reality is intrinsically indifferent, as Eliezer does, then you would say that there is no such tendency: people have a tendency to desire some things and avoid others. We then call the things we tend to desire, “good,” and the things we tend to avoid, “bad,” but actually the good things have nothing in common except that we are desiring them, and the bad things have nothing in common except that we are avoiding them.
As you pointed out yourself, people have an inherent instinct to deny this position. That is because people ask, “why do I desire these things, and not others?” And they want the answer to be, “Because these are good, and the others are not.” And that answer does not make sense, unless the good things have something objective in common in addition to the fact that I desire them.
The instinct is correct, and Eliezer is wrong, and we can prove that by finding some things that the good things have in common, other than desire. The way to do that is to note that desire itself is a particular case of something more general, namely a tendency to do something. And the tendencies to do something that we find have various properties. So for example consistency is one of them—without consistency, you cannot have a tendency at all. Rocks tend to fall, and it is very consistent that they go downwards. And note that without this consistency, there would be no tendency. Likewise, tendencies will always preserve the existence of something—not necessarily of the whole existence of the thing which immediately has the tendency, but of something. Thus inertia is a tendency to motion, and it tends to preserve the amount of that movement. And we could go on. But all of these things imply that “what we desire” has various properties in common besides the fact that we desire it. And this is what it is to be good.
I don’t know why you are bringing up hedonism. It is bad to kill even if you enjoy it; so if morally good means what it is good to do, as I say, it will be morally bad to kill even if it is pleasant to someone.
So what is your theory? That the morally good is the morally good? Weren’t you criticising that approach?
“The morally good is the morally good” is vacuous.
“The morally good is the good” is subject to counterexamples.
The fully intersubjective but non-objective theory of meaning that you are suggesting is also false, since if everyone all at once agrees to call all dogs and cats “dogs”, that will not mean that suddenly there is no objective difference between the things that used to be called dogs and the things that used to be called cats.
That is only true if you equate “wrong” with not capturing all the information. But then we would always be wrong, since we never capture all the information. There are languages where “mouse” and “rat” are translated by the same word. Speakers of those languages are not systematically deluded.
“Dog” means something that has what is in common to the things that are normally called dogs. Notice that this incorporates inter-subjectivity and convention, since “things that are normally called dogs” means normally called that by normal people. But it also includes an objective element, namely “what is in common.”
That’s rather redundant, since the idea that new usages of “dog” should have something in common with established ones is already part of the norm.
Just as it is not too difficult to see that dogs have some objective features that distinguish them from cats, good things have an objective feature that distinguishes them from bad things: good things tend to result in things desiring them, and bad things tend to result in things avoiding them.
I would say that you have the causal arrow the wrong way round there.
Also, you are, again, using “good” in a way that leads to obvious counterexamples of things that are desired or desirable but not morally good.
Now that tendency is not complete and perfect, especially because of people making mistakes. So occasionally someone desires something bad, or avoids something good. But the general tendency is for good things to result in desire, and bad things to result in avoidance.
If you could work out the difference between the mistakes and the norm, you would have a non-vacuous theory of what “morally” means in “morally good”. However, I don’t know if you are even trying to do that, since you seem wedded to the idea that the morally good is the good, period.
We then call the things we tend to desire, “good,” and the things we tend to avoid, “bad,” but actually the good things have nothing in common except that we are desiring them, and the bad things have nothing in common except that we are avoiding them.
If you want the word “good” to do all the work in your theory of moral good, you would have that problem. If you allow the word “moral” to do some work, you don’t. The morally good has features in common, such as being co-operative and prosocial, that the unqualified “good” does not, and that is still the case if the good is not an objective feature of the world.
And that answer does not make sense, unless the good things have something objective in common in addition to the fact that I desire them.
You don’t need objectivity, intersubjectivity is enough.
Also, I did not say that people would be wrong if they started calling all cats and dogs “dogs.” I said that this would not mean that there were not objective differences between the things that used to be called dogs, and the things that used to be called cats. In fact, the only reason we are able to call some dogs and some cats is that there are objective differences that allow us to distinguish them.
Not all semantics is based on objective differences. There’s no objective feature that makes someone a senator, or makes a particular piece of paper money… we just have social conventions, coupled with memorising the members of the set “money” or “senator”. So if you are arguing that “good” must have objective characteristics because all meaningful words must denote something objective, that doesn’t work. But it is not clear you are arguing that way.
Objective differences doesn’t have to mean physical differences of the thing at the time. It is an objective fact that certain people have won elections and that others have not, for example, even if it doesn’t change them physically.
In this sense, it is true that every meaningful distinction is based on something objective, since otherwise you would not be able to make the distinction in the first place. You make the distinction by noticing that some fact is true in one case which isn’t true in the other. Or even if you are wrong, then you think that something is true in one case and not in the other, which means that it is an objective fact that you think the thing in one case and not in the other.
It is an objective fact that certain people have won elections and that others have not, for example, even if it doesn’t change them physically.
No, it’s intersubjective. Winning and elections aren’t in the laws of physics. You can’t infer objective from not-subjective.
In this sense, it is true that every meaningful distinction is based on something objective, since otherwise you would not be able to make the distinction in the first place
You need to be more granular about that. It is true that you can’t recognise novel members of an open-ended category (cats and dogs) except by objective features, because you can’t memorise all the members of such a set. But you can memorise all the members of the set of Senators. So objectivity is not a universal rule.
I think you might be arguing about words, in relation to whether the election is an objective fact. I don’t see what the laws of physics have to do with it. There is no rule that objective facts have to be part of the laws of physics. It is an objective fact that I am sitting in a chair right now, but the laws of physics say nothing about chairs (or about me, for that matter).
Even if you memorize the set of Senators, you cannot recognize them without them being different from other people.
I do not know why you keep saying that I am saying that morally good is the same as good.
According to me (and this is a statement of what I think the words mean, not an argument): “Morally good” is “what is good to do.”
So morally good is not the same as good. Good is general, and “Good TO DO” is morally good. So morally good is a kind of goodness, just as everyone believes.
So morally good is not the same as good. Good is general, and “Good TO DO” is morally good.
Not helping. Good to do can be hedonistically good to do, selfishly good to do, etc. If I sacrifice the lives of 100 people to save my life, that is a good thing to do from some points of view, but not what most people would call morally good.
Saying that a thing is “hedonistically good to do” means that it is good to some extent. It does not tell us whether it is good to do, period. If it is good to do, period, it is morally good. If there are other considerations more important than the pleasure, it won’t be good to do, period, and so will be morally wrong.
It’s not helpful to define the morally good as the “good, period”, without an explanation of “good, period”. You are defining a more precise term using a less precise one, which isn’t the way to go.
Suppose there is a blue house with a red spot on it. You ask, “Is that a red house?” Someone answers, “Well, there is a red spot on it.”
There is no difference if there is something bad that you could do which would be pleasant. You ask, “Is that something good to do?” Someone answers, “Well, it is hedonistically good.”
But I don’t care if there is a red spot, or if it is pleasant. I am asking if the house is red, and if it would be good to do the thing.
Those are answered in similar ways: the house is red if it is red enough that a reasonable person would say, “yes, the house is red.” And the action is morally good if a reasonable person would say, “yes, it is good to do it.”
I think that’s a fairly misleading analogy. For instance, a house’s being red is not exclusive of another one’s… but my goods can conflict with another person’s.
Survival is good, you say. If I am in a position to ensure my survival by sacrificing Smith, is it morally good to do so? After all Smith’s survival is just as Good as mine.
As I said, we are asking whether it is good to do something overall. So there is no definite answer to the question about Smith. In some cases it will be good to do that, and in some cases not, depending on the situation and what exactly you mean by sacrificing Smith.
As I said, we are asking whether it is good to do something overall. So there is no definite answer to the question about Smith.
So what you call goodness cannot be equated with moral goodness, because moral goodness does need to put an overall value on an act, does need to say that an act is permitted, forbidden or obligatory.
I don’t understand what you are trying to say here. Of course in a particular situation it will be good, and thus morally right, to sacrifice Smith, and in other particular situations it will not be. I just said that you cannot say in advance, and I see no reason why moral goodness would have to judge these situations in advance without taking everything into account.
Morality binds and blinds. People derive moral claims from emotional and intuitive notions. It can feel good and moral to do amoral things. Objective morality has to be tied to evidence about what human wellbeing really is, not to moral intuitions that are adaptations to the benefit of one’s ingroup, or to post hoc thought experiments about knowledge.
Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.
No evidence is given for the central claim, that humans can and are converging towards a true morality we would all agree about if only we understood more true facts.
We’re told that people in the past disagreed with us about some moral questions, but we know more and so we changed our minds and we are right while they were wrong. But no direct evidence is given for us being more right. The only way to judge who’s right in a disagreement seems to be “the one who knows more relevant facts is more right” or “the one who more honestly and deeply considered the question”. This does not appear to be an objectively measurable criterion (to say the least).
The claim that ancients, like Roman soldiers, thought slavery was morally fine because they didn’t understand how much slaves suffer is frankly preposterous. Roman soldiers (and poor Roman citizens in general) were often enslaved, and some of them were later freed (or escaped from foreign captivity). Many Romans were freedmen or their descendants—some estimate that by the late Empire, almost all Roman citizens had at least some slave ancestors. And yet somehow these people, who both knew what slavery was like and were often in personal danger of it, did not think it immoral, while white Americans in no danger of enslavement campaigned for abolition.
I’m getting really sick of this claim that Eliezer says all humans would agree on some morality under extrapolation. That claim is how we get garbage like this. At no point do I recall Eliezer saying psychopaths would definitely become moral under extrapolation. He did speculate about them possibly accepting modification. But the paper linked here repeatedly talks about ways to deal with disagreements which persist under extrapolation:
(Naturally, Eugine Nier as “seer” downvoted all of my comments.)
The metaethics sequence does say IMNSHO that most humans’ extrapolated volitions (maybe 95%) would converge on a cluster of goals which include moral ones. It furthermore suggests that this would apply to the Romans if we chose the ‘right’ method of extrapolation, though here my understanding gets hazier. In any case, the preferences that we would loosely call ‘moral’ today, and that also survive some workable extrapolation, are what I seem to mean by “morality”.
One point about the ancient world: the Bhagavad Gita, produced by a warrior culture though seemingly not by the warrior caste, tells a story of the hero Arjuna refusing to fight until his friend Krishna convinces him. Arjuna doesn’t change his mind simply because of arguments about duty. In the climax, Krishna assumes his true form as a god of death with infinitely many heads and jaws, saying, ‘I will eat all of these people regardless of what you do. The only deed you can truly accomplish is to follow your warrior duty or dharma.’ This view seems plainly environment-dependent.
No, you’re totally right.
I’ve simplified it a bit for the sake of brevity and comprehension of the central idea, but yeah, it’s probably right to say that humans are all born with ABOUT the same morality equation. And also true that psychopaths’ equation is further away than most’s.
I don’t think it being unfalsifiable is a problem. I think this is more of a definition than a derivation. Morality is a fuzzy concept that we have intuitions about, and we like to formalize these sorts of things into definitions. This can’t be disproven any more than the definition of a triangle can be disproven.
What needs to be done instead is show the definition to be incoherent or that it doesn’t match our intuition.
You’re right; I’ve provided no evidence.
Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it’s right or wrong is to look for evidence?
If it was incoherent or contradicted itself, it wouldn’t even need evidence to be disproven; we would already know it’s wrong. Have I avoided being wrong in that way?
(by the way, understanding slavery might be necessary, but not sufficient to get someone to be against it. They might also need to figure out that people are equal, too. Good point, I might need to add that note into the post).
You do understand that debates about objective vs relative morality have been going on for millennia?
No, they don’t if they themselves are in danger of becoming slaves. Notably, a major source of slaves in the Ancient world was defeated armies. Slaves weren’t clearly different people (like the blacks were in America), anyone could become a slave if his luck turned out to be really bad.
Right. Someone could be against slavery for THEM personally without being against slavery in general if they didn’t realize that what was wrong for them was also wrong for others. That’s all I’m getting at, there.
Or do you mean that they should have opposed slavery for everybody as a sort of game theory move to reduce their chance of ever becoming a slave?
“You do understand that debates about objective vs relative morality have been going on for millennia?”
What I’m getting at here is that most moral theories are so bad you don’t even need to talk about evidence. You can show them to be wrong just because they’re incoherent or self-contradictory.
It’s a pretty low standard, but I’m asking if this theory is at least coherent and consistent enough that you have to look at evidence to know if it’s wrong, instead of just pointing at its self-defeating nature to show it’s wrong. If so, yay, it might be the best I’ve ever seen. :)
Huh? I’m against going to jail personally without being against the idea of jail in general. In any case, wasn’t your original argument that ancient Greeks and Romans just didn’t understand what it means to be a slave? That clearly does not hold.
Do you mean descriptive or prescriptive moral theories? If descriptive, humans are incoherent and self-contradictory.
Which moral theories do you have in mind? A few examples will help.
Mmm, that’s not quite the right abstraction. You’re probably against innocents going to jail in general, no?
Whereas some Roman might not care, as long as it’s no one they care about.
All I’m getting at is that the Romans didn’t think certain things were wrong, but if they were shown in a sufficiently deep way everything we know, they would be moved by it, whereas if we were shown everything they know, we would not find it persuasive of their position. Neither would they, after they had seen what we’ve seen.
I’m talking metaethics, what makes something moral, what it means for something to be moral. Failed ones include divine command theory, the “whatever contributes to human flourishing” idea, whatever makes people happy, whatever matches some platonic ideals out there somehow, whatever leads to selfish interest, etc.
That doesn’t seem obvious to me at all.
Let’s try it on gay marriage. Romans certainly knew and practiced homosexuality, same for marriage. What knowledge exactly do you want to convey to them to persuade them that gay marriage is a good thing?
So, prescriptive. I am not sure in what way you consider the theories “failed”—in the sense that they have not risen to the status of physics, meaning being able to empirically prove all their claims? That doesn’t look to be a viable criterion. In the sense of not having taken over the world? I don’t know, the divine command theory is (or, at least, has been) pretty good at that. You probably wouldn’t want a single theory to take over the world, anyway.
Kind of a weird example, but I’ll assume we’re talking about the Praetorian Guard. The Romans seem to have had very little respect for women and for being penetrated. So right off the bat, having them read a lot of women’s minds might change their views. (I’m not sure if I want to classify that as knowledge, though.) They likely also have false beliefs not only about women but about the gods and stable societies. None of this seems like a cure-all, but it does seem extremely promising.
I don’t understand what that means.
You think no male Roman actually knew what women think? The Roman matrons were entirely voiceless?
I think hairyfigment is of the belief that the Romans (and in the most coherent version of his claim you would have to say male and female) were under misconceptions about the nature of male and female minds, and believes that “a sufficiently deep way” would mean correcting all these misconceptions.
My view is that we really can’t say that as things stand. We’d have to know a lot more about the Roman beliefs about the male and female minds, and compare them against what we know to be accurate about male and female minds.
And what evidence do you have that they laboured under such major misconceptions which we successfully overcame?
I was trying to say with my second paragraph that we specifically cannot be sure about that. My first paragraph was simply my best effort at interpreting what I think hairyfigment thinks, not a statement of what I believe to be true.
From my vague recollections I think the idea is worth looking up one way or the other. After all, a massive portion of modern culture is under the impression there are no gender differences and there are other instances of clear major misconceptions I actually can attest to throughout history. But I don’t have any idea with the Romans.
That’s the stupid portion of modern culture, and I’m not sure they actually, um, practice that belief. Here’s a quick suggestion: make competitive sports sex-blind :-/
I don’t think it’s massive, either.
Yes, I think it is coherent.
Ideological Turing test: I think your theory is this: there is some set of values, which we shall call Morals. All humans have somewhat different sets of lower-case morals. When people make moral mistakes, they can be corrected by learning or internalizing some relevant truths (which may of course be different in each case). These truths can convince even actual humans to change their moral values for the better (as opposed to values changing only over generations), as long as these humans honestly and thoroughly consider and internalize the truths. Over historical time, humans have approached closer to true Morals, and we can hope to come yet closer, because we generally collect more and more truths over time.
If you mean you don’t have any evidence for your theory yet, then how or why did you come by this theory? What facts are you trying to explain or predict with it?
Remember that by default, theories with no evidence for them (and no unexplained facts we’re looking for a theory about) shouldn’t even rise to the level of conscious consideration. It’s far, far more likely that if a theory like that comes to mind, it’s due to motivated reasoning. For example, wanting to claim your morality is better by some objective measure than that of other people, like slavers.
That’s begging the question. Believing that “people are equal” is precisely the moral belief that you hold and ancient Romans didn’t. Not holding slaves is merely one of many results of having that belief; it’s not a separate moral belief.
But why should Romans come to believe that people are equal? What sort of factual knowledge could lead someone to such a belief, despite the usually accepted idea that should cannot be derived from is?
This is an explanation of Yudkowsky’s idea from the metaethics sequence. I’m just trying to make it accessible in language and length with lots of concept handles and examples.
Technically, you could believe that people are equally allowed to be enslaved. All people equal + it’s wrong to make me a slave = it’s wrong to make anyone a slave.
“All men are created equal” emerges from two or more basic principles people are born with. You might say: “Look, you have value, yah? And your loved ones? Would they stop having value if you forgot about them? No? They have value whether or not you know them? How did you conclude they have value? Could that have happened with other people, too? Would you then think they had value? Would they stop having value if you didn’t know them? No? Well, you don’t know them; do they have value?”
You take “people I care about have value” (born with it) and combine it with “be consistent” (also born with), and you get “everyone has value.”
That’s the idea in principle, anyway. You take some things people are all born with, and they combine to make the moral insights people can figure out and teach each other, just like we do with math.
In a sense, the ancient Romans did believe this. Anyone who ended up in the same situation—either taken as a war captive or unable to pay their debts—was liable to be sold as a slave. So what makes you think your position is objectively better than theirs?
This assumes without argument that “value” is something people intrinsically have or can have. If instead you view value as value-to-someone, i.e. I value my loved ones, but someone else might not value them, then there is no problem.
And it turns out that yes, most people did not have an intuition that anyone has intrinsic value just by virtue of being human. Most people throughout history assigned value only to ingroup members, to the rich and powerful, and to personally valued individuals. The idea that people are intrinsically valuable is historically very new, still in the minority today globally, and for both these reasons doesn’t seem like an idea everyone should naturally arrive at if they only try to universalize their intuitions a bit.
You realise that’s a reinvention of Kant?
Would this be an accurate summary of what you think the meta-ethics sequence says? I feel that you captured the important bits but I also feel that we disagree on some aspects:
Values that motivate actions (the set of concepts that agents care about) are two-place computations: one place for a class of beings (and possibly other parameters locating them), and the other for individual beings.
V(Elves, _) = Christmas spirit
V(Pebblesorters, _) = primality
V(Humans, _) = morality
If V(Humans, Alice) =/= V(Humans, _), that doesn’t make morality subjective; it rather indicates that Alice is behaving immorally. V(Humans, _) (= morality) exists objectively insofar as it is a computation instantiated by a class of agents at some point in time, but it is not a property of the world independent from the existence of any agents calculating it. Morality is there because of evolution, and it happens to be a complicated and somewhat unexplored landscape, which means that it’s also fragile and possibly no one has a hold of its entirety.
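A minimal sketch of this two-place picture, in Python. Every concrete computation below is an invented placeholder (nobody has written down the actual morality computation), so only the currying structure matters:

```python
# Toy model of the two-place value computation V(class, individual).
# Fixing the first place yields that class's one-place value function;
# "morality" is then just a name for V(Humans, _). All concrete
# computations here are invented placeholders, not real values.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

VALUE_COMPUTATIONS = {
    "Elves":         lambda act: act.get("christmas_spirit", 0),
    "Pebblesorters": lambda act: 1 if is_prime(act.get("heap_size", 0)) else 0,
    "Humans":        lambda act: act.get("fairness", 0) + act.get("happiness", 0),
}

def V(species, individual_act=None):
    """With no second argument, return the species' value computation
    itself (V(species, _)); with an act, apply it to that act."""
    f = VALUE_COMPUTATIONS[species]
    return f if individual_act is None else f(individual_act)

morality = V("Humans")  # V(Humans, _): one fixed computation
# On this model, two humans disagreeing doesn't yield two moralities;
# at least one of them is computing V(Humans, _) incorrectly.
print(morality({"fairness": 1, "happiness": 2}))  # 3
print(V("Pebblesorters", {"heap_size": 7}))       # 1: a prime heap
```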
I think that’s right.
Except that something is moral whether any being cares about morality or not, just like something is prime regardless of whether or not anyone cares about primality.
It’s not that morality is there because of evolution, but that beings who CARE about morality are there because of evolution.
I’m not sure what you mean by fragile morality, but since you’ve gotten pretty much everything right, I suspect you’ve got the right idea, there, too.
And what happens when you plug in MrMind’s claim that there are multiple species-specific moralities? Doesn’t that mean that every action is both moral and immoral from multiple perspectives?
I think we’ve ceased to argue about anything but definitions.
Cut out “morality” and get:
Different species have different sets of values they respond to. Every action is valued according to some such sets of values, and not valued or negatively valued by other sets of values.
You can call any set of values “a” morality if you want, but I think that ceases to refer to what we’re talking about when we say something is moral whether anybody values it or not.
I’m not advocating the idea that morality is value, I am examining the implications of what other people have said.
You wrote an article purporting to explain the Yudkowskian theory of morality, and, indeed the one true theory of morality, since the two are the same.
Hypothetically, making a few comments about value, and nothing but value, doesn’t do what is advertised on the label. The reader would need to know how value relates back to morality.
And in fact you supplied the rather definitional sounding statement that Morality is Values.
If you base an argument on a definition, don’t be surprised if people argue about it. The alternative, where someone can stipulate a definition, but no one can challenge it, is a game that will always be won by the first to move.
And what happens when you plug in MrMind’s claim that there are multiple species-specific moralities? Doesn’t that mean that every action is both moral and immoral from multiple perspectives?
Unpacking “should” as “morally obligated to” is potentially helpful, insofar as you can give separate accounts of “moral” and “obligatory”.
That doesn’t generalise to the point that non-humans have no morality. You have made things too easy on yourself by having the elves concede that the Christmas spirit isn’t morality. You need to put forward some criteria for morality and show that the Christmas Spirit doesn’t fulfil them. (One of the odd things about the Yudkowskian theory is that he doesn’t feel the need to show that human values are the best match to some pretheoretic notion of morality; he instead jumps straight to the conclusion.)
The hard case would be some dwarves, say, who have a behavioural code different from our own, and who haven’t conceded that they are amoral. Maybe they have a custom whereby any dwarf who hits a rich seam of ore has to raise a cry to let other dwarves have a share, and any dwarf who doesn’t do this is criticised and shunned. If their code of conduct passed the duck test… is regarded as obligatory, involves praise and blame, and so on… why isn’t that a moral system?
If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point… morality means what you should care about, not what you happen to do.
Morality needs to be motivating, and rubber stamping your existing values as moral achieves that, but being motivating is not sufficient. A theory of morality also needs to be able to answer the Open Question objection, meaning in this case, the objection that it is not obvious that you should value something just because you do.
That is arguing from the point that morality is a label for whatever humans care about, not toward it.
There are many ways of refuting relativism, and most don’t involve the claim that humans are uniquely moral.
It is human value, or it is fixed… choose one. Humans have valued many different things. One of the problems with the rubber-stamping approach is that things the audience will see as immoral, such as slavery and the subjugation of women, have been part of human value.
If that is true, then you need to stop saying that morality is human values, and start saying morality is human values at time T. And justify the selection of time, etc. And even at that, you won’t support your other claims, because what you need to prove is that morality is unique, that only one thing can fulfil the role.
If it is possible for human values to diverge from morality, then something else must define morality, because human values can’t diverge from human values. So you are not using a stipulative definition here… although you are when you argue that elves can’t be moral. Here, you and Yudkowsky have noticed that your theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral (slavery, torture, whatever), then there’s no fixed standard of morality. The label “moral” has been placed on a moving target. (Standard relativism usually has this problem synchronously, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)
There is from many perspectives, but given that human values can differ, you get no definite answer by defining morality as human value. You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic… God’s commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don’t think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labeling values as moral, just like the original theory.
Why doesn’t that constitute an admission that you don’t actually have a theory of morality?
On the assumption that all human value gets thrown into the equation, it certainly would be complex. But not everyone has that problem, since people have criteria for some things being moral, and others not being, which simplify the equation and allow you to answer the questions you were struggling with above. You know, you don’t have to pursue assumptions to their illogical conclusions.
On the face of it, it’s contradictory. There may be something else that smooths out the contradictions, such as the Moral Equation, but that needs justification of its own.
Is that a fact? It’s eminently naturalistic, but the flip side to that is that it is, therefore, empirically refutable. If an individual’s Morality Equation is just how their moral intuition works, then the evidence indicates that intuitions can vary enough to start a war or two. So the Morality Equation appears not to be conveniently the same in everybody.
What does it mean to do it wrong, if the moral equation is just a label for black-box intuitive reasoning? If you had an external standard, as utilitarians and others do, then you could determine whose use of intuition is right according to it. But in the absence of an external standard, you could have a situation where both parties intuit differently, and both swear they are taking all factors into account. Given such a stalemate, how do you tell who is right? It would be convenient if the only variations in the output of the Morality Equation were caused by variations in the input, but you cannot assume something is true just because it would be convenient.
If the Moral Equation is something ideal and abstract, why can’t aliens partake? That model of ethics is just what is needed to explain how you can have multiple varieties of object-level morality that actually all are morality: different values fed into the same equation produce different results, so object-level morality varies although the underlying principle is the same.
Okay. By saying “If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point… morality means what you should care about, not what you happen to do.”
it seems you have not understood the idea. Were there any parts of the post that seemed unclear that you think I might make clearer?
Because the whole point is that to say something is moral = you should do it = it is valued according to the morality equation.
For an Elf to agree something is moral is also to agree that they should do it. When I say they agree it’s moral and don’t care, that also means they agree they should do it and don’t care.
Something being Christmas Spiritey = you spiritould do it. Humans might agree that something is Christmas Spiritey, and agree that they spiritould do it, they just don’t care about what they spiritould do, they only care about what they should do.
moral is to Christmas spiritey what “should” is to (make up a word like) “spiritould”
Obligatory is just a kind of “should.” Elves agree that some things are obligatory, and don’t care, they care about what’s ochristmastory.
Likewise, to say that today’s morality equation is the “best” is to say that today’s morality equation is the equation which is most like today’s morality equation. Tautology.
Best = most good, and good = valued by the morality equation.
Almost everything. You explain morality by putting forward one theory. Under those circumstances, most people would expect to see some critique of other theories, and explanation of why your theory is the One True Theory. You don’t do the first, and it is not clear that you are even trying to do the second.
And to say that only humans have morality. But if there is something the Elves should do, then morality applies to them, contradicting that claim.
That doesn’t help. For one thing, humans don’t exactly want to be moral… their moral fibre has to be buttressed by various punishments and rewards. For another, “should” and “want to” are not synonyms… but “moral” and “what you should do” are. So if there is something the Elves should do, at that point you have established that morality applies to the Elves, and the fact that they don’t want to do it is a side-issue. (And of course they could tweak their own motivations by constructing punishments and rewards.)
OK. Now you seem to be saying, without quite making it explicit of course, that morality is by definition unique to humans, because the word “moral” just labels what motivates humans, in the way that “Earth” or “Terra” labels the planet where humans live. That claim isn’t completely incomprehensible, it’s just strange and arbitrary, and what is especially strange is the way you feel no need to defend it against alternative theories—the main alternative being that morality is multiply instantiable, that other civilisations could have their own versions, in the way they could have their own versions of houses or money.
You state it as though it is obvious, yet it has gone unnoticed for thousands of years.
Suppose I were to announce that dark matter is angels’ tears. Doesn’t it need some expansion? That’s how your claim reads; that’s the outside view.
Obligatory is a kind of “should” that shouldn’t be overridden by other considerations. (A failure to do what is obligatory is possible, of course, but it is important to remember that it is seen as a lapse, as something wrong, not a valid choice.) Yet the Elves are overriding it, casting doubt on whether they have actually understood the concept of “obligatory”.
Since anyone can say that at any time, that breaks the meaning of “best”, which is supposed to pick out something unique. That would be a reductio ad absurdum of your own theory.
No, no, no...
Every possible creature, and every process of physics SHOULD do XYZ. But practically nothing is moved by that fact.
This sentence means: It is highly valued in the morality equation for XYZ to be the state of affairs, independently of who/what causes it to be so.
Likewise, everything Spiritould do ABC, but only Elves are moved by that fact.
These are objective equations which apply to everything. To say should, spiritould, clipperould, etc., is just to say about different things that they are valued by this equation or that one. It’s an objective truth that they are valued by this equation or that one.
It’s just that humans are not moved by almost any of the possible equations. They ARE moved by the morality equation.
Humans and Elves should AND spiritould do whatever. They are both equally obligated and ochristmasated. But one species finds the first fact moving and not the second, while the other species finds the second moving and not the first.
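If a concrete toy helps, here is that picture as a few lines of Python (every name and value in it is invented purely for illustration, not anything from the original discussion): all agents compute the same verdicts from the same equations, and they differ only in which verdict moves them.

```python
# A toy model of the "many equations, one motivator" picture.
# All names and values here are hypothetical illustrations.

def morality(action):          # the equation behind human "should"
    return action in {"help the needy", "keep promises"}

def christmas_spirit(action):  # the equation behind Elvish "spiritould"
    return action in {"give eggnog", "decorate trees"}

EQUATIONS = {"should": morality, "spiritould": christmas_spirit}

class Agent:
    def __init__(self, species, moved_by):
        self.species = species
        self.moved_by = moved_by  # which equation actually motivates

    def verdicts(self, action):
        # The objective part: every agent computes identical verdicts.
        return {word: eq(action) for word, eq in EQUATIONS.items()}

    def is_moved_to(self, action):
        # The motivational part: only one verdict moves this agent.
        return self.verdicts(action)[self.moved_by]

human = Agent("human", moved_by="should")
elf = Agent("elf", moved_by="spiritould")

# Both species agree on every verdict...
assert human.verdicts("give eggnog") == elf.verdicts("give eggnog")
# ...but only the Elf is moved by the spiritould-verdict.
assert elf.is_moved_to("give eggnog")
assert not human.is_moved_to("give eggnog")
```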
Perhaps now it is clear?
It is not a clear expression of something that can be seen to work.
Version 1.
I am obligated to both do and not do any number of acts by any number of shouldness-equations
If that is the case, anything resembling objectivism is out of the window. If I am obligated to do X, and I do X, then my action is right. If I am obligated not to do X, and I do X, my action is wrong. If I am obligated both to do X and not to do X, then my action is somehow both right and wrong... that is, it has no definite moral status.
But that’s not quite what you were saying.
Version 2.
There are lots of different kinds of morality, but I am only obligated by human morality.
That would work, but it’s not what you mean. You are explicitly embracing...
Version 3.
There are lots of different kinds of morality, but I am only motivated by human morality
There’s only one word of difference between that and version 2: the substitution of “motivated” for “obligated”. As we saw under version 1, it’s the existence of multiple conflicting obligations which stymies ethical objectivism. And motivation can’t fix that problem, because it is a different thing from obligation. In fact it is orthogonal, because:
You can be motivated to do what you are not obligated to do. You can be obligated to do what you are not motivated to do. Or both. Or neither.
Because of that, version 3 implies version 1, and has the same problem.
If you are interested, I might recommend trying to write up what you think this idea is, and see if you find any holes in your understanding that way. I’m not sure how to make it any clearer right now, but, for what it’s worth, you have my word that you have not understood the idea.
We are not disagreeing about something we both understand; you are disagreeing with a series of ideas you think I hold, and I am trying to explain the original idea in a way that you find understandable and, apparently, not yet succeeding.
I believe I just did something like that. Of course, I attributed the holes to the theory not working. If you want me to attribute them to my not having understood you, you need to put forward a version that works.
All of this is why Eliezer’s morality sequence is wrong. Version 2 is basically right. The Baby-Eaters were not immoral, but moral according to a different morality. That is not subjectivism, because it is an objective fact that Baby-Eaters are what they are, and are obligated by Baby-Eater morality, and humans are humans, and are obligated by human morality.
But Eliezer (and Bound-Up) do not admit this, nonsensically asserting that non-humans should be obligated by human morality.
To be honest, Eliezer made a slightly different argument:
1) humans share (because of evolution) a psychological unity that is not affected by regional or temporal distinctions;
2) this unity entails a set of values that is inescapable for every human being; its collective effect on human cognition and action is what we dub “morality”;
3) Clippy, Elves and Pebblesorters, being fundamentally different, share a different set of values that guide their actions and what they care about;
4) those are perfectly coherent and sound for those who entertain them; we should, though, not call them “Clippy’s, Elves’ or Pebblesorters’ morality”, because words should be used in such a way as to maximize their usefulness in carving reality: since we cannot step out of our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term “morality” for those sets of values, and should instead use “primality” or other words.
That’s it: you can debate any single point, but I think the difference is only formal. The underlying understanding, that “motivating set of values” is a two-place predicate, is the same; Yudkowsky, though, preferred to use different words for different partially applied predicates, on the grounds of points 1 and 4.
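If it helps, the formal point can be put in a few lines of Python (the value sets are invented for illustration): “endorses” is the two-place predicate, and each species-word is, on this reading, that predicate with its first argument filled in.

```python
from functools import partial

# Hypothetical two-place predicate: does this value system endorse this act?
def endorses(value_system, action):
    return action in value_system

HUMAN_VALUES = {"protect children", "keep promises"}
CLIPPY_VALUES = {"make paperclips"}

# The naming convention at issue: a separate word for each
# partially applied predicate, rather than one generic "moral".
moral = partial(endorses, HUMAN_VALUES)          # endorses(human values, _)
clipperiffic = partial(endorses, CLIPPY_VALUES)  # endorses(Clippy values, _)

assert moral("protect children")
assert clipperiffic("make paperclips") and not moral("make paperclips")
```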
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me. And yo mama ain’t no Mama cause she ain’t my Mama!
Yudkowsky isn’t being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
And it’s not like the issue isn’t important, either... obviously the permissibility of imposing one’s values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reason that you are differently mothered, not unmothered.
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
On this we surely agree, I just find the new rule better than the old one. But this is the least important part of the whole discussion.
This is well explored in “Three Worlds Collide”. Yudkowsky’s vision of morality is such that it assigns different morality to different aliens, and the same morality to the same species (I’m using your convention). When different worlds collide, it is moral for us to stop the babyeaters from eating babies, and it is moral for the superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact at all.
That seems different to what you were saying before.
There’s not much objectivity in that.
Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.
Maybe we should be abandoning the objectivity requirement as impossible. As I understand it, this is in fact core to Yudkowsky’s theory: an “objective” morality would be the tablet he refers to as something to ignore.
I’m not entirely on Yudkowsky’s side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is “What do I want?”. There is the prospect of coordination through shared moral wants, but there is the prospect of coordination through shared selfish wants as well. Ideas of “the good of society” or “objective ethical truth” are simply flawed concepts.
But I do think Yudkowsky has a good point both of you have been ignoring. His stone tablet analogy, if I remember correctly, sums it up.
“I think Eliezer is correct in showing that the only solution is avoiding contact at all.”: Assumes that there is such a thing as an objective solution, if implicitly.
“The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.”: Passenger and cargo ships both have purposes within human morality. Alien moralities are likely to contradict each other.
“There’s not much objectivity in that.”: What if objectivity in the sense you describe is impossible?
“Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.”: If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
Maybe we should also consider in parallel the question of whether objectivity is necessary. If objectivity is both necessary to morality and impossible, then nihilism results.
The basic, pragmatic argument for the objectivity or quasi-objectivity of ethics is that it is connected to practices of reward and punishment, which either happen or not.
The essential problem with the tablet is that it offers conclusions as a fait accompli, with no justification or argument. The point does not generalise against objective morality.
If you are serious about the unselfish bit, then surely it boils down to “what do they want” or “what do we want”.
I don’t accept the Moral Void argument, for the reasons given. Do you have another?
The idea that humans are uniquely motivated by human morality isn’t put forward as an answer to the amoralist challenge; it is put forward as a way of establishing something like moral objectivism.
“words should be used in such a way to maximize their usefulness in carving reality”
That does not mean that we should not use general words, but that we should have both general words and specific words. That is why it is right to speak of morality in general, and human morality in particular.
As I stated in other replies, it is not true that this disagreement is only about words. In general, when people disagree about how words should be used, that is because they disagree about what should be done. Because when you use words differently, you are likely to end up doing different things. And I gave concrete places where I disagree with Eliezer about what should be done, ways that correspond to how I disagree with him about morality.
In general I would describe the disagreement in the following way, although I agree that he would not accept this characterization: Eliezer believes that human values are intrinsically arbitrary. We just happen to value a certain set of things, and we might have happened to value some other random set. In whatever situation we found ourselves, we would have called those things “right,” and that would have been a name for the concrete values we had.
In contrast, I think that we value the things that are good for us. What is “good for us” is not arbitrary, but an objective fact about relationships between human nature and the world. Now there might well be other rational creatures and they might value other things. That will be because other things are good for them.
But not everything people value is actually good for them. You are retaining the problem of equating morality with values.
I agree that not everything in particular that people value is good for them. I say that everything that they value in a fundamental way is good for them. If you disagree, and think that some people value things that are bad for them in a fundamental way, how are they supposed to find out that those things are bad for them?
You are currently saying that the good is what people fundamentally value, and what people fundamentally value is good... for them. To escape vacuity, the second phrase would need to be cashed out as something like “aids survival”.
But whose survival? If I fight for my tribe, I endanger my own survival; if I dodge the draft, I endanger my tribe’s.
Real-world ethics has a pretty clear answer: the group wins every time. Bravery beats cowardice, generosity beats meanness... these are human universals. If you reverse-engineer that observation back into a theoretical understanding, you get the idea that morality is something programmed into individuals by communities to promote the survival and thriving of communities.
But that is a rather different claim to The Good is the Good.
Clarification please. How do you avoid this supposed vacuity applying to basically all definitions? Taking a quick definition from a Google Search: A: “I define a cat as a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws.” B: “Yes, but is that a cat?”
Which could eventually lead back to A saying that:
A: “Yes you’ve said all these things, but it basically comes back to the claim a cat is a cat.”
Definitions are at best a record of usage. Usage can be broadened to include social practices such as reward and punishment. And the jails are full of people who commit theft (selfishness), rape (ditto), etc. And the medals and plaudits go to the brave (altruism), the generous (ditto), etc.
I’m not sure how you’re addressing what I said. What do you mean by escaping vacuity? I used “good for them” in that comment because you did, when you said that not everything people value is good for them. I agree with that, if you mean the particular values that people have, but not in regard to their fundamental values.
Saying that something is morally good means “doing this thing, after considering all the factors, is good for me,” and saying that it is morally bad means “doing this thing, after considering all the factors, is bad for me.” Of course something might be somewhat good, without being morally good, because it is good according to some factors, but not after considering all of them. And of course whether or not it will benefit your communities is one of the factors.
I’m going to assume you mean what you say and are not just arguing about definitions. In that case:
You would be an apologist for HP Lovecraft’s Azathoth, at best, if you lived in his universe. There’s no objective criterion you could give to explain why that wouldn’t be moral, unless you beg the question and bring in moral criteria to judge a possible ‘ground of morality.’ Yes, I’m saying Nyarlathotep should follow morality instead of the supposed dictates of his alien god. And that’s not a contradiction but a tautology.
While I’m on the subject, Aquinian theology is an ugly vulgarization of Aristotle’s, the latter being more naturally linked to HPL’s Azathoth or the divine pirates of Pastafarianism.
I’m pretty sure this is not an attempt at discussion, but an attempt to be insulting, so I won’t discuss it.
I prefer Eliezer’s way because it makes evident, when talking to someone who hasn’t read the Sequence, that there are different set of self-consistent values, but it’s an agreement that people should have before starting to debate and I personally would have no problem in talking about different moralities.
But does he? Because that would be demonstrably false. Maybe arbitrary in the sense of “occupying a tiny space in the whole set of all possible values”, but since our morality is shaped by evolution, it will surely contain some historical accidents but also a lot of useful heuristics.
No human can value drinking poison, for example.
If you were to unpack “good”, would you insert other meanings besides “what helps our survival”?
“There are different sets of self-consistent values.” This is true, but I do not agree that all logically possible sets of self-consistent values represent moralities. For example, it would be logically possible for an animal to value nothing but killing itself; but this does not represent a morality, because such an animal cannot exist in reality in a stable manner. It cannot come into existence in a natural way (namely by evolution) at all, even if you might be able to produce one artificially. If you do produce one artificially, it will just kill itself and then it will not exist.
This is part of what I was saying about how when people use words differently they hope to accomplish different things. I speak of morality in general, not to mean “logically consistent set of values”, but a set that could reasonably exist in the real world with a real intelligent being. In other words, restricting morality to human values is an indirect way of promoting the position that human values are arbitrary.
As I said, I don’t think Eliezer would accept that characterization of his position, and you give one reason why he would not. But he has a more general view where only some sets of values are possible for merely accidental reasons, namely because it just happens that things cannot evolve in other ways. I would say the contrary—it is not an accident that the value of killing yourself cannot evolve, but this is because killing yourself is bad.
And this kind of explains how “good” has to be unpacked. Good would be what tends to cause tendencies towards itself. Survival is one example, but not the only one, even if everything else will at least have to be consistent with that value. So e.g. not only is survival valued by intelligent creatures in all realistic conditions, but so is knowledge. So knowledge and survival are both good for all intelligent creatures. But since different creatures will produce their knowledge and survival in different ways, different things will be good for them in relation to these ends.
Any virulently self-reproducing meme would be another.
This would be a long discussion, but there’s some truth in that, and some falsehood.
They eat innocent, sentient beings who suffer and are terrified because of it. That’s wrong, no matter who does it.
It may not be un-baby-eater-ey, but it’s wrong.
Likewise, not eating babies is un-baby-eater-ey, no matter who does it. It might not be wrong, but it is un-baby-eater-ey.
We have two species who agree on the physical effects of certain actions. One species likes the effects of the action, and the other doesn’t. The difference between them is what they value.
“Right” just means “in harmony with this set of values.” Baby-eater-ey means “in harmony with this other set of values.”
There’s no contradiction in saying that something can be in harmony with one set of values and not in harmony with another set of values. Hence, there’s no contradiction in saying that eating babies is wrong, and is also baby-eater-ey. You can also note that the action is found compelling by one species and not compelling by another, and there is no contradiction in this, either.
What could “right” mean if we have “right according to these morals” AND “right according to these other, contradictory morals?”
I see one possibility: “right” is taken to mean “in harmony with any set of values.” Which, of course, makes it meaningless. Do you see another possibility?
I disagree that it is wrong for them to do that. And this is not just a disagreement about words: I disagree that Eliezer’s preferred outcome for the story is better than the other outcome.
“Right” is just another way of saying “good”, or anyway “reasonably judged to be good.” And good is the kind of thing which naturally results in desire. Note that I did not say it is “what is desired”, any more than you want to say that what someone values at a particular moment is necessarily right. I said it is what naturally results in desire. This definition is in fact very close to yours, except that I don’t make the whole universe revolve around human beings by saying that nothing is good except what is good for humans. And since different kinds of things naturally result in desire for different kinds of beings (e.g. humans and babyeaters), those different things are right for different kinds of beings.
That does not make “right” or “good” meaningless. It makes it relative to something. And this is an obvious fact about the meaning of the words; to speak of good is to speak of what is good for someone. This is not subjectivism, since it is an objective fact that some things are good for humans, and other things are good for other things.
Nor does this mean that right means “in harmony with any set of values.” It has to be in harmony with some real set of values, not an invented one, nor one that someone simply made up—for the same reasons that you do not allow human morals to be simply invented by a random individual.
Returning to the larger point, as I said, this is not just a disagreement about words, but about what is good. People maintaining your theory (like Eliezer) hope to optimize the universe for human values. I have no such hope, and I think it is a perverse idea in the first place.
No, moral rightness and wrongness have implications about rule-following and rule-breaking, reward and punishment, that moral goodness and badness don’t. Giving to charity is virtuous, but not giving to charity isn’t wrong and doesn’t deserve punishment.
Similarly, moral goodness and hedonic goodness are different.
I’m not sure what you’re saying. I would describe giving to charity as morally good without implying that not giving is morally evil.
I agree that moral goodness is different from hedonic goodness (which I assume means pleasure), but I would describe that by saying that pleasure is good in a certain way, but may or may not be good all things considered, while moral goodness means what is good all things considered.
I’m saying it’s a bad idea to collapse together the ideas of moral obligation, moral advisability and pleasure.
I agree.
I think I get it.
You’re saying that “right” just means “in harmony with any set of values held by sentient beings?”
So, baby-eating is right for baby-eaters, wrong for humans, and all either of those statements means is that they are/aren’t consistent with the fundamental values of the two species?
That is most of it. But again, I insist that the disagreement is real. Because Eliezer would want to stomp out baby-eater values from the cosmos. I would not.
Metaethically, I don’t see a disagreement between you and Eliezer. Ethically, I do.
Eliezer says he values babies not being eaten more than he values letting a sentient being eat babies just because it wants to.
You say you don’t, that’s all. Different values.
Are you serious, though? What if you had enough power to stop them from eating babies without having to kill them? Can we just give them fake babies?
I do not support “letting a sentient being eat babies just because it wants to” in general. So for example if there is a human who wants to eat babies, I would prevent that. But that is because it is bad for humans to eat babies. In the case of the babyeaters, it is by stipulation good for them.
That stipulation itself, by the way, is not really a reasonable one. Some species do sometimes eat babies, and it is possible that such a species could develop reason. But it is likely that the very process of developing reason would impede the eating of babies, and eating babies would become unusual, much as cannibalism is unusual in human societies. And just as cannibalism is wrong for humans, eating babies would become wrong for that species. But Eliezer makes the stipulation because, as I said, he believes that human values are intrinsically arbitrary, from an absolute standpoint.
So there is a metaethical disagreement. You could put it this way: I think that reality is fundamentally good, and therefore actually existing species will have fundamentally good values. Eliezer thinks that reality is fundamentally indifferent, and therefore actually existing species will have fundamentally indifferent values.
But given the stipulation, yes I am serious. And no I would not accept those solutions, unless those solutions were acceptable to them anyway—which would prove my point that eating babies was not actually good for them, and not actually a true part of their values.
When you say reality is fundamentally “good,” doesn’t that translate (in your terms) to just a tautology?
Aren’t you just saying that the desires of sentient beings are fundamentally “the desires of sentient beings?”
It sounds like you’re saying that you personally value sentient beings fulfilling their fundamental desires. Do you also value a sentient being fulfilling its fundamental desire to eliminate sentient beings that value sentient beings that fulfill their fundamental desires?
That is, if it wants to kill you because you value that, are you cool with that?
What do you do, in general, when values clash? You have some members of a species who want to eat their innocent, thinking children, and you have some innocent, thinking children who don’t want to be eaten. On what grounds do you side with the eaters?
“When you say reality is fundamentally “good,” doesn’t that translate (in your terms) to just a tautology?” Sort of, but not quite.
“Aren’t you just saying that the desires of sentient beings are fundamentally ‘the desires of sentient beings’?” No.
First of all, the word “tautology” is vague. I know it is a tautology to say that red is red. But is it a tautology to say that two is an even number? That’s not clear. But if a tautology means that the subject and predicate mean the same thing, then saying that two is even is definitely not a tautology, because they don’t mean the same thing. And in that way, “reality is fundamentally good” is not a tautology, because “reality” does not have the same meaning as “good.”
Still, if you say that reality is fundamentally something, and you are right, there must be something similar to a tautology there. Because if there is nothing even like a tautology, you will be saying something false, as if you were to say that reality is fundamentally blue. That’s not a tautology at all, but it’s also false. But if what you say is true, then “being real” and “being that way” must be very deeply intertwined, and most likely even the meaning will be very close. Otherwise how would it turn out that reality is fundamentally that way?
I have remarked before that we get the idea of desire from certain feelings, but what makes us call it desire instead of a different feeling is not the subjective quality of the feeling, but the objective fact that when we feel that way, we tend to do a particular thing. E.g. when we are hungry, we tend to go and find food and eat it. So because we notice that we do that, we call that feeling a desire for food. Now this implies that the most important thing about the word “desire” is that it is a tendency to do something, not the fact that it is also a feeling.
So if we said, “everyone does what they desire to do,” it would mean something like “everyone does what they tend to do.” That is not a tautology, because you can occasionally do something that you do not generally tend to do, but it is very close to a tautology.
We get the idea of “good” from the fact that we are tending to do various things, and we assume that those various things must have something in common that explains why we are tending to do all of them. We call that common thing “good.”
Now you could say, “the common thing is that you desire all of those things.” But that is not the way the human mind is working here, whether it is right or wrong. We already know that we desire them all. We want to know “why” we desire them all. And we explain that by saying that they all have something that we call “goodness.” We know it explains our desires, but that does not mean we know anything else about it.
This is really the exact point where I disagree with Eliezer. I think he believes that the common thing is the desire, and there is no other explanation except for random facts in the world that are responsible for our individual desires and for desires generally common in the human species. I think that the natural intuition that there is another explanation is correct. Now you might want to ask, “then what is good, apart from ‘what explains our desires’”?
And I have already started to explain this in other comments, although I did not go into detail. I noted above that the most important thing about “desire” is that it is a tendency to do something. So likewise the most important thing about the word “good” is that it explains the tendency to do something. Now consider this fact about things: things tend to exist. And existing things tend to continue to exist. Why do they tend to do those things? In the first place, it is obvious why things tend to exist. Because they are real, and reality involves existence. And tending to continue to exist might be less obvious, but we can see that at least the particular reality of the thing is responsible for that tendency: why do rocks tend to continue to exist? Part of the reality of the rock (in this case its structure) is responsible for that tendency. It tends to continue to exist because of the reality it has.
In other words, the thing that explains why things tend to do things is reality itself. So reality is fundamentally good, that is, the explanation for why things tend to do the things they do is fundamentally their reality. Note that this last sentence is not a tautology, in that it has a distinct subject and predicate.
Richard Dawkins says that reality looks just as we would expect if it is fundamentally indifferent. And I am pretty sure Eliezer agrees with him about this. But in fact it does not look the way I would expect if it were fundamentally indifferent: I would expect in that situation that things would not have any tendencies at all, so all things would be random.
I will answer the things about my values in another comment.
“It sounds like you’re saying that you personally value sentient beings fulfilling their fundamental desires.” Yes.
“Do you also value a sentient being fulfilling its fundamental desire to eliminate sentient beings that value sentient beings that fulfill their fundamental desires?”
No sentient being has, or can have (at least in a normal way) that desire as a “fundamental desire.” It should be obvious why such a value cannot evolve, if you consider the matter physically. Considered from my point of view, it cannot evolve precisely because it is an evil desire.
Also, it is important here that we are speaking of “fundamental” desires, in that a particular sentient being sometimes has a particular desire for something bad, due to some kind of mistake or bad situation. (E.g. a murderer has the desire to kill someone, but that desire is not fundamental.)
“You have some members of a species who want to eat their innocent, thinking children, and you have some innocent, thinking children who don’t want to be eaten. On what grounds do you side with the eaters?”
As I said in another comment, the babyeater situation is contrived, and most likely it is impossible for those values to evolve in reality. But stipulating that they do, then the desires of the babies are not fundamental, because if the baby grows up and learns more about reality, it will say, “it would have been right to eat me.”
I am pretty sure that people even in the original context brought attention to the fact that there are a great many ways that we treat children in which they do not want to be treated, to which no one at all objects (e.g. no one objects if you prevent a child from running out into the street, even if it wants to. And that is because the desires are not fundamental.)
Your objection is really something like, “but that desire must be fundamental because everything has the fundamental desire not to be eaten.” Perhaps. But as I said, that simply means that the situation is contrived and false.
The situation can happen with an intelligent species and a non-intelligent species, and has happened on earth—e.g. people kill and eat other animals. And although I do not object to people doing this, and I think it is morally right, I do not take “sides,” because I would change the values neither of the people nor of the animals. Both desires are good, and the behavior on both sides is right (although technically we should not be speaking of right and wrong in respect to non-rational creatures.)
It probably could not happen with two intelligent species, if only for economic reasons.
I don’t know. I wonder if some extra visualization would help.
Would you help catch the children so that their parents could eat them? If they pleaded with you, would you really think “if you were to live, you would one day agree this was good, therefore it is good, even though you don’t currently believe it to be?”
Why say the important desire is the one the child will one day have, instead of the one that the adult used to have?
I would certainly be less interested in aliens obtaining what is good for them, than in humans obtaining what is good for them. However, that said, the basic response (given Eliezer’s stipulations), is yes, I would, and yes I would really think that.
The adult has not only changed his desire, he has changed his mind as well, and he has done that through a normal process of growing up. So (again given Eliezer’s stipulations), it is just as reasonable to believe the adults here as it is to believe human adults. It is not a question of talking about whose desire is important, but whose opinion is correct.
....a word which means a number of things, which are capable of conflicting with each other. Moral good refers to things that are beneficial at the group level, but which individuals tend not to do without encouragement.
I think it is perfectly obvious that this usage of “should” and so on is wrong. A paperclipper believes that it should make paperclips, and it means exactly the same thing by “should” that I do when I say I should not murder.
And when I say it is obvious, I mean it is obvious in the same way that it is obvious that you are using the word “hat” wrong if you use it for a coat.
I think you’re using “should” to mean “feels compelled to do.”
Yes, a paperclipper feels compelled to make paperclips, and a human feels compelled to make sentient beings happy.
But when we say “should,” we don’t just mean “whatever anyone feels compelled to do.” We say “you might drug me to make me want to kill people, but I still shouldn’t do it.”
“Should” does not refer to compelling feelings, but rather to a certain set of states of beings that we value. To say we “still shouldn’t kill people,” means it “still isn’t in harmony with happy sentient beings (plus a million other values) to kill people.”
A paperclipper wouldn’t disagree that killing people isn’t in harmony with happy sentient beings (along with a million other values), it just wouldn’t care. In other words, it wouldn’t disagree that it shouldn’t kill people, it just doesn’t care about “should;” it cares about “clipperould.”
Likewise, we wouldn’t disagree that keeping people around instead of making them into paperclips is not in harmony with maximizing paperclips, we just wouldn’t care. We know we clipperould turn people into paperclips, we just don’t care about clipperould, we care about should.
No, I am not using “should” to mean “feels...” anything (in other words, feelings have nothing to do with it.) But you are right about compulsion. The word “ought” is, in theory, just the past tense of “owe”, and what is owed is something that needs to be paid. Saying that you ought to do something, just means that you need to do it. And should is the same; that you should do it just means that there is a need for it. And need is just necessity. So it does all have to do with compulsion.
But it is not compulsion of feelings, but of a goal. And to that degree, your idea is actually correct. But you are wrong to say that the specific goal sought affects the meaning of the word. “I should do it” means that I need to do it to attain my goal. It does not say what that goal is.
The Open Question argument is theoretically flawed because it relies too much on definitions (see this website’s articles on how definitions don’t work that way, more specifically http://lesswrong.com/lw/7tz/concepts_dont_work_that_way/).
The truth is that humans have an inherent instinct towards seeing “Good” as an objective thing, that corresponds to no reality. This includes an instinct towards doing what, thanks to both instinct and culture, humans see as “good”.
But although I am not a total supporter of Yudkowsky’s moral theory, he is right in that humans want to do good regardless of some “tablet in the sky”. Those who define terms try to resolve the problem of ethical questions by bypassing this instinct and referencing instead what humans actually want to do. This is contradictory to human instinct, hence the philosophical force of the Open Question argument, but it is the only way to have a coherent moral system.
The alternative, as far as I can tell, would be that ANY coherent formulation of morality whatsoever could be countered with “Is it good?”.
True but not very interesting. The interesting question is whether the operations of intuitive black boxes can be improved on.
The tablet argument is entirely misleading.
I don’t see what you mean by that. If the function of the ethical black box can be identified, then it can be improved on, in the way that physics improves on folk physics.
“ANY coherent formulation of morality whatsoever could be countered with “Is it good?”.
Exactly, if you think morality is different from goodness. That is why I said “morally right” just means “what it is good for me to do.”
That is not the same as what I want at the moment. Humans have an inherent instinct towards seeing good as objective rather than as “what I want” for the same reason that we have an instinct towards seeing dogs and cats as objectively distinct, instead of just saying “dog is what I call dog, and cat is what I call cat, and if I decide to start calling them all dogs, that will be fine too.”
Saying that good is just what I happen to want is just the same as saying that dog is whatever I happen to call dog. And both positions are equally ridiculous.
Moral goodness is clearly different from, e.g., hedonic goodness. Enjoying killing doesn’t mean you should kill.
It might be the case that humans have a mistaken view of the objectivity of morality, but it doesn’t follow from that that morality = hedonism. You can’t infer the correctness of one of N>2 theories from the wrongness of another.
It is possible to misuse the terms “dog” and “cat”, so the theory of semantics you are appealing to as the only possible alternative to fully objective semantics is wrong as well. Hint: intersubjectivity, convention.
So what’s the correct theory?
I don’t know why you are bringing up hedonism. It is bad to kill even if you enjoy it; so if morally good means what it is good to do, as I say, it will be morally bad to kill even if it is pleasant to someone.
The fully intersubjective but non-objective theory of meaning that you are suggesting is also false, since if everyone all at once agrees to call all dogs and cats “dogs”, that will not mean that suddenly there is no objective difference between the things that used to be called dogs and the things that used to be called cats.
The correct theory is this:
“Dog” means something that has what is in common to the things that are normally called dogs. Notice that this incorporates inter-subjectivity and convention, since “things that are normally called dogs” means normally called that by normal people. But it also includes an objective element, namely “what is in common.”
Now someone could say, “Well, what those things have in common is that people normally call them dogs. They don’t have anything else in common. So this theory reduces to the same thing: dogs are what people call dogs.”
But they would be wrong, since obviously there are plenty of other things that dogs have in common, and where they differ from cats, which do not depend on anyone calling them anything.
The correct theory of goodness is analogous:
“Good” means something that has what is in common to the things that are normally called good. Again, this incorporates the element of convention, in “normally called good,” but it also includes an objective element, in “what is in common.”
As before, someone might say that actually they have nothing in common except the name. But again that would be wrong.
More plausibly, though, someone might say that actually what they have in common is that people desire them. And in a sense this is Eliezer’s view. But this is also wrong. Let me explain why.
One difficulty is that people are rarely wrong about whether something is a dog, but they are often wrong about whether something is good. This makes no difference to the fact that the words have meanings, but it makes it easier to see what is “normally called a dog” than “normally called good.” If someone calls something good because they are mistaken about it in some way, for example, then you cannot include that as one of the things that has what is in common, just as if someone mistakenly calls a cat a dog in some case, you cannot include that cat in determining what dogs have in common.
Just as it is not too difficult to see that dogs have some objective features that distinguish them from cats, good things have an objective feature that distinguishes them from bad things: good things tend to result in things desiring them, and bad things tend to result in things avoiding them. Now that tendency is not complete and perfect, especially because of people making mistakes. So occasionally someone desires something bad, or avoids something good. But the general tendency is for good things to result in desire, and bad things to result in avoidance.
Now if you think reality is intrinsically indifferent, as Eliezer does, then you would say that there is no such tendency: people have a tendency to desire some things and avoid others. We then call the things we tend to desire, “good,” and the things we tend to avoid, “bad,” but actually the good things have nothing in common except that we are desiring them, and the bad things have nothing in common except that we are avoiding them.
As you pointed out yourself, people have an inherent instinct to deny this position. That is because people ask, “why do I desire these things, and not others?” And they want the answer to be, “Because these are good, and the others are not.” And that answer does not make sense, unless the good things have something objective in common in addition to the fact that I desire them.
The instinct is correct, and Eliezer is wrong, and we can prove that by finding some things that the good things have in common, other than desire. The way to do that is to note that desire itself is a particular case of something more general, namely a tendency to do something. And the tendencies to do something that we find have various properties. So for example consistency is one of them—without consistency, you cannot have a tendency at all. Rocks tend to fall, and it is very consistent that they go downwards. And note that without this consistency, there would be no tendency. Likewise, tendencies will always preserve the existence of something—not necessarily of the whole existence of the thing which immediately has the tendency, but of something. Thus inertia is a tendency to motion, and it tends to preserve the amount of that movement. And we could go on. But all of these things imply that “what we desire” has various properties in common besides the fact that we desire it. And this is what it is to be good.
So what is your theory? That the morally good is the morally good? Weren’t you criticising that approach?
“The morally good is the morally good” is vacuous.
“The morally good is the good” is subject to counterexamples.
That is only true if you equate “wrong” with not capturing all the information. But then we would always be wrong, since we never capture all the information. There are languages where “mouse” and “rat” are translated by the same word. Speakers of those languages are not systematically deluded.
That’s rather redundant, since the idea that new usages of “dog” should have something in common with established ones is already part of the norm.
I would say that you have the causal arrow the wrong way round there.
Also, you are, again, using “good” in a way that leads to obvious counterexamples of things that are desired or desirable but not morally good.
If you could work out the difference between the mistakes and the norm, you would have a non-vacuous theory of what “morally” means in “morally good”. However, I don’t know if you are even trying to do that, since you seem wedded to the idea that the morally good is the good, period.
If you want the word “good” to do all the work in your theory of moral good, you would have that problem. If you allow the word “moral” to do some work, you don’t. The morally good has features in common, such as being co-operative and prosocial, that the unqualified “good” does not, and that is still the case if the good is not an objective feature of the world.
You don’t need objectivity, intersubjectivity is enough.
Also, I did not say that people would be wrong if they started calling all cats and dogs “dogs.” I said that this would not mean that there were not objective differences between the things that used to be called dogs, and the things that used to be called cats. In fact, the only reason we are able to call some dogs and some cats is that there are objective differences that allow us to distinguish them.
Not all semantics is based on objective differences. There’s no objective feature that makes someone a senator, or a particular piece of paper money... we just have social conventions, coupled with memorising the members of the set “money” or “senator”. So if you are arguing that “good” must have objective characteristics because all meaningful words must denote something objective, that doesn’t work. But it is not clear you are arguing that way.
Objective differences doesn’t have to mean physical differences of the thing at the time. It is an objective fact that certain people have won elections and that others have not, for example, even if it doesn’t change them physically.
In this sense, it is true that every meaningful distinction is based on something objective, since otherwise you would not be able to make the distinction in the first place. You make the distinction by noticing that some fact is true in one case which isn’t true in the other. Or even if you are wrong, then you think that something is true in one case and not in the other, which means that it is an objective fact that you think the thing in one case and not in the other.
No, it’s intersubjective. Winning and elections aren’t in the laws of physics. You can’t infer objective from not-subjective.
You need to be more granular about that. It is true that you can’t recognise novel members of an open-ended category (cats and dogs) except by objective features, and you can’t do that because you can’t memorise all the members of such a set. But you can memorise all the members of the set of Senators. So objectivity is not a universal rule.
I think you might be arguing about words, in relation to whether the election is an objective fact. I don’t see what the laws of physics have to do with it. There is no rule that objective facts have to be part of the laws of physics. It is an objective fact that I am sitting in a chair right now, but the laws of physics say nothing about chairs (or about me, for that matter).
Even if you memorize the set of Senators, you cannot recognize them without them being different from other people.
I do not know why you keep saying that I am saying that morally good is the same as good.
According to me (and this is what I think they are, not an argument) : “Morally good” is “what is good to do.”
So morally good is not the same as good. Good is general, and “Good TO DO” is morally good. So morally good is a kind of goodness, just as everyone believes.
Not helping. Good to do can be hedonistically good to do, selfishly good to do, etc. If I sacrifice the lives of 100 people to save my life, that is a good thing to do from some points of view, but not what most people would call morally good.
Saying that a thing is “hedonistically good to do” means that it is good to some extent. It does not tell us whether it is good to do, period. If it is good to do, period, it is morally good. If there are other considerations more important than the pleasure, it won’t be good to do, period, and so will be morally wrong.
It’s not helpful to define the morally good as the “good, period”, without an explanation of “good, period”. You are defining a more precise term using a less precise one, which isn’t the way to go.
Suppose there is a blue house with a red spot on it. You ask, “Is that a red house?” Someone answers, “Well, there is a red spot on it.”
It is no different if there is something bad that you could do which would be pleasant. You ask, “Is that something good to do?” Someone answers, “Well, it is hedonistically good.”
But I don’t care if there is a red spot, or if it is pleasant. I am asking if the house is red, and if it would be good to do the thing.
Those are answered in similar ways: the house is red if it is red enough that a reasonable person would say, “yes, the house is red.” And the action is morally good if a reasonable person would say, “yes, it is good to do it.”
I think that’s a fairly misleading analogy. For instance, a house’s being red is not exclusive of another one’s... but my goods can conflict with another person’s.
Survival is good, you say. If I am in a position to ensure my survival by sacrificing Smith, is it morally good to do so? After all Smith’s survival is just as Good as mine.
As I said, we are asking whether it is good to do something overall. So there is no definite answer to the question about Smith. In some cases it will be good to do that, and in some cases not, depending on the situation and what exactly you mean by sacrificing Smith.
So what you call goodness cannot be equated with moral goodness, because moral goodness does need to put an overall value on an act, does need to say that an act is permitted, forbidden or obligatory.
I don’t understand what you are trying to say here. Of course in a particular situation it will be good, and thus morally right, to sacrifice Smith, and in other particular situations it will not be. I just said that you cannot say in advance, and I see no reason why moral goodness would have to judge these situations in advance without taking everything into account.
Morality binds and blinds. People derive moral claims from emotional and intuitive notions. It can feel good and moral to do immoral things. Objective morality has to be tied to evidence about what human wellbeing really is, not to moral intuitions that are adaptations to the benefit of one’s ingroup, or to post hoc thought experiments about knowledge.