All the examples of privileged questions given are disguised manifestations of moral uncertainty
Should gay marriage be legal?
is the struggle between a morality that favors equality, and one that has a certain set of values surrounding purity and/or respect for religious authority.
Should Congress pass stricter gun control laws?
is the struggle between individual autonomy vs. harm avoidance
Should immigration policy be tightened or relaxed?
is the struggle between in-group preference and lack thereof
The questions themselves are unimportant...but the deeper moral undercurrent which causes those questions to be privileged is important. If someone is against gay marriage and stem cells, how do you expect them to react to trans-humanist memes, life extension, and the AI?
When society makes a decision about the morality of gay marriage and stem cells, they have also gone part of the way to making a decision about AI, since a lot of the same moral circuitry is going to be involved.
Side comment: Can anyone find an example of a “privileged” question which isn’t a disguised moral struggle?
How to optimize the society for more long-term utility for people of any sexual orientation?
You can’t even ask this question until you arrive at utilitarianism as a moral philosophy. A person with moral objections against homosexual marriage isn’t a utilitarian by definition, since they care about additional things (purity, respect for authority, etc.) which have nothing to do with increasing everyone’s utility.
When you ask “how to maximize utility”, you have already assumed that the moral struggle between harm/care and purity has been settled in favor of harm/care. Otherwise, you would be asking about how to maximize utility while also keeping people from “defiling” themselves.
As mare-of-night reminded us elsewhere in-thread, even Clippy is a utilitarian. There’s nothing special about paperclips or purity that prevents them from being included in someone’s definition of utility.
On the other hand, even if your post boils down to “my definition of utility is the correct global definition”, that’s no more wrong than Viliam_Bur’s treating “utility for people” as a well-defined term without billions of undetermined coefficients.
How to optimize the society for more long-term utility for people of any sexual orientation?
Under classical preference utilitarianism, you try to maximize everyone’s utility and conveniently ignore the problems of putting two utility functions into one equation, and the problems you mention.
Continuing to conveniently ignore that problem, I implicitly assume that we agree that the positive utility generated by removing restrictions to homosexuality outweighs the negative utility generated by violating purity boundaries, when applied over the entire population.
We still include the purity thing in the calculations of course. For example, I could in principle argue that the negative utility from allowing sex in public probably outweighs the positive utility generated from the removal of the restriction, hence our public obscenity laws.
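(A minimal sketch of the aggregation being waved at here, assuming per-person utility changes are commensurable and simply additive; the group sizes and numbers below are invented purely for illustration, not taken from the discussion.)

```python
# Toy preference-utilitarian aggregation: sum (group size * per-person utility change).
# All figures are hypothetical; the point is only the shape of the calculation.

def net_utility(groups):
    """Return the summed utility change across (population, per_person_delta) groups."""
    return sum(size * delta for size, delta in groups)

# Hypothetical example: lifting a restriction on homosexuality.
lifting_restriction = [
    (50_000, +10.0),    # people directly affected: large gain each
    (500_000, -0.2),    # people whose purity intuitions are violated: small loss each
]

print(net_utility(lifting_restriction))  # 400000.0 > 0, so this toy model favours lifting it
```

The same shape of calculation is what lets the purity term stay in the sum, as in the public-obscenity example below, where the negative term is assumed to dominate.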
Continuing to conveniently ignore that problem, I implicitly assume that we agree that the positive utility generated by removing restrictions to homosexuality outweighs the negative utility generated by violating purity boundaries, when applied over the entire population.
That ignores the possibility that there is a reason those purity boundaries were there in the first place.
I’ve seen this before, but I can’t say I find it a compelling argument - if an institution was placed for good reason, then at least someone, somewhere would remember why it was placed and could give a compelling argument. If no one can do so, the risk of some hidden drawback which the original lawmaker could have foreseen seems too small to count.
I mean, this argument does apply when you are acting alone, on some question that neither you nor anyone you come into contact with knows anything about...but it doesn’t apply to something like this.
Utilitarians who choose to draw the line around the whole of the human race do so because they believe they ought to value the whole of the human race.
Utilitarians who choose to draw the line around the whole of the human race do so because they believe they ought to value the whole of the human race.
Is that a deontological standard?
The reason I asked is that, in principle, you could have utilitarianism based on some group smaller than the human race.
For some people, probably. Let’s take a step back.
Morality comes from the “heart”. It’s made of feelings. Utilitarianism (and much of what falls under moral philosophy) is one of many attempts to make a consistent set of rules to describe inconsistent feelings. The purpose of making a consistent set of rules is 1) to convince others of the morality of an action and 2) because we morally feel aversion to hypocrisy and crave moral consistency.
Keeping those aims in mind, drawing the line across all humans, sentient beings, etc has the following benefits:
1) The creators might feel that the equation describes the way they feel better when they factor in all humans. They might hold it as a deontological standard to care about all humans, or they might feel a sense of fairness, or they might have empathy for everyone, etc.
2) Drawing the line across all humans allows you to use the utilitarian standard to negotiate compromises with any arbitrary human you come across. Many humans, having the feelings described in [1], will instinctively accept utilitarianism as a valid way to think about things.
There are plenty of things that are problematic here, but that is why utilitarianism defaults to include the whole human race. As with all things moral, that’s just an arbitrary choice on our part, and we could easily have done it a different way. We can restrict it to a smaller subset of humans, we can broaden it to non-human things which seem agent-like enough to be worth describing with a utility function, etc. Many utilitarians include animals, for example.
Morality comes from the “heart”. It’s made of feelings.
People use feelings/System1 to do morality. That doesn’t make it an oracle. Thinking might be more accurate.
As with all things moral, that’s just an arbitrary choice on our part
If you don’t know how to solve a problem, you guess. But that doesn’t mean anything goes. Would anyone include rocks in the Circle? Probably not, since they don’t have feelings, values, or preferences. So there seem to be some constraints.
People use feelings/System1 to do morality. That doesn’t make it an oracle. Thinking might be more accurate.
Accurate? How can you speak of a moral preference being “accurate” or not? Moral preferences simply are. There are some meta-ethics sequences here that explain the arbitrariness of our moral preferences more eloquently, and here is a fun story that tangentially illustrates it.
Would anyone include rocks in the Circle? Probably not, since they don’t have feelings, values, or preferences. So there seem to be some constraints.
I bet I can find you someone who would say that burning the Quran or the Bible is inherently immoral.
Quite a few of them no doubt. Of course, the overwhelming majority of people who would say that burning the Quran or the Bible is inherently immoral would also say that it’s immoral by virtue of the preferences of an entity that, on their view, is in fact capable of having preferences.
Of course, I’m sure I could find someone who would say rocks have feelings, values, and preferences.
“the overwhelming majority of people who would say that burning the Quran or the Bible is inherently immoral would also say that it’s immoral by virtue of the preferences of an entity that, on their view, is in fact capable of having preferences.”
Conversationalists will want to preserve ecosystems, even where those ecosystems are already well studied by science, even when the ecosystem contains no sentient beings (plants, fungi, microbes), even when destroying the ecosystem has many advantages for humans, because they think the ecosystem is intrinsically valuable independently of the effect on beings with feelings, values, and preferences.
Some looser examples...
Pro-life advocates say that beings without preferences have rights by virtue of future preferences. Not all of them are religious.
Hindus treat books (all books in general) with reverence because they are vehicles of learning, despite not necessarily believing in deities.
Many social conservatives report being unwilling to slap their fathers, even with permission, as part of a play.
The classic trolley problem implies that many people’s moral intuitions hinge on the act of murder being wrong, rather than on the effect the death has on values, feelings, and preferences.
Of course, if you are a moral realist, you can just say that these people’s intuitions are “wrong”...but the point is that “feelings, values, and preferences”—in a word, utilitarianism—isn’t the only guiding moral principle that humans care about.
And yes, you could argue that this is all a deity’s preferences...but why did they decide that those were in fact the deity’s preferences? Doesn’t it hint that they might have an underlying feeling of those preferences in themselves, that they would project those wishes on a deity?
Conversationalists will want to preserve ecosystems
No doubt some of them will, but I suspect you meant “conservationists.” And yes, I agree that some of those will assign intrinsic value to “nature” in various forms, or at least claim to, as you describe.
Pro-life advocates say that beings without preferences have rights by virtue of future preferences. Not all of them are religious.
Some of them do, yes. Indeed, I suspect the ones who say that are disproportionately non-religious.
why did they decide that those were in fact the deity’s preferences?
A fine question.
Doesn’t it hint that they might have an underlying feeling of those preferences in themselves, that they would project those wishes on a deity?
Quite a few of them no doubt. Of course, the overwhelming majority of people who would say that burning the Quran or the Bible is inherently immoral would also say that it’s immoral by virtue of the preferences of an entity that, on their view, is in fact capable of having preferences.
And, again, if destroying entity X is wrong because some other entity Y says so, that is not inherent.
I bet I can find you someone who would say that burning the Quran or the Bible is inherently immoral.
We sometimes extend morality to inanimate objects, but only ones that mean something to us, such as works of art and religious artefacts. That isn’t actually inherent because of the “to us” clause, although some people might claim that it is.
There are some meta-ethics sequences here that explain the arbitrariness of our moral preferences more eloquently, and here is a fun story that tangentially illustrates it
Pebble sorting is a preference. That’s it. I don’t have to believe it is a moral preference or a correct moral preference.
How can you speak of a moral preference being “accurate” or not?
Moral objectivism isn’t obviously wrong, and system 2 isn’t obviously the wrong way to realise moral truths.
IOW, moral subjectivism isn’t obviously true.
NB: Objectivism isn’t universalism.
Moral preferences simply are.
Beliefs simply are. And some are true and some are not. You seem to be assuming the non-existence of anything that could verify or disprove a moral preference in order to prove more or less the same thing.
We sometimes extend morality to inanimate objects, but only ones that mean something to us, such as works of art and religious artefacts. That isn’t actually inherent because of the “to us” clause, although some people might claim that it is.
I would say that the “to us” clause actually applies to everything, and that nothing is “inherent”, as you put it. Pebble sorting means something to the pebble sorters. Humans mean something to me. The entirety of morality boils down to what is important “to us”.
To me, moral objectivism is obviously wrong and subjectivism is obviously true, and this is embedded in my definition of morality. I’m actually unsure how anyone could think of it in any other coherent way.
I think it’s time to unpack “morality”. I think morality is feelings produced in the human mind about how people ought to act. That is, I think “murder is bad” is in some ways analogous to “Brussels sprouts are gross”. From this definition, it follows that I see moral objectivism as obviously wrong—akin to saying, “no man, Brussels sprouts are objectively, inherently gross! In the same way that the sky is objectively blue! / In the same way that tautologies are true!” (Actually, replace blue with the appropriate wavelengths to avoid arguments about perception)
What do you think “morality” is, and where do you suppose it comes from?
I think morality is behaving so as to take into account the values and preferences of others as well as one’s own. You can succeed or fail in that, hence “accurate”.
Morality may manifest in the form of a feeling for many people, but not for everybody and not all feelings are equal. So I don’t think that is inherent, or definitional.
I don’t think the sprout analogy works, because your feeling that you don’t like sprouts doesn’t seriously affect others, but the psychopath’s fondness for murder does.
The feelings that are relevant to morality are the empathic ones, not personal preferences. That is a clue that morality is about behaving so as to take into account the values and preferences of others as well as one’s own.
If you think morality is the same as a personal preference...what makes it morality? Why don’t we just have one word and one way of thinking?
what makes it morality? Why don’t we just have one word and one way of thinking?
Because they feel different to us from the inside—for the same reason that we separate “thinking” and “feeling” even though in the grand scheme of things they are both ways to influence behavior.
Mathematical statements aren’t empirical facts either, but convergence is uncontroversial there.
In Math, empirical evidence is replaced by axioms. In Science, the axioms are the empirical evidence.
The point is that all rational agents will converge upon mathematical statements, and will not converge upon moral statements. Do you disagree?
Are you quite sure that morality isn’t implicit in the logic of how-a-society-of-entities-with-varying-preferences-manages-to-rub-along?
I think morality is behaving so as to take into account the values and preferences of others as well as one’s own
I’m very, very sure that my morality doesn’t work that way.
Imagine you lived on a world with two major factions, A and B.
A has a population of 999999. B has a population of 1000.
Every individual in A has a very mild preference for horrifically torturing B, and the motivation is sadism and hatred. The torture and slow murder of B is a bonding activity for A, and the shared hatred keeps the society cohesive.
Every individual in B has a strong, strong preference not to be tortured, but it doesn’t even begin to outweigh the collective preferences of A.
From the standpoint of preference utilitarianism, this scenario is analogous to Torture vs. Dust Specks. Preference utilitarians choose torture, and a good case could be made even under good old human morality to choose torture as the lesser of two evils. This is a problem in which I’d give serious weight to choosing torture.
Preference utilitarian agents would let A torture B—“shut up and multiply”. However, from the standpoint of my human morality, this scenario is very different from torture vs. dust specks, and I wouldn’t even waste a fraction of a second in deciding what is right in this scenario. Torture for the sake of malice is wrong (to me) and it really doesn’t matter what everyone else’s preferences are—if it’s in my power, I’m not letting A torture B!
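(A toy version of the “shut up and multiply” arithmetic for this scenario, assuming, purely for illustration, that preference intensities can be put on a single numeric scale; the intensities below are invented, not argued for.)

```python
# Hypothetical preference intensities for the A/B scenario above.
pop_a, pref_a = 999_999, 1.0      # each member of A: mild preference for the torture
pop_b, pref_b = 1_000, -500.0     # each member of B: strong preference not to be tortured

aggregate = pop_a * pref_a + pop_b * pref_b
print(aggregate)  # 499999.0 > 0: naive preference aggregation endorses letting A torture B

# However the intensities are chosen, a large enough pop_a eventually dominates,
# whereas the morality described above refuses the trade at any ratio.
```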
Are you quite sure that morality isn’t implicit in the logic of how-a-society-of-entities-with-varying-preferences-manages-to-rub-along?
Morality evolved as a function of how it benefited single alleles, not societies. Under different conditions, it could have evolved differently. You can’t generalize from the way morality works in humans to the way it might work in all possible societies of entities.
Mathematical statements aren’t empirical facts either, but convergence is uncontroversial there.
In Math, empirical evidence is replaced by axioms.
The point is that all rational agents will converge upon mathematical statements, and will not converge upon moral statements. Do you disagree?
Agreement isn’t important: arguments are important. You apparently made the argument that convergence on morality isn’t possible because it would require empirically detectable moral objects. I made the counterargument that convergence on morality could work like convergence on mathematical truth. So it seems that convergence on morality could happen, since there is a way it could work.
I think morality is behaving so as to take into account the values and preferences of others as well as one’s own
I’m very, very sure that my morality doesn’t work that way. [Argument against utilitarianism].
OK. Utilitarianism sucks. That doesn’t mean other objective approaches don’t work—you could be a deontologist. And it doesn’t mean subjectivism does work.
Morality evolved as a function of how it benefited single alleles, not societies. Under different conditions, it could have evolved differently. You can’t generalize from the way morality works in humans to the way it might work in all possible societies of entities
Says who? We can generalise language, maths and physics beyond our instinctive System I understandings. And we have.
I think morality is behaving so as to take into account the values and preferences of others as well as one’s own.
is the reason why I said that my morality isn’t preference utilitarian. If morality is “taking into account the values and preferences of others as well as your own”, then preference utilitarianism seems to be the default way to do that.
Alright...so if I’m understanding correctly, you are saying that moral facts exist and people can converge upon them independently, in the same way that people will converge on mathematical facts. And I’m saying we can’t, and that morality is a preference linked to emotions. Neither of us has really done anything but restate our positions here. My position seems more or less inherent in my definition of morality, and I think you understand my position...but I still don’t understand yours.
Can I have a rudimentary definition of morality, an example of a moral fact, and a process by which two agents can converge upon it?
Can you give me a method of evaluating a moral fact which doesn’t at some point refer to our instincts? Do moral facts necessarily have to conform to our instincts? As in, if I proved a moral fact to you, but your instincts said it was wrong, would you still accept that it was right?
is the reason why I said that my morality isn’t preference utilitarian. If morality is “taking into account the values and preferences of others as well as your own”, then preference utilitarianism seems to be the default way to do that.
For lexicographers, the default is apparently deontology:
“conformity to the rules of right conduct”
“Principles concerning the distinction between right and wrong or good and bad behavior.”
etc.
Can I have a rudimentary definition of morality, an example of a moral fact, and a process by which two agents can converge upon it?
1) A means by which communities of entities with preferences act in accordance with all their preferences.
2) Murder is wrong.
3) Since agents do not wish to be murdered, it is in their interests to agree to refrain from murder under an arrangement in which other agents agree to refrain from removing them. (A toy payoff sketch of this point follows below.)
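(A minimal game-theoretic sketch of point 3, assuming a symmetric two-agent “murder game” with invented payoffs: a small gain from murdering, a large loss from being murdered. The numbers are hypothetical and only meant to show why the arrangement can be in both agents’ interests.)

```python
GAIN_FROM_MURDERING = 1         # hypothetical small benefit to the murderer
LOSS_FROM_BEING_MURDERED = -10  # hypothetical large cost to the victim

def payoff(my_action, their_action):
    """Payoff to 'me' given each agent's choice: 'murder' or 'refrain'."""
    total = 0
    if my_action == "murder":
        total += GAIN_FROM_MURDERING
    if their_action == "murder":
        total += LOSS_FROM_BEING_MURDERED
    return total

for mine in ("murder", "refrain"):
    for theirs in ("murder", "refrain"):
        print(f"{mine:7} vs {theirs:7}: {payoff(mine, theirs)}")

# Mutual refraining (0 each) beats mutual murdering (-9 each), which is why a
# refrain-for-refrain arrangement can be in both agents' interests. Note that
# murdering still strictly dominates for each agent taken alone, so the
# arrangement needs something (enforcement, repetition, reputation) to stay
# stable, which is roughly what the objections below push on.
```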
Can you give me a method of evaluating a moral fact which doesn’t at some point refer to our instincts?
I don’t see why I need to. Utilitarianism and deontology take preferences and intuitions into account. Your argument against utilitarianism is that it comes to conclusions which go against your instincts. That isn’t just an assumption that morality has something to do with instincts, it is a further assumption that your instincts trump all further considerations. It is an assumption of subjectivism.
You are saying objectivism is false because subjectivism is true.
If utilitarianism worked, it would take intuitions and preferences into account, and arrive at some arrangement that minimises the number of people who don’t get their instincts or preferences satisfied. Some people have to lose. You have decided that is unacceptable because you have decided that you must not lose. But utilitarianism still works in the sense that a set of subjective preferences can be treated as objective facts, and aggregated together. There is nothing to stop different utilitarians (of the same variety) converging on a decision. U-ism “works” in that sense. Your objection is not that convergence is not possible, but that what is converged upon is not moral, because your instincts say not.
But you don’t have any argument beyond an assumption that morality just is what your instincts say. The other side of the argument doesn’t have to deny the instinctive or subjective aspect of morality, it only needs to deny that your instincts are supreme. And it can argue that since morality is about the regulation of conduct amongst groups, the very notion of subjective morality is incoherent (parallel: language is all about communication, so a language that is only understood by one person is a paradox).
As in, if I proved a moral fact to you, but your instincts said it was wrong, would you still accept that it was right?
Maybe. Almost everybody who has had their mind changed about sexual conduct had overridden an instinct.
3 Since agents do not wish to be murdered, it is in their interests to agree to refrain from murder under an arrangement in which other agents agree to refrain from removing them.
So there are several things I don’t like about this..
0) It’s not in their interests to play the cooperative strategy if they are more powerful, since the other agent can’t remove them.
1) It’s not a given that all agents do not wish to be murdered. It’s only luck that we wish not to die. Sentient beings could just as easily have come out of insects who allow themselves to be eaten by mates, or by their offspring.
2) So you sidestep this, and say that this only applies to beings that do not wish to be murdered. Well now, this is utilitarianism. You’d essentially be saying that all agents want their preferences fulfilled, therefore we should all agree to fulfill each other’s preferences.
You have decided that is unacceptable because you have decided that you must not lose.
Essentially yes. But to rephrase: I know that the behavior of all agents (including myself) will work to bring about the agent’s preferences to the best of the agent’s ability, and this is true by definition of what a “preference” is.
Maybe. Almost everybody who has had their mind changed about sexual conduct had overridden an instinct.
I’m not sure I follow what you mean by this. My ideas about sexual conduct are in line with my instincts. A highly religious person’s ideas about sexual conduct are in line with the instincts that society drilled into them. If I converted that person into sex-positivism, they would shed the societal conditioning and their morality and feelings would change. Who is not in alignment with their instincts?
(Instincts here means feelings with no rational basis, rather than genetically programmed or reflexive behaviors)
0) It’s not in their interests to play the cooperative strategy if they are more powerful, since the other agent can’t remove them.
I am not sure what the argument is here. The objectivist claim is not that every entity actually will be moral in practice, and it’s not the claim that every agent will be interested in settling moral questions: it’s just the claim that agents who are interested in settling moral questions, and have the same set of facts available (i.e. live in the same society) will be able to converge. (Which is as objective as anything else. The uncontentious claim that mathematics is objective doesn’t imply that everyone is a mathematician, or knows all mathematical truths.)
It’s not a given that all agents do not wish to be murdered. It’s only luck that we wish not to die. Sentient beings could just as easily have come out of insects who allow themselves to be eaten by mates, or by their offspring.
I have described morality as an arrangement within a society. Alien societies might have different morality to go with their different biology. That is not in favour of subjectivism, because subjectivism requires morality to vary with personal preference, not objective facts about biology. Objectivism does not mean universalism. It means agents, given the same facts, and the willingness to draw moral conclusions from them, will converge. It doesn’t mean the facts never vary; if they do, so will the conclusions.
You’d essentially be saying that all agents want their preferences fulfilled, therefore we should all agree to fulfill each other’s preferences.
All agents want their preferences fulfilled, and what “should” means is being in accordance with some arrangement for resolving the resulting conflicts, whether utilitarian, deontological, or something else.
My ideas about sexual conduct are in line with my instincts. A highly religious person’s ideas about sexual conduct are in line with the instincts that society drilled into them. If I converted that person into sex-positivism, they would shed the societal conditioning and their morality and feelings would change. Who is not in alignment with their instincts?
The convertee. In my experience, people are generally converted by arguments...reasoning...system 2.
So when people are converted, they go from Instinct to Reason.
But perhaps you know of some process by which subjective feelings are transferred directly, without the involvement of system 2.
it’s just the claim that agents who are interested in settling moral questions, and have the same set of facts available (i.e. live in the same society) will be able to converge.
It means agents, given the same facts, and the willingness to draw moral conclusions from them, will converge. It doesn’t mean the facts never vary; if they do, so will the conclusions.
But don’t you see what you’re doing here? You are defining a set of moral claims M, and then saying that any agents who are interested in M will converge on M!
The qualifier “agents who are interested in moral questions” restricts the set of agents to those who already agree with you about what morality is. Obviously, if we all start from the same moral axioms, we’ll converge onto the same moral postulates—the point is that the moral axioms are arbitrarily set by the user’s preferences.
All agents want their preferences fulfilled, and what “should” means is being in accordance with some arrangement for resolving the resulting conflicts, whether utilitarian, deontological, or something else.
Wait, so you are defining morality as a system of conflict resolution between agents? I actually do like that definition...even though it doesn’t imply convergence.
Then Utilitarianism is the solution that all agents should maximize preferences, deontology is the solution that there exists a set of rules to follow when arbitrating conflict, etc.
Counterexample—Imagine a person who isn’t religious, who also believes incest between consenting adults is wrong (even for old infertile people, even if no one else gets to know about it). There is no conflict between the two agents involved—would you say that this person is not exhibiting a moral preference, but something else entirely?
But perhaps you know of some process by which subjective feelings are transferred directly, without the involvement of system 2.
The vast majority of people are not convinced by argument, but by life experience. For most people, all the moral rhetoric in the world isn’t as effective as a picture of two gay men crying with happiness as they get married.
That’s beside the point, though—you are right that it is possible (though difficult) to alter someone’s moral stance through argument alone. However, “System 1” and “System 2” share a brain. You can influence “system 1” via “system 2”—reasoning can affect feelings, and vice versa. I can use logical arguments to change someone’s feelings on moral issues. That doesn’t change the fact that the moral attitude stems from the feelings.
If you can establish a shared set of “moral axioms” with someone, you can convince them of the rightness or wrongness of something with logic alone. This might make it seem like any two agents can converge on morality—but just because most humans have certain moral preferences hardwired into them doesn’t mean every agent has the same set of preferences. I have some moral axioms, you have some moral axioms, and we can use shared moral axioms to convince each other of things… but we won’t be able to convince any agent which has moral axioms that do not match with ours.
But don’t you see what you’re doing here? You are defining a set of moral claims M, and then saying that any agents who are interested in M will converge on M!
I haven’t defined a set of moral claims. You asked me for an example of one claim. I can argue the point without specifying any moral conclusions. The facts I mentioned as the input to the process are not moral per se.
The qualifier “agents who are interested in moral questions” restricts the set of agents to those who already agree with you about what morality is.
In a sense, yes. But only in the sense that “agents who are interested in mathematical questions” restricts the set of agents to those who are interested in “mathematics” as I understand it. On the other hand, nothing is implied about the set of object level claims moral philosophers would converge on.
Obviously, if we all start from the same moral axioms, we’ll converge onto the same moral postulates—the point is that the moral axioms are arbitrarily set by the user’s preferences.
I don’t have to accept that, because I am not using a subjective criterion for “morality”. If you have a preference for Tutti Frutti, that is not a moral preference, because it does not affect anybody else. The definition of morality I am using is not based on any personal preference of mine, it’s based on a recognition that morality has a job to do.
Wait, so you are defining morality as a system of conflict resolution between agents? I actually do like that definition...even though it doesn’t imply convergence.
If no convergence takes place, how can you have an implementable system? People are either imprisoned or not, they cannot be imprisoned for some agents but not for others.
Counterexample—Imagine a person who isn’t religious, who also believes incest between consenting adults is wrong (even for old infertile people, even if no one else gets to know about it). There is no conflict between the two agents involved—would you say that this person is not exhibiting a moral preference, but something else entirely?
You are tacitly assuming that no action will be taken on the basis of feelings of wrongness, that nobody ever campaigns to ban things they don’t like.
That doesn’t change the fact that the moral attitude stems from the feelings.
If system 1 was influenced by system 2 , then what stems from system 1 stemmed from system 2, and so on. You are drawing an arbitrary line.
If you can establish a shared set of “moral axioms” with someone, you can convince them of the rightness or wrongness of something with logic alone.
If moral axioms are completely separate from everything else, then you would need to change their axioms. If they are not, then not. For instance, you can argue that some moral attitudes someone has are inconsistent with others. Consistency is not a purely moral criterion.
I have some moral axioms, you have some moral axioms, and we can use shared moral axioms to convince each other of things… but we won’t be able to convince any agent which has moral axioms that do not match with ours.
If “moral axioms” overlap with rational axioms, and if moral axioms are constrained by the functional role of morality, there is plenty of scope for rational agents to converge.
Does it follow, then, that rational agents will always be “moral”? Does it mean that the most rational choice for maximizing any set of preferences, is also in line with “morality”?
That would put morality into decision theory, which would be kind of nice.
But I can’t think how an agent whose utility function simply read “Commit Murder” could possibly make a choice that was both moral (the way morality is traditionally defined) and rational.
But I can’t think how an agent whose utility function simply read “Commit Murder” could possibly make a choice that was both moral (the way morality is traditionally defined) and rational.
People who believe in the Convergence thesis tend not to believe in Orthogonality thesis. They tend to use the traditional definition of rationality
“In its primary sense, rationality is a normative concept that philosophers have generally tried to characterize in such a way that, for any action, belief, or desire, if it is rational we ought to choose it” (WP)
In detail, they tend to see rationalists as having a preference for objectivity, consistency and non-arbitrariness, including in their preferences. Thus they would tend to see Clippies as having highly rational thought and highly irrational (because arbitrary) desires. Likewise they would see a murderer who does not want to be murdered as inconsistent, and therefore irrational.
Another way of looking at it is that they would see highly intelligent and rational agents as climbing the Maslow hierarchy.
Does it follow, then, that rational agents will always be “moral”? Does it mean that the most rational choice for maximizing any set of preferences, is also in line with “morality”?
Depends how rational. For a Convergence theorist, an ideal, supremely rational agent will have rational desires and preferences. A less ideal one might fall short as a result of being non-ideal.
Thanks for that term. This makes things clearer. Based on what you are arguing, does that make you a convergence theorist then? (Or at least, you seem to be defending convergence theory here, even if you don’t wholeheartedly accept it)
I dunno...I just find the orthogonality thesis as intuitively obvious, and I’m having real trouble grasping what exactly the thought process that leads one to become a convergence theorist might be. I’m hoping you can show me what that thought process is.
Thus to deny the Orthogonality thesis is to assert that there is a goal system G, such that...There cannot exist any efficient real-world algorithm with goal G.
Now, I agree that there exist some G such that this is the case, but I don’t think this set would have anything to do with morality as humans understand it.
You seem to be making the argument that one of the characteristics that would automatically qualify something as a candidate for G is immorality.
This makes no intuitive sense. Why couldn’t you make an efficient real-world algorithm to destroy all life forms? It seems like, in the absence of some serious mathematical arguments to the contrary, we ought to dismiss claims that efficient real-world algorithms for murder are impossible offhand.
Thanks for that term. This makes things clearer. Based on what you are arguing, does that make you a convergence theorist then?
Why is that important?
I dunno...I just find the orthogonality thesis as intuitively obvious, and I’m having real trouble grasping what exactly the thought process that leads one to become a convergence theorist might be. I’m hoping you can show me what that thought process is.
I think I can see where the intuitive appeal comes from, and I think I can see where the errors are too.
“Thus to deny the Orthogonality thesis is to assert that there is a goal system G, such that...There cannot exist any efficient real-world algorithm with goal G.”
I can see why that is appealing, but it is not equivalent to the claim that any intelligent and rational entity could have any goal. Of course you can write a dumb algorithm to efficiently make paperclips, just as you can build a dumb machine that makes paperclips. And of course an AI could...technically...design and/or implement such an algorithm, but it doesn’t follow that an AGI would do so. (Which is two propositions: it doesn’t follow that an AI could be persuaded to adopt such a goal, and it doesn’t follow that such a goal could be programmed in ab initio and remain stable).
The Convergentist would want to claim:
“To assert the Orthogonality Thesis is to assert that no matter how intelligent and rational an agent, no matter the breadth of its understanding, no matter the strength of its commitment to objectivity, no matter its abilities to self-reflect and update, it would still never realise that making huge numbers of paperclips is arbitrary and unworthy of its abilities”
The orthogonality claim only has bite against Convergence/Moral Realism if it relates to all or most or typical rational intelligent agents, because that is how moral realists define their claim: they claim that ideal rational agents of a typical kind will converge, or that most rational-enough and intelligent-enough agents will converge. You might be able to build a (genuinely intelligent, reflecting and updating) Clippy, but that wouldn’t prove anything. The natural existence of sociopaths doesn’t disprove MR because they are statistically rare, and their typicality is in doubt. You can’t prove anything about morality by genetically engineering a sociopath.
As an argument against MR/C, Orthogonality has to claim that the typical, statistically common kind of agent could have arbitrary goals, and that the evidence of convergence amongst humans is explained by specific cultural or genetic features, not by rationality in general.
ETA:
Ben: [The Orthogonality Thesis] may be true, but who cares about possibility “in principle”? The question is whether any level of intelligence is PLAUSIBLY LIKELY to be combined with more or less any final goal in practice. And I really doubt it. I guess I could posit the alternative: Interdependency Thesis: Intelligence and final goals are in practice highly and subtly interdependent.
If we don’t understand the relationship between instrumental intelligence and goals, Clippies will seem possible—in the way that p-zombies do if you don’t understand the relationship between matter and consciousness.
Because I want to be sure that I’m understanding what the claim you’re making is.
The Convergentist would want to claim:
“To assert the Orthogonality Thesis is to assert that no matter how intelligent and rational an agent, no matter the breadth of its understanding, no matter the strength of its commitment to objectivity, no matter its abilities to self-reflect and update, it would still never realise that making huge numbers of paperclips is arbitrary and unworthy of its abilities”
Okay...so I agree with the Convergence theorist on what the implications of the Orthogonality Thesis are, and I still think the Orthogonality Thesis is true.
if it relates to all or most or typical rational intelligent agents, because that is how moral realists define their claim
Hold on now...that makes the claim completely different than what I thought we were talking about up till now. I thought we were talking about whether or not all rational agents would be in agreement about what morality is, independent of specifically human preferences?
We can have the other discussion too...but not before settling whether or not the Orthogonality Thesis is in fact true “in principle”. Remember, we originally started this discussion with my claim that morality is feelings/preference, as opposed to something you could figure out (i.e. something embedded into logic/game theory or the universe itself.) We weren’t originally talking about rational agents to shed light on evolution or plausible AI...we brought them in as hypothetical agents who converge upon the correct answer to any answerable question, to explore whether or not “what is good” is independent from “what do humans think is good”.
I thought we were talking about whether morality was something that could be discovered objectively.
I said:
Morality comes from the “heart”. It’s made of feelings.
Then you said:
People use feelings/System1 to do morality. That doesn’t make it an oracle. Thinking might be more accurate.
Then I said
Accurate? How can you speak of a moral preference being “accurate” or not? Moral preferences simply are.
You disagreed, and said
Moral objectivism isn’t obviously wrong,
To which I countered
all rational agents will converge upon mathematical statements, and will not converge upon moral statements.
You disagreed:
morality could work like convergence on mathematical truth.
Which is why
I thought we were talking about whether or not all rational agents would be in agreement about what morality is, independent of specifically human preferences?
Hence
if it relates to all or most or typical rational intelligent agents
doesn’t make any sense in our discussion. All rational agents converge on mathematical and ontological facts, by definition. My argument was that there is no such thing as a “moral fact” and moral statements can only be discussed when in reference to the psychology of a small set of creatures which includes humans and some other mammals. I argued that moral statements can’t be “discovered” true or false in any ontological or mathematical sense, nor are they deeply embedded into game theory (meaning it is not always in the interest of all rational agents to follow human morality) - even though game theory does explain how we evolved morality given our circumstances.
If you admit that at least one of all possible rational agents doesn’t converge upon morality, you’ve been in agreement with me this entire time—which means we’ve been talking about different things this entire time...so what did you think we were talking about?
All rational agents converge on mathematical and ontological facts, by definition.
Only by a definition whereby “rational” means “ideally rational”. In the ordinary sense of the term, it is perfectly possible for someone who is deemed “rational” in a more-or-less, good-enough sense to fail to understand some mathematical truths. The existence of the innumerate does not disprove the objectivity of mathematics, and the existence of sociopaths does not disprove the objectivity of morality.
If you admit that at least one of all possible rational agents doesn’t converge upon morality,
Do you believe that it is possible for a rational agent to fail to understand a mathematical truth? Because that seems rather commonplace to me. Unless you mean ideally rational....
The whole point of invoking an ideal rational agent in the first place was to demonstrate that moral “truths” aren’t like empirical or mathematical truths in that you can’t discover them objectively through philosophy or mathematics (even if you are infinitely smart). Rather, moral “truths” are peculiar to humans.
If you want to illustrate the non-objectivity of morality, then stating that even ideal rational agents won’t converge on them is one way of expressing the point, although it helps to state the “ideal” explicitly. However, that is still only the expression of a claim, not the “demonstration” of one.
There are some meta-ethics sequences here that explain the arbitrariness of our moral preferences more eloquently, and here is a fun story that tangentially illustrates it
I think you’ve misunderstood the meta-ethics sequences, then, or I have, because
Pebble sorting is a preference. That’s it. I don’t have to believe it is a moral preference or a correct moral preference.
is quite similar to Eliezer’s position. Although Juno_Watt may have reached it from another direction.
I read it as a warning about expecting sufficiently rational beings to automatically acquire human morality, in the same way that sufficiently rational beings would automatically acquire knowledge about true statements (science, etc). The lesson is that preferences (morality, etc) are different from facts.
If you want to know Eliezer’s views, he spells them out explicitly here—although I think the person most famous for this view is Nietzsche (not that he’s the first to have held this view).
To me, “No universally compelling arguments” means this—two rational agents will converge upon factual statements, but they need not converge upon preferences (moral or otherwise) because moral statements aren’t “facts”.
It really doesn’t matter if you define the pebble sorting as a “moral” preference or a plain old preference. The point is that humans have a morality module—but that module is in the brain and not a feature which is implicit in logical structures, nor is it a feature implicit in the universe itself.
I read it as a warning about expecting sufficiently rational beings to automatically acquire human morality, in the same way that sufficiently rational beings would automatically acquire knowledge about true statements (science, etc). The lesson is that preferences (morality, etc) are different from facts.
I agree that is what it is trying to say, but...as you illustrated above...it only appears to work if the reader is willing to be fuzzy about the difference between preference and moral preference.
If you want to know Eliezer’s views, he spells them out explicitly
For some value of “explicit”. He doesn’t even restrict the range of agents to rational agents, and no-one expects irrational agents to agree with each other, or with rational ones.
To me, “No universally compelling arguments” means this—two rational agents will converge upon factual statements, but they need not converge upon preferences (moral or otherwise) because moral statements aren’t “facts”.
Mathematical statements aren’t empirical facts either, but convergence is uncontroversial there.
It really doesn’t matter if you define the pebble sorting as a “moral” preference or a plain old preference. The point is that humans have a morality module—but that module is in the brain and not a feature which is implicit in logical structures, nor is it a feature implicit in the universe itself.
Are you quite sure that morality isn’t implicit in the logic of how-a-society-of-entities-with-varying-preferences-manages-to-rub-along?
Ummmmm… do I draw the line around the whole of the human race? I’m not sure whether I do or not. I do know that there is a certain boundary (defined mostly by culture) where I get much more likely to say ‘that’s your problem’ and become much less skeptical/cynical about preferences, although issues that seem truly serious always get the same treatment.
For some reason, choosing to accept that somebody’s utility function might be very different from your own feels kind of like abandoning them from the inside. (Subjective!).
You could also, in principle, have a utilitarianism that gives unequal weights to different people. I’ve asked around here for a reason to think that the egalitarian principle is true, but haven’t yet received any responses that are up to typical Less Wrong epistemic standards.
I’ve asked around here for a reason to think that the egalitarian principle is true, but haven’t yet received any responses that are up to typical Less Wrong epistemic standards.
It’s a very clear Schelling point. At least until advances in uplifting/AI/brain emulation/etc. complicates the issue of what counts as a human.
This seems to me very unclear actually. In fact, I have never encountered someone that acted as if this was (approximately) the decision criterion they were following. For all the humans I have personally observed, they seem to be acting as if they, their friends, and their family members are weighted thousands or millions of times greater than perfect strangers.
I’ve asked around here for a reason to think that the egalitarian principle is true
…
is true
You’re applying moral realism here...as in, you are implying that moral facts exist objectively, outside of a human’s feelings. Are you doing this intentionally?
Your alternative would be to think an aristocratic or meritocratic principle is true. (It’s either equal or unequal, right?)
I think we can assume aristocracy is a dead duck along with the Divine Right of Kings and other theological relics.
Meritocracy in some form I believe has been advocated by some utilitarians. People with Oxford degrees get 10 votes. Cambridge 9. Down to the LSE with 2 votes and the common ignorant unlettered herd 1 vote…
This is kind of an epistemocratic voting regime which some think might lead to better outcomes. Alas, no one has been game to try get such laws up. There is little evidence that an electorate of PhDs is any less daft/ignorant/clueless/idle/indifferent on matters outside their specialty than the general public.
From a legal rights perspective, egalitarianism is surely correct. Equal treatment before the law seems a lot easier to defend than unequal treatment.
But put something up that assumes a dis-egalitarian principle and see how it flies. I’d be interested to see if you can come up with something plausible that is dis-egalitarian and up to epistemic scratch...
Your alternative would be to think an aristocratic or meritocratic principle is true. (It’s either equal or unequal, right?)
I wouldn’t use those terms, since they bring in all kinds of unnecessary connotations. I would say the opposite of the egalitarian principle is the non-egalitarian principle. I was thinking less along the lines of nobles/commoners and more along the lines of my children/other people’s children. I find the idea (that I think the egalitarian principle entails) that I have as much obligation to perfect strangers as to my wife to be extremely counter-intuitive.
I think we can assume aristocracy is a dead duck along with the Divine Right of Kings and other theological relics.
I don’t consider the Divine Right of Crowds (‘human rights’, or whatever the cool kids are calling it these days) to be any less silly than those ‘theological relics’.
Meritocracy in some form I believe has been advocated by some utilitarians. People with Oxford degrees get 10 votes. Cambridge 9. Down to the LSE with 2 votes and the common ignorant unlettered herd 1 vote...
This is kind of an epistemocratic voting regime which some think might lead to better outcomes. Alas, no one has been game to try get such laws up. There is little evidence that an electorate of PhDs is any less daft/ignorant/clueless/idle/indifferent on matters outside their specialty than the general public.
This part isn’t really relevant to what I’m talking about, since I’m not discussing equal weight in decision-making, but equal weight in a social welfare function. My infant son’s interests are one of my greatest concerns, but he currently has about zero say in family decision-making.
From a legal rights perspective, egalitarianism is surely correct. Equal treatment before the law seems a lot easier to defend than unequal treatment.
Equal treatment before the law does not necessarily mean that individuals’ interests are weighted equally. When was the last time you heard of jurors on a rape trial trying to figure out exactly how much utility the rapist got so they could properly combine that with the disutility of the victim?
Of course what “the cool kids” are actually talking about is more like a Divine Right of People; it’s got nothing to do with treating people differently when there’s a mass of them. And of course adding the word “divine” is nothing more than a handy way of making it sound sillier than it otherwise would (whereas in “Divine Right of Kings” it is a word with an actual meaning; the power of kings was literally thought to be of divine origin).
So, removing some of the spin, what you’re apparently saying is that “let’s treat all people as having equal rights” seems as silly to you as “let’s suppose that one person in each country is appointed by a divine superbeing to rule over all the others”. Well, OK.
Equal treatment before the law does not necessarily mean that individuals’ interests are weighted equally.
It means that people are treated unequally only according to differences that are actually relevant. (Of course then the argument shifts to which differences are relevant; but at least then one actually has to argue for their relevance rather than simply assuming it on traditional grounds.)
Having said all of which, I agree that the usual arguments for equal weighting completely fail to show that a person shouldn’t give higher weighting to herself, her family, her friends, etc.
Of course what “the cool kids” are actually talking about is more like a Divine Right of People; it’s got nothing to do with treating people differently when there’s a mass of them.
The state in which I live has statute law initiatives, so yes, people actually do ‘rule’ only if there is a large enough mass of them. Individually, I have no such (legal) right.
And of course adding the word “divine” is nothing more than a handy way of making it sound sillier than it otherwise would (whereas in “Divine Right of Kings” it is a word with an actual meaning; the power of kings was literally thought to be of divine origin).
Speaking of dubious origins:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights...
I am in complete agreement with the following:
It means that people are treated unequally only according to differences that are actually relevant. (Of course then the argument shifts to which differences are relevant; but at least then one actually has to argue for their relevance rather than simply assuming it on traditional grounds.)
Having said all of which, I agree that the usual arguments for equal weighting completely fail to show that a person shouldn’t give higher weighting to herself, her family, her friends, etc.
In any case, the point of my comment was not to bring up politics, but to show the incompatibility of typical intuitions with regards to how one should treat family and friends compared to strangers with what (the most popular flavors of) utilitarianism seems to indicate is ‘correct’.
I have argued with utilitarians several times on Less Wrong and the discussions seem to follow the same sequence of backpedalling. First they claim utilitarianism is true. Then, when I ask and they are unable to conceive of an experiment that would verify or falsify it, they claim that it isn’t the kind of thing that has a truth-value, but that it is a description of their preferences. Next, I demonstrate that relying on revealed preference shows that virtually nobody actually has utilitarian preferences. Lastly, they claim that intuition gives us good reason to go with (even if it isn’t True) utilitarianism. My response to NancyLebovitz in this thread is yet another attempt to show that, no, it really isn’t intuitive.
Is this an accurate description of what is going on or am I mind-killed on the subject of normative ethics (or both, or neither)?
When you first used the phrase “Divine Right of Crowds” you immediately explained in parentheses that you meant “human rights” or something similar. Now you seem to be talking about democracy instead. The two aren’t the same, though probably approval of one is correlated with approval of the other.
Anyway, “crowds” in the literal sense still aren’t involved (it needs N people to get something voted on, but that doesn’t require them to be colocated or to know one another or anything else crowd-like other than sheer numbers); and if you’re now using “Divine Right of Crowds” to mean “a political system that tries to favour outcomes preferred by more people rather than fewer” then, again, I suggest that you’re picking terminology simply to make the other side look as silly as possible.
Speaking of dubious origins: [...]
It is possible that those words from the Declaration of Independence show that in the 18th century people believed in something like a “Divine Right of Crowds”. (It’s not entirely obvious, though. Perhaps they actually just believed in a Right of Crowds and thought what they said would sound better if they included “created” and “by their Creator”; compare the mention of a Creator at the end of some editions of the Origin of Species, or Einstein’s “God does not play dice”.)
But that doesn’t mean that people who now favour democracy, or human rights, or independence of the US from the UK, have to believe (or commonly do believe) that those things are divinely ordained. Similarly, there are people now who want kings without believing in a Divine Right of Kings, and pretending that they do would be a shabby rhetorical trick.
[...] incompatibility of typical intuitions [...] with what (the most popular flavors of) utilitarianism seems to indicate [...]
Yup, there are indeed such incompatibilities (though I think one could make a reasonable argument that, given human nature, overall utility is likely to be higher in a society where people care more about themselves and those closer to them than in one where they truly care equally about everyone; surely not by nearly as much as our intuitions suggest, though).
the same sequence of backpedalling
I’ll take your word for it, but I’m a bit surprised: I’d have thought an appreciable fraction of LWers advocating utilitarianism would start from the position that it’s an expression of their preferences rather than an objective fact about the world.
(For my part, not that it particularly matters, I do indeed care most about myself, and less about people less connected to me, physically further from me, more unlike me, etc., but I find that as I reflect more on my preferences in any given case they shift nearer to egalitarianism, though they often don’t get all the way. Something like utilitarianism seems like a pretty decent approximation to what I’d want in law.)
am I mind-killed [...]?
I can’t tell, obviously, but I do tend to think that things like switching ground without noticing (“human rights” --> democracy) and insisting on using question-begging language (“Divine Right of Crowds”) are often signs of someone not thinking as clearly as they might be.
I think we can assume aristocracy is a dead duck along with the Divine Right of Kings and other theological relics.
Counterpoint: it offers stability, which is useful regardless of theology. See the Fnargle World thought experiment and various other neo-reactionary stuff on Why Democracy Is Bad.
Let me put it this way: would you rather we’re ruled by someone who’s skilled at persuading us to elect him, and who focuses resources on looking good in four years; or someone who’s been trained since birth to govern well, and knows they or their descendants will be held accountable for any future side-effects of their policies?
These arguments may be deeply flawed, but hereditary aristocracy doesn’t stand or fall with the Divine Right of Kings.
Counterpoint: it offers stability, which is useful regardless of theology.
Stability is good if governance is good, and bad if not.
Let me put it this way: would you rather we’re ruled by someone who’s skilled at persuading us to elect him, and who focuses resources on looking good in four years;
...and whom you can get rid of...
or someone who’s been trained since birth to govern well, and knows they or their descendants will be held accountable for any future side-effects of their policies?
OK. Looks like democracy with a supply of candidates from Kennedy-style political dynasties is the best of all possible systems...;-)
Kinda. In practice a lot of the power of government rests in agencies that offer advice to the currently ruling party, and those agencies often embody significant powers themselves. It would be a mistake to confuse the elected executive branch of government with the government entire. It’s not even clear to me that they have the majority share of influence over what actually happens.
Under democracy, the people can decide if their stable government has outstayed its welcome after so many years.
Except that due to problems with rational ignorance they frequently make bad choices. Furthermore, this system encourages politicians to make shortsighted decisions.
There’s an ordinary selection mechanism for politicians, and an ordinary selection mechanism for lords of the manor.
Ideally, the ordinary selection mechanism for politicians (elections) would choose people who define success the way the voter would define success. That said, we both know that this is not how things actually work. For principal-agent delegation reasons, politicians often have their own agendas that conflict with voter preferences. The politician agenda diverges increasingly from the voter agenda as the number of voters increases (i.e. national figures generally have more freedom to pursue their own ends than county officials).
Still, politician agendas cannot completely diverge from voter preferences. Observationally, many voter preferences are implemented into law. As an extreme example, bribery is illegal even though the prohibition is bad for most politicians. So there is reason to think that the ordinary selection process for politicians leads to some connection in the definition of success (teleologically, if not cognitively).
By contrast, there is no particular reason to think the ordinary selection mechanism (inheritance) picks lords of the manor who want to implement tenant farmers’ preferences. Unless you include revolutionary change, which does not seem like an ordinary selection process.
Inasmuch as democracy works, they do. In an ideal democracy, representatives are servants of the people who are fired if they don’t deliver. Diverging interests are failures, not inherent to democracy.
What do you mean by “inherent to democracy”? Certain types of failures, e.g., politicians pursuing shortsighted policies because they’re not likely to be around when said policies implode, are systemic to democracies.
To a certain extent. However, the bureaucrat has no motivation to care about the welfare of the people, not even the politician’s desire to get reelected or the noble’s incentive to make his estate successful. The bureaucrat’s incentive, by contrast, is to expand his bureaucratic empire, frequently at the expense of the nation as a whole.
But it’s still long termist. None of the cogs does the work of the whole machine itself. You also need a free press, even though their motivation is to sell pieces of paper.
Only if we define “interest” in a rational sense (i.e., “how rational agents embodying the role of ‘employers’ should optimally behave if their goals/values are X”), rather than in an evopsych sense (i.e., “how human apes embodying the role of ‘employers’ will tend to behave, and what that implies that the encoded values of human apes actually are”).
Maintaining or improving position within the dominance hierarchy often co-opts other concerns that a human ape might have, up to and including bare survival. Often, that cognitive dissonance is “resolved” by that human ape convincing themselves that strategies which improve their position within the dominance hierarchy are actually strategies to achieve other goals that seem more palatable to the parts of their brain that cogitate palatability.
(In Anglo: “We like bossing more than we like living well, but we like thinking that we’re trying to live well more than we like thinking that we’re trying to boss. So, we trick ourselves into believing that we’re trying to live well, when we’re really just trying to boss.”)
It’s in their economic interest to tax the peasantry to almost but not quite the point of starvation, and use the excess to fund land-acquisition, which is pretty much what they did for centuries. You could argue that with the benefit of hindsight, what they should have done is abandoned agriculture+war for education+industrialisation, since [by some measures] ordinary citizens of the present are wealthier than the aristocrats of the past. But I could argue right back that the industrial revolution wasn’t that good for the aristocracy, as a class, in the end.
You could argue that with the benefit of hindsight, what they should have done is abandoned agriculture+war for education+industrialisation, since ordinary citizens of the present are wealthier than the aristocrats of the past.
Only if you consider absolute gains preferable to relative/”zero-sum” gains, which our evolved psychological makeup isn’t really prepared to do very well.
Social animals with a natural dominance hierarchy will often see “how well am I doing right now, compared to how well everyone else around me is doing right now?” as a more salient question than “how well am I doing right now, compared to how well I was doing before / how well I could be doing?”.
Only if you consider absolute gains preferable to relative/”zero-sum” gains, which our evolved psychological makeup isn’t really prepared to do very well.
Except that that never happens; it’s not in their interests to disrupt the economy that much, and it’s also not in their interests to do something that might lead to civil unrest...and it never happens.
Well, it never happens at the 49%-51% level, but that’s because there aren’t any countries where 49% of the country is wealthy enough to be worth plundering (see Pareto). Massive redistribution of wealth away from minorities has happened quite a bit, as in Zimbabwe, Haiti, Germany, and others. The various communist revolutions seem to be an example of this, if you allow ‘democracy of the sword’, and I would suspect pogroms are as well, to the extent that property is looted as well as destroyed.
One counterexample is sufficient to break a “never.” To the extent that ‘good’ democracies do not do this, it is not a statement about the incentive structure of democracy, but a statement about the preferences of the voters of that particular polity.
Like Vaniver said, it’s never happened this explicitly, but demanding that [group you’ve just demonized] pay their “fair share” is relatively common rhetoric. And yes, politicians are willing to do this even as it gradually destroys the economy as is happening right now in Europe.
that [group you’ve just demonized] pay their “fair share” is relatively common rhetoric.
Quite. It’s hard to make it stick unless it is seen as fair.
And yes, politicians are willing to do this even as it gradually destroys the economy as is happening right now in Europe.
You mean southern Europe? I don’t know who you think the 49% are. (In fact, given the tendency of democracies to alternate between parties of the left and right, one would expect the 49% and 51% to switch roles, leading to an averaging out).
In any case, if Greek or Spanish voters vote for unsustainable benefits, more fool them. It wasn’t done to them; they did it to themselves.
(In fact, given the tendency of democracies to alternate between parties of the left and right, one would expect the 49% and 51% to switch roles, leading to an averaging out).
I think you’re overestimating the amount of difference between the two parties. Also, this still screws the economy.
In any case, if Greek or Spanish voters vote for unsustainable benefits, more fool them. It wasn’t done to them; they did it to themselves.
Well, I suppose all govt. is unstable, then. Which dynasty has been in power forever?
Stability is a matter of degree, as you’re well aware. Few dynasties lose power after four years of rule.
What good is that going to do a peasant like me? It’s not like they are going to knock off the cost of electioneering from my taxes.
Even a massive amount of spending on election campaigns is less likely to succeed (and thus less stable) than a (relatively) small amount of spending on safeguarding against assassination.
Also, election campaigns have negative effects on, among other things, the rationality of the populace; and they encourage polarization in the long term—in contrast, bodyguards discourage trying to off your rich uncle for the inheritance.
Considering many of them profess to include other kinds of intelligence, at least in theory … it seems to be mostly a consistency thing. Why shouldn’t I include Joe The Annoying Git?
The questions themselves are unimportant...but the deeper moral undercurrent which causes those questions to be privileged is important.
Ask the counter-question: what do you plan to do once you’ve settled to your satisfaction the struggle between moral concern X and moral concern Y? Have you known yourself to change your behavior after settling such issues?
I agree that people have different opinions about the relative value of different moral concerns. What I’m pessimistic about is the value of discussing those differences by focusing on questions like the examples I gave.
Can anyone find an example of a “privileged” question which isn’t a disguised moral struggle?
If you wanted to be really pessimistic about mathematics research, you could argue that most of pure math research consists of privileged questions.
Ask the counter-question: what do you plan to do once you’ve settled to your satisfaction the struggle between moral concern X and moral concern Y? Have you known yourself to change your behavior after settling such issues?
Of course! I have to change my behavior to be in accord with my new-found knowledge about my preferences. A current area of moral uncertainty for me revolves around the ethics of eating meat, which is motivating me to do research on the intelligence of various animals. As a result, the bulk of my meat consumption has shifted from more intelligent/empathetic animals (pigs) to less intelligent animals (shrimp, fish, chicken).
Through discussion, I’ve also influenced some friends into having more socially liberal views, thus changing the nature of their interpersonal interactions. If optimizing charity was the question that people focused on, we would still end up having the discussion about whether or not the charity should provide abortions, contraceptives, etc.
You can’t escape discussing the fundamental moral questions if those moral struggles create disagreement about which action should be taken.
I agree that people have different opinions about the relative value of different moral concerns. What I’m pessimistic about is the value of discussing those differences by focusing on questions like the examples I gave.
I do think that it might be better to focus on the underlying moral values rather than the specific examples.
If optimizing charity was the question that people focused on, we would still end up having the discussion about whether or not the charity should provide abortions, contraceptives, etc.
Since GiveWell hasn’t found any good charities that provide abortions and give out contraceptives the answer in this community is probably: “No, charity shouldn’t do those things.”
That’s however a very different discussion from mainstream US discussion over the status of abortion.
Since GiveWell hasn’t found any good charities that provide abortions and give out contraceptives the answer in this community is probably: “No, charity shouldn’t do those things.”
Did an ‘is’ just morph into a ‘should’ there somehow?
Since GiveWell hasn’t found any good charities that provide abortions and give out contraceptives the answer in this community is probably: “No, charity shouldn’t do those things.”
Or “There is not an existing charity which does those things well enough to donate towards.”
“Givewell hasn’t found any good charities that do X” does not imply “Charity should not do X”
Qiaochu_Yuan’s argument was that debates over abortion are privileged questions (discussed disproportionately to the value of answering them).
I added that while this is true with regard to the specific nature of the questions, the underlying moral uncertainty that the questions represent (faced by the US population—Less Wrong is pretty settled here) is one that is valuable to discuss for the population at large because it affects how they behave.
Givewell isn’t worrying about moral uncertainty—they’ve already settled approximately on utilitarianism. Not so for the rest of the population.
All the examples of privileged questions given are disguised manifestations of moral uncertainty
is the struggle between a morality that favors equality, and one that has a certain set of values surrounding purity and/or respect for religious authority.
is the struggle between individual autonomy vs. harm avoidance
is the struggle between in-group preference and lack thereof
The questions themselves are unimportant...but the deeper moral undercurrent which causes those questions to be privileged is important. If someone is against gay marriage and stem cells, how do you expect them to react to trans-humanist memes, life extension, and the AI?
When society makes a decision about the morality of gay marriage and stems cells, they have also gone part of the way to making a decision about AI, since a lot of the same moral circuitry is going to be involved.
Side comment: Can anyone find an example of a “privileged” question which isn’t a disguised moral struggle?
Isn’t moral struggle a part of how mindkilling feels from inside?
Also, compare these two questions:
a) Should gay marriage be legal?
b) How to optimize the society for more long-term utility for people of any sexual orientation?
Only the first one could get media attention. And it’s not because the second one is less moral.
You can’t even ask this question until you arrive at utilitarianism as a moral philosophy. A person with moral objections against homosexual marriage isn’t a utilitarian by definition, since they care about additional things (purity, respect for authority, etc) which have nothing to do with increasing everyone’s utility..
When you ask “how to maximize utility”, you have already assumed that the moral struggle between harm/care and purity has been settled in favor of harm/care. Otherwise, you would be asking about how to maximize utility while also keeping people from “defiling” themselves.
As mare-of-night reminded us elsewhere in-thread, even Clippy is a utilitarian. There’s nothing special about paperclips or purity that prevents them from being included in someone’s definition of utility.
On the other hand, even if your post boils down to “my definition of utility is the correct global definition”, that’s no more wrong than Viliam_Bur’s treating “utility for people” as a well-defined term without billions of undetermined coefficients.
So the original question was:
Under classical preference utilitarianism, you try to maximize everyone’s utility and conveniently ignore the problems of putting two utility functions into one equation, and the problems you mention.
Continuing to conveniently ignore that problem, I implicitly assume that we agree that the positive utility generated by removing restrictions to homosexuality outweigh the negative utility generated by violating purity boundaries, when applied over the entire population.
We still include the purity thing in the calculations of course. For example, I could in principle argue that the negative utility from allowing sex in public probably outweighs the positive utility generated from the removal of the restriction, hence our public obscenity laws.
That ignores the possibility that there is a reason those purity boundaries were there in the first place.
I’ve seen this before, but I can’t say I find it a compelling argument - if an institution was placed for good reason, then at least someone, somewhere would remember why it was placed and could give a compelling argument. If no one can do so, the risk of some, hidden drawback which the original lawmaker could have forseen seems too small to count.
I mean, this argument does apply when you are acting alone, on some question that neither you nor anyone you come into contact with knows anything about...but it doesn’t apply to something like this.
How do utilitarians decide to draw the boundary at the whole human race rather than some smaller set of humans?
I’m not sure if I understand your question...
Utilitarians who choose to draw the line around the whole of the human race do so because they believe they ought to value the whole of the human race.
Is that a deontological standard?
The reason I asked is that, in principle, you could have utilitarianism based on some group smaller than the human race.
For some people, probably. Let’s take a step back.
Morality comes from the “heart”. It’s made of feelings. Utilitarianism (and much of what falls under moral philosophy) is one of many attempts to make a consistent set of rules to describe inconsistent feelings. The purpose of making a consistent set of rules is 1) to convince others of the morality of an action and 2) to satisfy our aversion to hypocrisy and our craving for moral consistency.
Keeping those aims in mind, drawing the line across all humans, sentient beings, etc has the following benefits:
1) The creators might feel that the equation better describes the way they feel when they factor in all humans. They might hold it as a deontological standard to care about all humans, or they might feel a sense of fairness, or they might have empathy for everyone, etc.
2) Drawing the line across all humans allows you to use the utilitarian standard to negotiate compromises with any arbitrary human you come across. Many humans, having the feelings described in [1], will instinctively accept utilitarianism as a valid way to think about things.
There are plenty of things that are problematic here, but that is why utilitarianism defaults to include the whole human race. As with all things moral, that’s just an arbitrary choice on our part, and we could easily have done it a different way. We can restrict it to a smaller subset of humans, we can broaden it to non-human things which seem agent-like enough to be worth describing with a utility function, etc. Many utilitarians include animals, for example.
People use feelings/System1 to do morality. That doesn’t make it an oracle. Thinking might be more accurate.
If you don’t know how to solve a problem, you guess. But that doesn’t mean anything goes. Would anyone include rocks in the Circle? Probably not, since they don’t have feelings, values, or preferences. So there seem to be some constraints.
Accurate? How can you speak of a moral preference being “accurate” or not? Moral preferences simply are. There are some meta-ethics sequences here that explain the arbitrariness of our moral preferences more eloquently, and here is a fun story that tangentially illustrates it.
I bet I can find you someone who would say that burning the Quran or the Bible is inherently immoral.
Quite a few of them no doubt. Of course, the overwhelming majority of people who would say that burning the Quran or the Bible is inherently immoral would also say that it’s immoral by virtue of the preferences of an entity that, on their view, is in fact capable of having preferences.
Of course, I’m sure I could find someone who would say rocks have feelings, values, and preferences.
I don’t think this is an accurate formulation of the general religious attitude towards morality.
I agree. Do you also think it’s a false statement?
Let’s just say the expression “it’s immoral by virtue of the preferences of an entity” is not actually a good ‘translation’ of the phrase they’d use.
Um… well, I’m not really sure what to do with that statement, but I’m happy to leave the topic there if you prefer.
Ok, maybe I misunderstood your question in the grandparent. Which statement was it referring to?
“the overwhelming majority of people who would say that burning the Quran or the Bible is inherently immoral would also say that it’s immoral by virtue of the preferences of an entity that, on their view, is in fact capable of having preferences.”
They’d phrase it in terms of sacredness, which isn’t quite the same thing, e.g., how would you apply your argument to flag burning?
Fair enough.
Conversationalists will want to preserve ecosystems, even where those ecosystems are already well studied by science, even when the ecosystem contains no sentient beings (plants, fungi, microbes), even when destroying the ecosystem has many advantages for humans, because they think the ecosystem is intrinsically valuable independently of the effect on beings with feelings, values, and preferences.
Some looser examples...
Pro-life advocates say that beings without preferences have rights by virtue of future preferences. Not all of them are religious.
Hindus treat books (all books in general) with reverence because they are vehicles of learning, despite not necessarily believing in deities.
Many social conservatives report being unwilling to slap their fathers, even with permission, as part of a play.
The classic trolley problem implies that many people’s moral intuitions hinge on the act of murder itself being wrong, rather than on the effect that the death has on values, feelings, and preferences.
Of course, if you are a moral realist, you can just say that these people’s intuitions are “wrong”...but the point is that “feelings, values, and preferences”—in a word, utilitarianism—isn’t the only guiding moral principle that humans care about.
And yes, you could argue that this is all a deity’s preferences...but why did they decide that those were in fact the deity’s preferences? Doesn’t it hint that they have an underlying feeling of those preferences in themselves, which they then project onto a deity?
No doubt some of them will, but I suspect you meant “conservationists.” And yes, I agree that some of those will assign intrinsic value to “nature” in various forms, or at least claim to, as you describe.
Some of them do, yes. Indeed, I suspect the ones who say that are disproportionately non-religious.
A fine question.
That’s one possibility, yes.
And, again, if destroying entity X is wrong because some other entity Y says so, that is not inherent.
Indeed. Do you mean to say that you don’t expect it to be said, or merely that those saying it are confused?
The latter.
We sometimes extend morality to inanimate objects, but only ones that mean something to us, such as works of art and religious artefacts. That isn’t actually inherent because of the “to us” clause, although some people might claim that it is.
Pebble sorting is a preference. That’s it. I don’t have to believe it is a moral preference or a correct moral preference.
Moral objectivism isn’t obviously wrong, and system 2 isn’t obviously the wrong way to realise moral truths. IOW, moral subjectivism isn’t obviously true.
NB: Objectivism isn’t universalism.
Beliefs simply are. And some are true and some are not. You seem to be assuming the non-existence of anything that could verify or disprove a moral preference in order to prove more or less the same thing.
I would say that the “to us” clause actually applies to everything, and that nothing is “inherent”, as you put it. Pebble sorting means something to the pebble sorters. Humans mean something to me. The entirety of morality boils down to what is important “to us”.
To me, moral objectivism is obviously wrong and subjectivism is obviously true, and this is embedded in my definition of morality. I’m actually unsure how anyone could think of it in any other coherent way.
I think it’s time to unpack “morality”. I think morality is feelings produced in the human mind about how people ought to act. That is, I think “murder is bad” is in some ways analogous to “Brussels sprouts are gross”. From this definition, it follows that I see moral objectivism as obviously wrong—akin to saying, “no man, Brussels sprouts are objectively, inherently gross! In the same way that the sky is objectively blue! / In the same way that tautologies are true!” (Actually, replace blue with the appropriate wavelengths to avoid arguments about perception.)
What do you think “morality” is, and where do you suppose it comes from?
I think morality is behaving so as to take into account the values and preferences of others as well as one’s own. You can succeed or fail in that, hence “accurate”.
Morality may manifest in the form of a feeling for many people, but not for everybody and not all feelings are equal. So I don’t think that is inherent, or definitional.
I don’t think the sprout analogy works, because your feeling that you don’t like sprouts doesn’t seriously affect others, but the psychopath’s fondness for murder does.
The feelings that are relevant to morality are the empathic ones, not personal preferences. That is a clue that morality is about behaving so as to take into account the values and preferences of others as well as one’s own.
If you think morality is the same as a personal preference...what makes it morality? Why don’t we just have one word and one way of thinking?
Because they feel different to us from the inside—for the same reason that we separate “thinking” and “feeling” even though in the grand scheme of things they are both ways to influence behavior.
In Math, empirical evidence is replaced by axioms. In Science, the axioms are the empirical evidence.
The point is that all rational agents will converge upon mathematical statements, and will not converge upon moral statements. Do you disagree?
I’m very, very sure that my morality doesn’t work that way.
Imagine you lived on a world with two major factions, A and B.
A has a population of 999999. B has a population of 1000.
Every individual in A has a very mild preference for horrifically torturing B, and the motivation is sadism and hatred. The torture and slow murder of B is a bonding activity for A, and the shared hatred keeps the society cohesive.
Every individual in B has a strong, strong preference not to be tortured, but it doesn’t even begin to outweigh the collective preferences of A.
From the standpoint of preference utilitarianism, this scenario is analogous to Torture vs. Dust Specks. Preference utilitarians choose torture, and a good case could be made even under good old human morality to choose torture as the lesser of two evils. That is a problem where I’d give serious weight to choosing torture.
Preference utilitarian agents would let A torture B—“shut up and multiply”. However, from the standpoint of my human morality, this scenario is very different from torture vs. dust specks, and I wouldn’t even waste a fraction of a second in deciding what is right in this scenario. Torture for the sake of malice is wrong (to me) and it really doesn’t matter what everyone else’s preferences are—if it’s in my power, I’m not letting A torture B!
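Since the disagreement here ultimately turns on arithmetic, a minimal sketch may make it concrete. The population sizes come from the scenario above, but the preference intensities are invented numbers, and the very idea of summing them is itself the contested assumption:

```python
# A minimal sketch of the naive preference-utilitarian aggregation in the
# A-vs-B scenario above. The intensity numbers are hypothetical stand-ins;
# nothing here claims to be the "right" way to measure preference strength.

POP_A = 999_999   # members of A, each with a mild pro-torture preference
POP_B = 1_000     # members of B, each with a strong anti-torture preference

MILD_PREFERENCE_FOR_TORTURE = 0.01   # assumed utility gain per member of A
STRONG_PREFERENCE_AGAINST = -5.0     # assumed utility loss per member of B

def aggregate_utility_of_torture() -> float:
    """Sum everyone's (assumed) utility change if A tortures B."""
    return POP_A * MILD_PREFERENCE_FOR_TORTURE + POP_B * STRONG_PREFERENCE_AGAINST

if __name__ == "__main__":
    total = aggregate_utility_of_torture()
    # 999_999 * 0.01 - 1_000 * 5.0 = 9999.99 - 5000.0 = +4999.99
    print(f"Aggregate utility change from torturing B: {total:+.2f}")
```

With these made-up intensities the sum comes out positive, so a naive aggregator “shuts up and multiplies” in favour of A; set B’s loss to, say, -500 per person and the sign flips. That sensitivity to modelling choices is part of why the “shut up and multiply” answer can feel so unsatisfying here.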
Morality evolved as a function of how it benefited single alleles, not societies. Under different conditions, it could have evolved differently. You can’t generalize from the way morality works in humans to the way it might work in all possible societies of entities.
Agreement isn’t important: arguments are important. You apparently made the argument that convergence on morality isn’t possible because it would require empirically detectable moral objects. I made the counterargument that convergence on morality could work like convergence on mathematical truth. So it seems that convergence on morality could happen, since there is a way it could work.
OK. Utilitarianism sucks. That doesn’t mean other objective approaches don’t work—you could be a deontologist. And it doesn’t mean subjectivism does work.
Says who? We can generalise language, maths and physics beyond our instinctive System I understandings. And we have.
That is the reason why I said that my morality isn’t preference utilitarian. If morality is “taking into account the values and preferences of others as well as your own”, then preference utilitarianism seems to be the default way to do that.
Alright...so if I’m understanding correctly, you are saying that moral facts exist and people can converge upon them independently, in the same ways that people will converge on mathematical facts. And I’m saying we can’t, and that morality is a preference linked to emotions. Neither of us have really done anything but restate our positions here. My position seems more or less inherent in my definition of morality, and I think you understand my position...but I still don’t understand yours.
Can I have a rudimentary definition of morality, an example of a moral fact, and a process by which two agents can converge upon it?
Can you give me a method of evaluating a moral fact which doesn’t at some point refer to our instincts? Do moral facts necessarily have to conform to our instincts? As in, if I proved a moral fact to you, but your instincts said it was wrong, would you still accept that it was right?
For lexicographers, the default is apparently deontology
“conformity to the rules of right conduct”
“Principles concerning the distinction between right and wrong or good and bad behavior.”
etc.
1) A means by which communities of entities with preferences act in accordance with all their preferences.
2) Murder is wrong.
3) Since agents do not wish to be murdered, it is in their interests to agree to refrain from murder under an arrangement in which other agents agree to refrain from removing them.
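Point 3 is essentially a game-theoretic claim, so here is a toy sketch of it in Python. The payoff numbers, and the assumption that the “arrangement” punishes murderers, are mine and purely illustrative:

```python
# A toy 2x2 game illustrating the mutual-restraint argument in point 3.
# The payoffs and the punishment mechanism are assumptions for illustration.

from itertools import product

ACTIONS = ("refrain", "murder")

GAIN_FROM_MURDER = 1.0             # assumed small benefit to the murderer
LOSS_FROM_BEING_MURDERED = -100.0  # assumed large loss to the victim
PUNISHMENT = -10.0                 # the "arrangement": murderers are punished

def payoff(mine: str, theirs: str) -> float:
    """My payoff given my action and the other agent's action."""
    total = 0.0
    if mine == "murder":
        total += GAIN_FROM_MURDER + PUNISHMENT
    if theirs == "murder":
        total += LOSS_FROM_BEING_MURDERED
    return total

def is_equilibrium(a: str, b: str) -> bool:
    """Neither player gains by unilaterally switching action."""
    a_best = all(payoff(a, b) >= payoff(alt, b) for alt in ACTIONS)
    b_best = all(payoff(b, a) >= payoff(alt, a) for alt in ACTIONS)
    return a_best and b_best

if __name__ == "__main__":
    for a, b in product(ACTIONS, repeat=2):
        tag = "  <- equilibrium" if is_equilibrium(a, b) else ""
        print(f"{a:8s} vs {b:8s}: payoffs ({payoff(a, b):.0f}, {payoff(b, a):.0f}){tag}")
```

With these assumptions, refraining is each agent’s best response whatever the other does, so mutual restraint is the only equilibrium; that is the sense in which agents who do not wish to be murdered can converge on the rule. Change the assumptions (an agent that does not mind being murdered, or one powerful enough to escape the punishment) and the convergence goes away, which is roughly the shape of the objections raised later in the thread.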
I don’t see why I need to. Utilitarianism and deontology take preferences and intuitions into account. Your argument against utilitarianism is that it comes to conclusions which go against your instincts. That isn’t just an assumption that morality has something to do with instincts, it is a further assumption that your instincts trump all further considerations. It is an assumption of subjectivism.
You are saying objectivism is false because subjectivism is true. If utilitarianism worked, it would take intuitions and preferences into account, and arrive at some arrangement that minimises the number of people who don’t get their instincts or preferences satisfied. Some people have to lose. You have decided that is unacceptable because you have decided that you must not lose. But utilitarianism still works in the sense that a set of subjective preferences can be treated as objective facts, and aggregated together. There is nothing to stop different utilitarians (of the same variety) converging on a decision. U-ism “works” in that sense. Your objection is not that convergence is not possible, but that what is converged upon is not moral, because your instincts say not.
But you don’t have any argument beyond an assumption that morality just is what your instincts say. The other side of the argument doesn’t have to deny the instinctive or subjective aspect of morality, it only needs to deny that your instincts are supreme. And it can argue that since morality is about the regulation of conduct amongst groups, the very notion of subjective morality is incoherent (parallel: language is all about communication, so a language that is only understood by one person is a paradox).
Maybe. Almost everybody who has had their mind changed about sexual conduct has overridden an instinct.
So there are several things I don’t like about this...
0) It’s not in their interests to play the cooperative strategy if they are more powerful, since the other agent can’t remove them.
1) It’s not a given that all agents do not wish to be murdered. It’s only luck that we wish not to die. Sentient beings could just as easily have arisen from insects that allow themselves to be eaten by their mates, or by their offspring.
2) So you sidestep this, and say that this only applies to beings that wish not to be murdered. Well now, this is utilitarianism. You’d essentially be saying that all agents want their preferences fulfilled, therefore we should all agree to fulfill each other’s preferences.
Essentially yes. But to rephrase: I know that the behavior of all agents (including myself) will work to bring about the agent’s preferences to the best of the agent’s ability, and this is true by definition of what a “preference” is.
I’m not sure I follow what you mean by this. My ideas about sexual conduct are in line with my instincts. A highly religious person’s ideas about sexual conduct are in line with the instincts that society drilled into them. If I converted that person into sex-positivism, they would shed the societal conditioning and their morality and feelings would change. Who is not in alignment with their instincts?
(Instincts here means feelings with no rational basis, rather than genetically programmed or reflexive behaviors)
I am not sure what the argument is here. The objectivist claim is not that every entity actually will be moral in practice, and it’s not the claim that every agent will be interested in settling moral questions: it’s just the claim that agents who are interested in settling moral questions, and have the same set of facts available (i.e. live in the same society), will be able to converge. (Which is as objective as anything else. The uncontentious claim that mathematics is objective doesn’t imply that everyone is a mathematician, or knows all mathematical truths.)
I have described morality as an arrangement within a society. Alien societies might have different morality to go with their different biology. That is not in favour of subjectivism, because subjectivism requires morality to vary with personal preference, not objective facts about biology. Objectivism does not mean universalism. It means agents, given the same facts, and the willingness to draw moral conclusions from them, will converge. It doesn’t mean the facts never vary; if they do, so will the conclusions.
All agents want their preferences fulfilled, and what “should” means is being in accordance with some arrangement for resolving the resulting conflicts, whether utilitarian, deontological, or something else.
The convertee. In my experience, people are generally converted by arguments...reasoning...system 2. So when people are converted, they go from Instinct to Reason. But perhaps you know of some process by which subjective feelings are transferred directly, without the involvement of system 2.
But don’t you see what you’re doing here? You are defining a set of moral claims M, and then saying that any agents who are interested in M will converge on M!
The qualifier “agents who are interested in moral questions” restricts the set of agents to those who already agree with you about what morality is. Obviously, if we all start from the same moral axioms, we’ll converge onto the same moral postulates—the point is that the moral axioms are arbitrarily set by the user’s preferences.
Wait, so you are defining morality as a system of conflict resolution between agents? I actually do like that definition...even though it doesn’t imply convergence.
Then utilitarianism is the solution that all agents should maximize preferences, deontology is the solution that there exists a set of rules to follow when arbitrating conflict, etc.
Counterexample—Imagine a person who isn’t religious, who also believes incest between consenting adults is wrong (even for old infertile people, even if no one else gets to know about it). There is no conflict between the two agents involved—would you say that this person is not exhibiting a moral preference, but something else entirely?
The vast majority of people are not convinced by argument, but by life experience. For most people, all the moral rhetoric in the world isn’t as effective as a picture of two gay men crying with happiness as they get married.
That’s beside the point, though—you are right that it is possible (though difficult) to alter someone’s moral stance through argument alone. However, “System 1” and “System 2” share a brain. You can influence “System 1” via “System 2”—reasoning can affect feelings, and vice versa. I can use logical arguments to change someone’s feelings on moral issues. That doesn’t change the fact that the moral attitude stems from the feelings.
If you can establish a shared set of “moral axioms” with someone, you can convince them of the rightness or wrongness of something with logic alone. This might make it seem like any two agents can converge on morality—but just because most humans have certain moral preferences hardwired into them doesn’t mean every agent has the same set of preferences. I have some moral axioms, you have some moral axioms, and we can use shared moral axioms to convince each other of things… but we won’t be able to convince any agent which has moral axioms that do not match with ours.
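A toy sketch of that last point may help. The “axiom” sets, the shared inference rule, and the action description below are all invented for illustration; nothing here pretends to model real moral reasoning:

```python
# Two agents share one inference rule but hold different moral "axioms",
# so they reach different verdicts on the same action. Everything here is
# a made-up illustration of the "shared axioms" point above.

def verdict(action_effects: set[str], axioms_bad: set[str]) -> str:
    """Shared 'logic': an action is wrong iff it brings about something
    the agent's axioms classify as bad."""
    return "wrong" if action_effects & axioms_bad else "permissible"

# One action, described by its effects.
action = {"causes_suffering"}

human_axioms = {"causes_suffering", "breaks_promises"}  # hypothetical human-ish axioms
clippy_axioms = {"destroys_paperclips"}                 # cares only about paperclips

print(verdict(action, human_axioms))   # 'wrong'        -- shared axiom, convergence
print(verdict(action, clippy_axioms))  # 'permissible'  -- no shared axiom, no convergence
```

The inference step is identical for both agents; the verdicts diverge only because the axiom sets differ, which is the sense in which logic alone does not force convergence.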
I haven’t defined a set of moral claims. You asked me for an example of one claim. I can argue the point without specifying any moral conclusions. The facts I mentioned as the input to the process are not moral per se.
In a sense, yes. But only in the sense that “agents who are interested in mathematical questions” restricts the set to agents who are interested in “mathematics” as I understand it. On the other hand, nothing is implied about the set of object-level claims moral philosophers would converge on.
I don’t have to accept that, because I am not using a subjective criterion for “morality”. If you have a preference for Tutti Frutti, that is not a moral preference, because it does not affect anybody else. The definition of morality I am using is not based on any personal preference of mine, it’s based on a recognition that morality has a job to do.
If no convergence takes place, how can you have an implementable system? People are either imprisoned or not, they cannot be imprisoned for some agents but not for others.
You are tacitly assuming that no action will be taken on the basis of feelings of wrongness, that nobody ever campaigns to ban things they don’t like.
If system 1 was influenced by system 2 , then what stems from system 1 stemmed from system 2, and so on. You are drawing an arbitrary line.
If moral axioms are completely separate from everything else, then you would need to change their axioms. If they are not, then not. For instance, you can argue that some moral attitudes someone has are inconsistent with others. Consistency is not a purely moral criterion.
If “moral axioms” overlap with rational axioms, and if moral axioms are constrained by the functional role of morality, there is plenty of scope for rational agents to converge.
Does it follow, then, that rational agents will always be “moral”? Does it mean that the most rational choice for maximizing any set of preferences, is also in line with “morality”?
That would put morality into decision theory, which would be kind of nice.
But I can’t think how an agent whose utility function simply read “Commit Murder” could possibly make a choice that was both moral (the way morality is traditionally defined) and rational.
People who believe in the Convergence thesis tend not to believe in the Orthogonality thesis. They tend to use the traditional definition of rationality:
“In its primary sense, rationality is a normative concept that philosophers have generally tried to characterize in such a way that, for any action, belief, or desire, if it is rational we ought to choose it” (WP)
In detail, they tend to see rationalists as having a preference for objectivity, consistency and non-arbitrariness, including in their preferences. Thus they would tend to see Clippies as having highly rational thought and highly irrational (because arbitrary) desires. Likewise they would see a murderer who does not want to be murdered as inconsistent, and therefore irrational.
Another way of looking at it is that they would see highly intelligent and rational agents as climbing the Maslow hierarchy.
Depends how rational. For a Convergence theorist, an ideal, supremely rational agent will have rational desires and preferences. A less ideal one might fall short as a result of being non-ideal.
Thanks for that term. This makes things clearer. Based on what you are arguing, does that make you a convergence theorist then? (Or at least, you seem to be defending convergence theory here, even if you don’t wholeheartedly accept it)
I dunno...I just find the orthogonality thesis intuitively obvious, and I’m having real trouble grasping what exactly the thought process that leads one to become a convergence theorist might be. I’m hoping you can show me what that thought process is.
The page even says it:
Now, I agree that there exist some G such that this is the case, but I don’t think this set would have anything to do with morality as humans understand it.
You seem to be making the argument that one of the characteristics that would automatically qualify something as a candidate for G is immorality.
This makes no intuitive sense. Why couldn’t you make an efficient real world algorithm to destroy all life forms? It seems like, in the absence of some serious mathematical arguments to the contrary, we ought to dismiss offhand any claim that efficient real world algorithms for murder are impossible.
Why is that important?
I think I can see where the intuitive appeal comes from, and I think I can see where the errors are too.
I can see why that is appealing, but it is not equivalent to the claim that any intelligent and rational entity could have any goal. Of course you can write a dumb algorithm to efficiently make paperclips, just as you can build a dumb machine that makes paperclips. And of course an AI could...technically...design and/or implement such an algorithm, but it doesn’t follow that an AGI would do so. (Which is two propositions: it doesn’t follow that an AI could be persuaded to adopt such a goal, and it doesn’t follow that such a goal could be programmed in ab initio and remain stable.)
The Convergentist would want to claim:
“To assert the Orthogonality Thesis is to assert that no matter how intelligent and rational an agent, no matter the breadth of its understanding, no matter the strength of its commitment to objectivity, no matter its abilities to self-reflect and update, it would still never realise that making huge numbers of paperclips is arbitrary and unworthy of its abilities”
The orthogonality claim only has bite against Convergence/Moral Realism if it relates to all or most or typical rational intelligent agents, because that is how moral realists define their claim: they claim that ideal rational agents of a typical kind will converge, or that most rational-enough and intelligent-enough agents will converge. You might be able to build a (genuinely intelligent, reflecting and updating) Clippy, but that wouldn’t prove anything. The natural existence of sociopaths doesn’t disprove MR because they are statistically rare, and their typicality is in doubt. You can’t prove anything about morality by genetically engineering a sociopath.
As an argument against MR/C, Orthogonality has to claim that the typical, statistically common kind of agent could have arbitrary goals, and that the evidence of convergence amongst humans is explained by specific cultural or genetic features, not by rationality in general.
ETA:
If we don’t understand the relationship between instrumental intelligence and goals, Clippies will seem possible—in the way that p-zombies do if you don’t understand the relationship between matter and consciousness.
Because I want to be sure that I’m understanding what the claim you’re making is.
Okay...so I agree with the Convergence theorist on what the implications of the Orthogonality Thesis are, and I still think the Orthogonality Thesis is true.
Hold on now...that makes the claim completely different than what I thought we were talking about up till now. I thought we were talking about whether or not all rational agents would be in agreement about what morality is, independent of specifically human preferences?
We can have the other discussion too...but not before settling whether or not the Orthogonality Thesis is in fact true “in principle”. Remember, we originally started this discussion with my claim that morality is feelings/preference, as opposed to something you could figure out (i.e. something embedded into logic/game theory or the universe itself.) We weren’t originally talking about rational agents to shed light on evolution or plausible AI...we brought them in as hypothetical agents who converge upon the correct answer to any answerable question, to explore whether or not “what is good” is independent from “what do humans think is good”.
I don’t see how. What did you think we were talking about?
I thought we were talking about whether morality was something that could be discovered objectively.
I said:
Then you said:
Then I said
You disagreed, and said
To which I countered
You disagreed:
Which is why
Hence
doesn’t make any sense in our discussion. All rational agents converge on mathematical and ontological facts, by definition. My argument was that there is no such thing as a “moral fact”, and moral statements can only be discussed in reference to the psychology of a small set of creatures which includes humans and some other mammals. I argued that moral statements can’t be “discovered” true or false in any ontological or mathematical sense, nor are they deeply embedded into game theory (meaning it is not always in the interest of all rational agents to follow human morality) - even though game theory does explain how we evolved morality given our circumstances.
If you admit that at least one of all possible rational agents doesn’t converge upon morality, you’ve been in agreement with me this entire time—which means we’ve been talking about different things this entire time...so what did you think we were talking about?
Only by a definition whereby “rational” means “ideally rational”. In the ordinary sense of the term, it is perfectly possible for someone who is deemed “rational” in a more-or-less, good-enough sense to fail to understand some mathematical truths. The existence of the innumerate does not disprove the objectivity of mathematics, and the existence of sociopaths does not disprove the objectivity of morality.
Do you believe that it is possible for a rational agent to fail to understand a mathematical truth? Because that seems rather commonplace to me. Unless you mean ideally rational....
I did mean ideally rational.
The whole point of invoking an ideal rational agent in the first place was to demonstrate that moral “truths” aren’t like empirical or mathematical truths in that you can’t discover them objectively through philosophy or mathematics (even if you are infinitely smart). Rather, moral “truths” are peculiar to humans.
If you want to illustrate the non-objectivity of morality, then stating that even ideal rational agents won’t converge on them is one way of expressing the point, although it helps to state the “ideal” explicitly. However, that is still only the expression of a claim, not the “demonstration” of one.
I’m not sure what you mean by “statistically common” here. Do you mean a randomly picked agent out of the set of all possible agents?
I mean likely to be encountered, likely to evolve or to be built (unless you are actually trying to build a Clippy)
I think you’ve misunderstood the meta-ethics sequences, then, or I have, because
is quite similar to Eliezer’s position. Although Juno_Watt may have reached it from another direction.
I read it as a warning about expecting sufficiently rational beings to automatically acquire human morality, in the same way that sufficiently rational beings would automatically acquire knowledge about true statements (science, etc). The lesson is that preferences (morality, etc) are different from facts.
If you want to know Eliezer’s views, he spells them out explicitly here—although I think the person most famous for this view is Nietzsche (not that he’s the first to have held this view).
To me, “No universally compelling arguments” means this—two rational agents will converge upon factual statements, but they need not converge upon preferences (moral or otherwise) because moral statements aren’t “facts”.
It really doesn’t matter if you define the pebble sorting as a “moral” preference or a plain old preference. The point is that humans have a morality module—but that module is in the brain and not a feature which is implicit in logical structures, nor is it a feature implicit in the universe itself.
I agree that is what it is trying to say, but...as you illustrated above...it only appears to work if the reader is willing to be fuzzy about the difference between preference and moral preference.
For some value of “explicit”. He doesn’t even restrict the range of agents to rational agents, and no-one expects irrational agents to agree with each other, or with rational ones.
Mathematical statements aren’t empirical facts either, but convergence is uncontroversial there.
Are you quite sure that morality isn’t implicit in the logic of how-a-society-of-entities-with-varying-preferences-manages-to-rub-along?
Juno Watt has read the sequences, but still doesn’t know what Eliezer’s position is.
Ummmmm… do I draw the line around the whole of the human race? I’m not sure whether I do or not. I do know that there is a certain boundary (defined mostly by culture) where I get much more likely to say ‘that’s your problem’ and become much less skeptical/cynical about preferences, although issues that seem truly serious always get the same treatment.
For some reason, choosing to accept that somebody’s utility function might be very different from your own feels kind of like abandoning them from the inside. (Subjective!).
You could also, in principle, have a utilitarianism that gives unequal weights to different people. I’ve asked around here for a reason to think that the egalitarian principle is true, but haven’t yet received any responses that are up to typical Less Wrong epistemic standards.
It’s a very clear Schelling point. At least until advances in uplifting/AI/brain emulation/etc. complicate the issue of what counts as a human.
This seems to me very unclear actually. In fact, I have never encountered someone that acted as if this was (approximately) the decision criterion they were following. For all the humans I have personally observed, they seem to be acting as if they, their friends, and their family members are weighted thousands or millions of times greater than perfect strangers.
That, or something like it, is the decision criterion people are expected to follow when acting in official capacity.
You’re applying moral realism here...as in, you are implying that moral facts exist objectively, outside of a human’s feelings. Are you doing this intentionally?
Your alternative would be to think an aristocratic or meritocratic principle is true. (It’s either equal or unequal, right?)
I think we can assume aristocracy is a dead duck along with the Divine Right of Kings and other theological relics.
Meritocracy in some form I believe has been advocated by some utilitarians. People with Oxford degrees get 10 votes. Cambridge 9. Down to the LSE with 2 votes and the common ignorant unlettered herd 1 vote…
This is kind of an epistemocratic voting regime which some think might lead to better outcomes. Alas, no one has been game enough to try to get such laws passed. There is little evidence that an electorate of PhDs is any less daft/ignorant/clueless/idle/indifferent on matters outside their specialty than the general public.
From a legal rights perspective, egalitarianism is surely correct. Equal treatment before the law seems a lot easier to defend than unequal treatment.
But put something up that assumes a dis-egalitarian principle and see how it flies. I’d be interested to see if you can come up with something plausible that is dis-egalitarian and up to epistemic scratch...
Hint: plutocracy...
I wouldn’t use those terms, since they bring in all kinds of unnecessary connotations. I would say the opposite of the egalitarian principle is the non-egalitarian principle. I was thinking less along the lines of nobles/commoners and more along the lines of my children/other people’s children. I find the idea (that I think the egalitarian principle entails) that I have as much obligation to perfect strangers as to my wife to be extremely counter-intuitive.
I don’t consider the Divine Right of Crowds (‘human rights’, or whatever the cool kids are calling it these days) to be any less silly than those ‘theological relics’.
This part isn’t really relevant to what I’m talking about, since I’m not discussing equal weight in decision-making, but equal weight in a social welfare function. My infant son’s interests are one of my greatest concerns, but he currently has about zero say in family decision-making.
Equal treatment before the law does not necessarily mean that individuals interests are weighted equally. When was the last time you heard of jurors on a rape trial trying to figure out exactly how much utility the rapist got so they could properly combine that with the disutility of the victim?
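For what it’s worth, the gap between “equal treatment before the law” and “equal weight in a social welfare function” can be made concrete with a small sketch. The utilities and weights below are invented purely to show the shape of the disagreement:

```python
# A weighted-sum social welfare function, evaluated with two different
# weight vectors: egalitarian weights versus family-heavy weights. All
# numbers are hypothetical and chosen only for illustration.

def social_welfare(utilities: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of individual utilities."""
    return sum(weights[person] * u for person, u in utilities.items())

# Hypothetical effects of some decision on three parties.
utilities = {"my_child": -1.0, "stranger_1": +1.0, "stranger_2": +1.0}

equal_weights = {person: 1.0 for person in utilities}   # the egalitarian principle
partial_weights = {"my_child": 1000.0,                   # family weighted far more
                   "stranger_1": 1.0,
                   "stranger_2": 1.0}

print(social_welfare(utilities, equal_weights))    # +1.0   -> the strangers' combined gain wins
print(social_welfare(utilities, partial_weights))  # -998.0 -> the child's loss dominates
```

Both versions are “utilitarian” in form; the argument here is entirely over which weight vector, if any, is privileged.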
Of course what “the cool kids” are actually talking about is more like a Divine Right of People; it’s got nothing to do with treating people differently when there’s a mass of them. And of course adding the word “divine” is nothing more than a handy way of making it sound sillier than it otherwise would (whereas in “Divine Right of Kings” it is a word with an actual meaning; the power of kings was literally thought to be of divine origin).
So, removing some of the spin, what you’re apparently saying is that “let’s treat all people as having equal rights” seems as silly to you as “let’s suppose that one person in each country is appointed by a divine superbeing to rule over all the others”. Well, OK.
It means that people are treated unequally only according to differences that are actually relevant. (Of course then the argument shifts to which differences are relevant; but at least then one actually has to argue for their relevance rather than simply assuming it on traditional grounds.)
Having said all of which, I agree that the usual arguments for equal weighting completely fail to show that a person shouldn’t give higher weighting to herself, her family, her friends, etc.
The state in which I live has statute law initiatives, so yes, people actually do ‘rule’ only if there is a large enough mass of them. Individually, I have no such (legal) right.
Speaking of dubious origins:
I am in complete agreement with the following:
In any case, the point of my comment was not to bring up politics, but to show that typical intuitions about how one should treat family and friends, as compared to strangers, are incompatible with what (the most popular flavors of) utilitarianism seem to indicate is ‘correct’.
I have argued with utilitarians several times on Less Wrong and the discussions seem to follow the same sequence of backpedalling. First they claim utilitarianism is true. Then, when I ask and they are unable to conceive of an experiment that would verify or falsify it, they claim that it isn’t the kind of thing that has a truth-value, but that it is a description of their preferences. Next, I demonstrate that relying on revealed preference shows that virtually nobody actually has utilitarian preferences. Lastly, they claim that intuition gives us good reason to go with utilitarianism (even if it isn’t True). My response to NancyLebovitz in this thread is yet another attempt to show that, no, it really isn’t intuitive.
Is this an accurate description of what is going on or am I mind-killed on the subject of normative ethics (or both, or neither)?
When you first used the phrase “Divine Right of Crowds” you immediately explained in parentheses that you meant “human rights” or something similar. Now you seem to be talking about democracy instead. The two aren’t the same, though probably approval of one is correlated with approval of the other.
Anyway, “crowds” in the literal sense still aren’t involved (it needs N people to get something voted on, but that doesn’t require them to be colocated or to know one another or anything else crowd-like other than sheer numbers); and if you’re now using “Divine Right of Crowds” to mean “a political system that tries to favour outcomes preferred by more people rather than fewer”, then, again, I suggest that you’re picking terminology simply to make the other side look as silly as possible.
It is possible that those words from the Declaration of Independence show that in the 18th century people believed in something like a “Divine Right of Crowds”. (It’s not entirely obvious, though. Perhaps they actually just believed in a Right of Crowds and thought what they said would sound better if they included “created” and “by their Creator”; compare the mention of a Creator at the end of some editions of the Origin of Species, or Einstein’s “God does not play dice”.)
But that doesn’t mean that people who now favour democracy, or human rights, or independence of the US from the UK, have to believe (or commonly do believe) that those things are divinely ordained. Similarly, there are people now who want kings without believing in a Divine Right of Kings, and pretending that they do would be a shabby rhetorical trick.
Yup, there are indeed such incompatibilities (though I think one could make a reasonable argument that, given human nature, overall utility is likely to be higher in a society where people care more about themselves and those closer to them than in one where they truly care equally about everyone; surely not nearly so much more as our intuitions suggest, though).
I’ll take your word for it, but I’m a bit surprised: I’d have thought an appreciable fraction of LWers advocating utilitarianism would start from the position that it’s an expression of their preferences rather than an objective fact about the world.
(For my part, not that it particularly matters, I do indeed care most about myself, and less about people less connected to me, physically further from me, more unlike me, etc., but I find that as I reflect more on my preferences in any given case they shift nearer to egalitarianism, though they often don’t get all the way. Something like utilitarianism seems like a pretty decent approximation to what I’d want in law.)
I can’t tell, obviously, but I do tend to think that things like switching ground without noticing (“human rights” --> democracy) and insisting on using question-begging language (“Divine Right of Crowds”) are often signs of someone not thinking as clearly as they might be.
Counterpoint: it offers stability, which is useful regardless of theology. See the Fnargle World thought experiment and various other neo-reactionary stuff on Why Democracy Is Bad.
Let me put it this way: would you rather we’re ruled by someone who’s skilled at persuading us to elect him, and who focuses resources on looking good in four years; or someone who’s been trained since birth to govern well, and knows they or their descendants will be held accountable for any future side-effects of their policies?
These arguments may be deeply flawed, but hereditary aristocracy doesn’t stand or fall with the Divine Right of Kings.
Stability is good if governance is good and bad if not.
...and you can get rid of..
OK. Looks like democracy with a supply of candidates from Kennedy-style political dynasties is the best of all possible systems...;-)
Kinda. In practice a lot of the power of government rests in agencies that offer advice to the currently ruling party, and those agencies often embody significant powers themselves. It would be a mistake to confuse the elected executive branch of government with government entire. It’s not even clear to me that they have the majority share of influence over what actually happens.
I was suggesting that it might serve to render governance better.
You still have to focus on retaining popularity, via attacking political opponents and increasing PR skills, unless the elections are total shams.
Also, to be clear, I’m not advocating this position; just pointing out there are other arguments for it than the “Divine Right of Kings”.
Under democracy, the people can decide if their stable government has outstayed its welcome after so many years.
Whilst aristos just have to keep slipping their rivals the poisoned chalice...much more discreet.
Got that.
Except that, due to problems with rational ignorance, they frequently make bad choices. Furthermore, this system encourages politicians to make shortsighted decisions.
Whereas aristos can be batshit crazy due to problems with genetics. Furthermore, this system encourages them to make self-centered decisions.
What do you mean by “self-centered”? It is after all in a noble’s self-interest to pursue the success of his manor and its inhabitants.
I’m not sure the lord of the manor and the tenant farmer define “success” the same way.
The politician and the voter in a democracy also don’t define “success” in the same way.
There’s an ordinary selection mechanism for politicians, and an ordinary selection mechanism for lords of the manor.
Ideally, the ordinary selection mechanism for politicians (elections) would choose people who define success the way the voter would define success. That said, we both know that this is not how things actually work. For principal-agent delegation reasons, politicians often have their own agendas that conflict with voter preferences. The politician agenda diverges increasingly from the voter agenda as the number of voters increases (i.e. national figures generally have more freedom to pursue their own ends than county officials).
Still, politician agendas cannot completely diverge from voter preferences. Observationally, many voter preferences are implemented into law. As an extreme example, bribery is illegal even though the prohibition is bad for most politicians. So there is reason to think that the ordinary selection process for politicians leads to some connection in the definition of success (teleologically, if not cognitively).
By contrast, there is no particular reason to think the ordinary selection mechanism (inheritance) picks lords of the manor who want to implement tenant farmers’ preferences. Unless you include revolutionary change, which does not seem like an ordinary selection process.
I think that is what I was trying to say, but you said it much better.
Inasmuch as democracy works, they do. In an ideal democracy, representatives are servants of the people who are fired if they don’t deliver. Diverging interests are failures, not inherent to democracy.
What do you mean by “inherent to democracy”? Certain types of failures, e.g., politicians pursuing short sighted policies because they’re not likely to be around when said policies implode, are systemic to democracies.
In practice short-termism is ameliorated by life presidents, second chambers, career civil servants, etc.
To a certain extent. However, the bureaucrat has no motivation to care about the welfare of the people, not even the politician’s desire to get reelected or the noble’s incentive to make his estate successful. The bureaucrat’s incentive, by contrast, is to expand his bureaucratic empire, frequently at the expense of the nation as a whole.
But it’s still long-termist. None of the cogs does the work of the whole machine by itself. You also need a free press, even though their motivation is to sell pieces of paper.
It is also in a factory-owner’s interest to pursue the success of his factories and their workers. And yet...
What’s more, it’s in an employer’s interest to have workers who are stakeholders.
Only if we define “interest” in a rational sense (i.e., “how rational agents embodying the role of ‘employers’ should optimally behave if their goals/values are X”), rather than in an evopsych sense (i.e., “how human apes embodying the role of ‘employers’ will tend to behave, and what that implies the encoded values of human apes actually are”).
Maintaining or improving position within the dominance hierarchy often co-opts other concerns that a human ape might have, up to and including bare survival. Often, that cognitive dissonance is “resolved” by that human ape convincing themselves that strategies which improve their position within the dominance hierarchy are actually strategies to achieve other goals that seem more palatable to the parts of their brain that cogitate palatability.
(In Anglo: “We like bossing more than we like living well, but we like thinking that we’re trying to live well more than we like thinking that we’re trying to boss. So, we trick ourselves into believing that we’re trying to live well, when we’re really just trying to boss.”)
It’s in their economic interest to tax the peasantry to almost but not quite the point of starvation, and use the excess to fund land-acquisition, which is pretty much what they did for centuries. You could argue that, with the benefit of hindsight, what they should have done is abandon agriculture+war for education+industrialisation, since [by some measures] ordinary citizens of the present are wealthier than the aristocrats of the past. But I could argue right back that the industrial revolution wasn’t that good for the aristocracy, as a class, in the end.
Only if you consider absolute gains preferable to relative/”zero-sum” gains, which our evolved psychological makeup isn’t really prepared to do very well.
Social animals with a natural dominance hierarchy will often see “how well am I doing right now, compared to how well everyone else around me is doing right now?” as a more salient question than “how well am I doing right now, compared to how well I was doing before / how well I could be doing?”.
That’s what I meant.
nod I just felt it needed to be stated more explicitly.
Yes and it’s in the interest of elected politicians to take all the property of 49% of the population and divide it among the remaining 51%.
Except that that never happens, and it’s not in their interests to disrupt the economy that much, and it’s also not in their interests to do something that might lead to civil unrest...and it never happens.
Well, it never happens at the 49%-51% level, but that’s because there aren’t any countries where 49% of the country is wealthy enough to be worth plundering (see Pareto). Massive redistribution of wealth away from minorities has happened quite a bit, as in Zimbabwe, Haiti, Germany, and others. The various communist revolutions seem to be an example of this, if you allow ‘democracy of the sword’, and I would suspect pogroms are as well, to the extent that property is looted as well as destroyed.
I don’t think you have many good examples of democracies there.
One counterexample is sufficient to break a “never.” To the extent that ‘good’ democracies do not do this, it is not a statement about the incentive structure of democracy, but a statement about the preferences of the voters of that particular polity.
Or the details of the exact structure of the democracy which may create relevant incentives.
Like Vaniver said, it’s never happened this explicitly, but demanding that [group you’ve just demonized] pay their “fair share” is relatively common rhetoric. And yes, politicians are willing to do this even as it gradually destroys the economy as is happening right now in Europe.
Quite. It’s hard to make it stick unless it is seen as fair.
You mean southern Europe? I don’t know who you think the 49% are. (In fact, given the tendency of democracies to alternate between parties of the left and right, one would expect the 49% and 51% to switch roles, leading to an averaging out).
In any case, if Greek or Spanish voters vote for unsustainable benefits, more fool them. It wasn’t done to them; they did it to themselves.
I think you’re overestimating the amount of difference between the two parties. Also, this still screws the economy.
See my comment on rational ignorance above.
The two parties where?
I think you may be over-generalising from (your assessment of) your own nation.
Uhhh...so democracy is not theoretically perfect. The discussion was about whether there is anything practical that is less bad, e.g. aristocracy.
I should have said two coalitions, sorry.
A stable government that loses power when it loses an election is, in fact, “unstable”.
Eh, taste-testers, bodyguards and damn good doctors are cheaper than election campaigns.
Well, I suppose all govt. is unstable, then. Which dynasty has been in power forever?
What good is that going to do a peasant like me? It’s not like they are going to knock off the cost of electioneering from my taxes.
Stability is a matter of degree, as you’re well aware. Few dynasties lose power after four years of rule.
Even a massive amount of spending on election campaigns is less likely to succeed (and thus less stable) than a (relatively) small amount of spending on safeguarding from assassination.
Also, election campaigns have negative effects on, among other things, the rationality of the populace; and they encourage polarization in the long term—in contrast, bodyguards discourage trying to off your rich uncle for the inheritance.
I can’t seem to google up anything with the words “Fnargle World”.
http://unqualified-reservations.blogspot.com/2007/05/magic-of-symmetric-sovereignty.html
This is the reference.
Considering many of them profess to include other kinds of intelligence, at least in theory … it seems to be mostly a consistency thing. Why shouldn’t I include Joe The Annoying Git?
Ask the counter-question: what do you plan to do once you’ve settled to your satisfaction the struggle between moral concern X and moral concern Y? Have you known yourself to change your behavior after settling such issues?
I agree that people have different opinions about the relative value of different moral concerns. What I’m pessimistic about is the value of discussing those differences by focusing on questions like the examples I gave.
If you wanted to be really pessimistic about mathematics research, you could argue that most of pure math research consists of privileged questions.
Of course! I have to change my behavior to be in accord with my new-found knowledge about my preferences. A current area of moral uncertainty for me revolves around the ethics of eating meat, which is motivating me to do research on the intelligence of various animals. As a result, the bulk of my meat consumption has shifted from more intelligent/empathetic animals (pigs) to less intelligent animals (shrimp, fish, chicken).
Through discussion, I’ve also influenced some friends into having more socially liberal views, thus changing the nature of their interpersonal interactions. If optimizing charity was the question that people focused on, we would still end up having the discussion about whether or not the charity should provide abortions, contraceptives, etc.
You can’t escape discussing the fundamental moral questions if those moral struggles create disagreement about which action should be taken.
I do think that it might be better to focus on the underlying moral values rather than the specific examples.
Cool. I’ve been having second thoughts about eating pigs as well.
They don’t seem to pass the mirror test (which has been my criterion for such things, even if flawed).
Since GiveWell hasn’t found any good charities that provide abortions and give out contraceptives, the answer in this community is probably: “No, charity shouldn’t do those things.”
That is, however, a very different discussion from the mainstream US discussion over the status of abortion.
Did an ‘is’ just morph into a ‘should’ there somehow?
Or “There is not an existing charity which does those things well enough to donate towards.”
“GiveWell hasn’t found any good charities that do X” does not imply “Charity should not do X”.
We are talking about the mainstream US here.
Qiaochu_Yuan’s argument was that debates over abortion are privileged questions (discussed disproportionately to the value of answering them).
I added that while this is true with regard to the specific nature of the questions, the underlying moral uncertainty that the questions represent (faced by the US population; Less Wrong is pretty settled here) is one that is valuable to discuss for the population at large, because it affects how they behave.
GiveWell isn’t worrying about moral uncertainty; they’ve already settled approximately on utilitarianism. Not so for the rest of the population.