…and it’s necessary to have a reasoned motivation for that. If you could really disprove things just by unmotivated refusal to use language, you could disprove everything. Meta-principle: treat one-size-fits-all arguments with suspicion.
ETA: you’ve misunderstood the grandparent, the point of which is not about a refusal to use language but rather about using it more precisely so as to avoid miscommunication and errors.
Probably because he doesn’t know what to replace it with. You introduced the words into the conversation. We’re trying to figure out what you mean by them.
morality: concern with the distinction between good and evil or right and wrong; right or good conduct
good: morally admirable
Ethics (also known as moral philosophy) is a branch of philosophy which seeks to address questions about morality; that is, about concepts such as good and bad, right and wrong, justice, and virtue.
Let me try to guess the next few moves in hopes of speeding this up:
A: Admirable according to whom? (And why’d you use “morally” in the definition of “morality”?)
B: Most people. / Everyone. / Everyone who matters.
A: So basically, if a lot of people or everyone admires something, it is morally good? It’s a popularity contest?
B: No, it’s just objectively admirable.
A: I don’t understand what it would mean to be “objectively admirable”?
B: These are two common words. How can you not understand them?
A: Each might make sense separately, but together no. Perhaps you mean “universally admirable”?
B: Yeah, that sounds good.
A: So basically, if everyone admires something, you will want to call it “morally good.” They will probably appreciate and agree to those approving words, seeing as they all admire it as well.
C: Now that you have enough of a handle on “morality” to see the difference between a theory of morality and a theory of flight, you can read the literature.
You’re aware that words have more than one definition, and in debates it is customary to define key terms before beginning? Perhaps I could interest you in this.
It could, that’s true. Only, I think, if we clear up who’s doing the admiring. There would be disagreement among a lot of people as to what’s admirable.
The issue is that some words are floating, disconnected from anything in reality, and meaningless. Consider the question: do humans have souls?
What would it mean, in terms of actual experience, for humans to have souls? What is a soul? Can you understand how if someone refused to explain what a soul is, claiming it to be a basic thing which no other words can describe, it would be pretty confusing?
What would it mean, in terms of actual experience, for something to be “morally right”? What characteristics make it that way, and how do you know?
To disbelieve in souls, you have to know what “soul” means. You seem to have mistaken an issue of truth for one of meaning.
Can you understand how if someone refused to explain what a soul is, claiming it to be a basic thing which no other words can describe, it would be pretty confusing?
I think you are going to have to put up with that unfortunate confusion, since you can’t reduce everything to nothing.
What would it mean, in terms of actual experience, for something to be “morally right”? What characteristics make it that way, and how do you know?
Something is morally right if it fulfils the Correct Theory of Morality. I’m not claiming to have that. However, I can recognise theories of morality, and I can do that with my ordinary-language notion of morality. (The theoretic is always based on the pre-theoretic. We do not reach the theoretic in one bound.) I’m not creating stumbling blocks for myself by placing arbitrary requirements on definitions, like insisting that they are both concrete and reductive.
Why do you believe there exists a Correct Theory of Physics?
As Constant points out here, all the arguments based on reductionism that you’re using could just as easily be used to argue that there is no correct theory of physics.
One difference between physics and morality is that there is currently a lot more consensus about what the correct theory of physics looks like than about what the correct theory of morality looks like. However, that is a statement about the current time: if you were to go back a couple of centuries you’d find that there was as little consensus about the correct theory of physics as there is today about the correct theory of morality.
It’s not an argument by reductionism... it’s simply trying to figure out how to interpret the words people are using, because it’s really not obvious. It only looks like reductionism because someone asks, “What is morality?” and the answer comes: “Right and wrong,” then “What should be done,” then “What is admirable”… It is all moralistic language; if any of it means anything, it all means the same thing.
Well, the original argument, way back in the thread, was NMJablonski arguing against the existence of a “Correct Theory of Morality” by demanding that Peter provide “a clear reductionist description of what [he’s] talking about” while “tabooing words like ‘ethics’, ‘morality’, ‘should’, etc.”
My point is that NMJablonski’s request is about as reasonable as demanding that someone arguing for the existence of a “Correct Theory of Physics” provide a clear reductionist description of what one means while tabooing words like ‘physics’, ‘reality’, ‘exists’, ‘experience’, etc.
Fair enough, though I suspect that by asking for a “reductionist” description NMJablonski may have just been hoping for some kind of unambiguous wording.
My point, and possibly Peter’s, is that given our current state of knowledge about meta-ethics I can give no better definition of the words “should”/”right”/”wrong” than the meaning they have in everyday use.
Note, following my analogy with physics, that historically we developed a systematic way for judging the validity of statements about physics, i.e., the scientific method, several centuries before developing a semi-coherent meta-theory of physics, i.e., empiricism and Bayesianism. With morality we’re not even at the “scientific method” stage.
My point, and possibly Peter’s, is that given our current state of knowledge about meta-ethics I can give no better definition of the words “should”/”right”/”wrong” than the meaning they have in everyday use.
This is consistent with Jablonski’s point that “it’s all preferences.”
Clearly there’s a group of people who dislike what I’ve said in this thread, as I’ve been downvoted quite a bit.
I’m not perfectly clear on why. My only position at any point has been this:
I see a universe which contains intelligent agents trying to fulfill their preferences. Then I see conversations about morality and ethics talking about actions being “right” or “wrong”. From the context and explanations, “right” seems to mean very different things. Like:
“Those actions which I prefer” or “Those actions which most agents in a particular place prefer” or “Those actions which fulfill arbitrary metric X”
Likewise, “wrong” inherits its meaning from whatever definition is given for “right”. It makes sense to me to talk about preferences. They’re important. If that’s what people are talking about when they discuss morality, then that makes perfect sense. What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I’m missing, or if there’s something specific I did to elicit downvotes?
What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I’m missing, or if there’s something specific I did to elicit downvotes?
I don’t know anything about downvotes, but I do think that there is a way of understanding ‘right’ and ‘wrong’ independently of preferences. But it takes a conceptual shift.
Don’t think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
I do think that there is a way of understanding ‘right’ and ‘wrong’ independently of preferences...Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sociology? Psychology? Game theory? Mathematics? What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
How does someone who thinks that ‘morality’ is meaningless discuss the subject with someone who attaches meaning to the word? Answer: They talk to each other carefully and respectfully.
What do you call the subject matter of that discussion? Answer: Metaethics.
What do you call success in this endeavor? Answer: “Dissolving the confusion”.
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
Moral philosophy does not illuminate the nature of confusion; it is the confusion. I am asking, what is missing and what confusion is left if you disregard moral philosophy and talk about right and wrong in terms of preferences?
I’m tempted to reply that what is missing is the ability to communicate with anyone who believes in virtue ethics or deontological ethics, and therefore doesn’t see how preferences are even involved. But maybe I am not understanding your point.
Perhaps an example would help. Suppose I say, “It is morally wrong for Alice to lie to Bob.” How would you analyze that moral intuition in terms of preferences? Whose preferences are we talking about here? Alice’s, Bob’s, mine, everybody else’s? For comparison purposes, also analyze the claim “It is morally wrong for Bob to strangle Alice.”
Due to your genetically hard-coded intuitions about appropriate behavior within groups of primates, your upbringing, cultural influences, rational knowledge about the virtues of truth-telling and preferences involving the well-being of other people, you feel obliged to influence the intercourse between Alice and Bob in a way that persuades Alice to do what you want, without feeling inappropriately influenced by you, by signaling your objection to certain behaviors as an appeal to a higher authority.
“It is morally wrong for Bob to strangle Alice.”
If you say, “I don’t want you to strangle Alice,” Bob might reply, “I don’t care what you want!”
If you say, “Strangling Alice might have detrimental effects on your other preferences,” Bob might reply, “I assign infinite utility to the death of Alice!” (which might very well be the case for humans in a temporary rage).
But if you say, “It is morally wrong to strangle Alice,” Bob might get confused and reply, “You are right, I don’t want to be immoral!” This is really a form of coercive persuasion, since when you say, “It is morally wrong to strangle Alice,” you actually signal, “If you strangle Alice you will feel guilty.” It is a manipulative method that might make Bob say, “You are right, I don’t want to be immoral!”, when what he actually means is, “I don’t want to feel guilty!”
Primates don’t like to be readily controlled by other primates. To get them to do what you want you have to make them believe that, for some non-obvious reason, they actually want to do it themselves.
This sounds like you are trying to explain-away the phenomenon, rather than explain it. At the very least, I would think, such a theory of morality needs to make some predictions or explain some distinctions. For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Complex influences, like your culture and upbringing. That’s also why some people don’t say that it is morally wrong to burn a paperback book while others are outraged by the thought. And those differences and similarities can be studied, among other fields, in terms of cultural anthropology and evolutionary psychology.
It needs a multidisciplinary approach to tackle such questions. But moral philosophy shouldn’t be part of the solution because it is largely mistaken about cause and effect. Morality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense moral philosophy is a meme that is part of a larger effect and therefore can’t be part of a reductionist explanation of itself. The underlying causes of cultural norms and our use of language can be explained by social and behavioural sciences, applied mathematics like game theory, computer science and linguistics.
But rationality shouldn’t be part of the solution because it is largely mistaken about cause and effect. Rationality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense rationality is a meme that is part of a larger effect and therefore can’t be part of a reductionist explanation of itself.
However, these claims are false, so you have to make a different argument.
I’ve seen this sort of substitution-argument a few times recently, so I’ll take this opportunity to point out that arguments have contexts, and if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments! These elisions are in fact necessary to prevent each argument from being a re-derivation of human society from mathematical axioms. Arguers should try to be sensitive to the way in which the context of an argument may or may not change how that argument applies to other subjects. (A simple example: “You should not enter that tunnel because your truck is taller than the ceiling’s clearance” is a good argument only if the truck in question is actually taller than the ceiling’s clearance.) This especially applies when arguments are not meant to be formal, or in fact when they are not intended to be arguments.
These substitution arguments are quite a shortcut. The perpetrator doesn’t actually have to construct something that supports a specific point; instead, they can take an argument they disagree with, swap some words around, leave out any words that are inconvenient, post it, and if the result doesn’t make sense, the perpetrator wins!
Making a valid argument about why the substitution argument doesn’t make sense requires more effort than creating the substitution argument, so if we regard discussions here as a war of attrition, the perpetrator wins even if you create a well-reasoned reply to him.
Substitution arguments are garbage. I wish I knew a clean way to get rid of them. Thanks for identifying them as a thing to be confronted.
Cool, glad I’m not just imagining things! I think that sometimes this sort of argument can be valuable (“That person also has a subjective experience of divine inspiration, but came to a different conclusion”, frex), but I’ve become more suspicious of them recently—especially when I’m tempted to use one myself.
if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments!
Thing is, this is a general response to virtually any criticism whatsoever. And it’s often true! But it’s not always a terribly useful response. Sometimes it’s better to make explicit that bit of context, or that elided step.
Moreover it’s also a good thing to remember about the other guy’s argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises—that is, next time you see what looks to you to be an invalid argument, it may not be even if strictly on a formal level it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it’s not just about substitutions. It’s a general point.
Thing is, this is a general response to virtually any criticism whatsoever. And it’s often true! But it’s not always a terribly useful response. Sometimes it’s better to make explicit that bit of context, or that elided step.
True! This observation does not absolve us of our eternal vigilance.
Moreover it’s also a good thing to remember about the other guy’s argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises—that is, next time you see what looks to you to be an invalid argument, it may not be even if strictly on a formal level it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it’s not just about substitutions. It’s a general point.
For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Guilt works here, for example. (But XiXiDu covered that.) Social pressure also. Veiled threat and warning, too. Signaling your virtue to others as well. Moral arguments are so handy that they accomplish all of these in one blow.
ETA: I’m not suggesting that you in particular are trying to guilt trip people, pressure them, threaten them, or signal. I’m saying that those are all possible explanations as to why someone might prefer to couch their arguments in moral terms: it is more persuasive (as Dark Arts) in certain cases. Though I reject moralist language if we are trying to have a clear discussion and get at the truth, I am not against using Dark Arts to convince Bob not to strangle Alice.
Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sometimes you’ll want to explain why your punishment of others is justified. If you don’t want to engage Perplexed’s “moral realism”, then either you don’t think there’s anything universal enough (for humans, or in general) in it to be of explanatory use in the judgments people actually make, or you don’t think it’s a productive system for manufacturing (disingenuous yet generally persuasive) explanations that will sometimes excuse you.
Assuming I haven’t totally lost track of context here, I think I am saying that moral language works for persuasion (partially as Dark Arts), but is not really suitable for intellectual discourse.
Okay. Whatever he hopes is real (but you think is only confused), will allow you to form persuasive arguments to similar people. So it’s still worth talking about.
Virtue ethicists and deontologists merely express a preference for certain codes of conduct because they believe adhering to these codes will maximize their utility, usually via the mechanism of lowering their time preference.
ETA: And also, as XiXiDu points out, to signal virtue.
Upvoted because I strongly agree with the spirit of this post, but I don’t think moral philosophy succeeds in dissolving the confusion. So far it has failed miserably, and I suspect that it is entirely unnecessary. That is, I think this is one field that can be dissolved away.
imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Which metrics do I use to judge others?
There has been some confusion over the word “preference” in the thread, so perhaps I should use “subjective value”. Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of subjective high value)
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
And why would that even matter as long as I am able to realize what I want without being instantly struck by lightning if I desire or do something that violates the laws of morality? If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter, to whom would it matter and why would I care if I am happy and my preferences are satisfied? Is it some sort of game that I am losing, where those who are the most right win? What if I don’t want to play that game, what if I don’t care who wins?
If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter,
Because it harms other people directly or indirectly. Most immoral actions have that property.
to whom would it matter
To the person you harm. To the victim’s friends and relatives. To everyone in the society which is kept smoothly running by the moral code which you flout.
and why would I care if I am happy and my preferences are satisfied?
Because you will probably be punished, and that tends to not satisfy your preferences.
Is it some sort of game that I am losing, where those who are the most right win?
If the moral code is correctly designed, yes.
What if I don’t want to play that game, what if I don’t care who wins?
Then you are, by definition, irrational, and a sane society will eventually lock you up as being a danger to yourself and everyone else.
Because it harms other people directly or indirectly. Most immoral actions have that property.
Begging the question.
To the person you harm. To the victim’s friends and relatives.
Either that is part of my preferences or it isn’t.
To everyone in the society which is kept smoothly running by the moral code which you flout.
Either society is instrumental to my goals or it isn’t.
Because you will probably be punished, and that tends to not satisfy your preferences.
Game theory? Instrumental rationality? Cultural anthropology?
If the moral code is correctly designed, yes.
If I am able to realize my goals, satisfy my preferences, don’t want to play some sort of morality game with agreed-upon goals and am not struck by lightning once I violate those rules, why would I care?
Then you are, by definition, irrational...
What is your definition of irrationality? I asked: if I am happy, able to reach all of my goals and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
… in response to “Because you will probably be punished, and that tends to not satisfy your preferences.” ?
I think you mean that you should correctly predict the odds and disutility (over your life) of potential punishments, and then act rationally selfishly. I think this may be too computationally expensive in practice, and you may not have considered the severity of the (unlikely) event that you end up severely punished by a reputation of being an effectively amoral person.
Yes, we see lots of examples of successful and happy unscrupulous people in the news. But consider selection effects (that contradiction of conventional moral wisdom excites people and sells advertisements).
I meant that we already do have a field of applied mathematics and science that talks about those things, why do we need moral philosophy?
I am not saying that it is a clear cut issue that we, as computationally bounded agents, should abandon moral language, or that we even would want to do that. I am not advocating to reduce the complexity of natural language. But this community seems to be committed to reductionism, minimizing vagueness and the description of human nature in terms of causal chains. I don’t think that moral philosophy fits this community.
This community doesn’t talk about theology either, it talks about probability and Occam’s razor. Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
This community doesn’t talk about theology either [...] Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
It is a useful umbrella term—rather like “advertising”.
There’s nothing to dispute. You have a defensible position.
However, I think most humans have as part of what satisfies them (they may not know it until they try it), the desire to feel righteous, which can most fully be realized with a hard-to-shake belief. For a rational person, moral realism may offer this without requiring tremendous self-delusion. (disclaimer: I haven’t tried this).
Is it worth the cost? Probably you can experiment. It’s true that if you formerly felt guilty and afraid of punishment, then deleting the desire to be virtuous (as much as possible) will feel liberating. In most cases, our instinctual fears are overblown in the context of a relatively anonymous urban society.
Still, reputation matters, and you can maintain it more surely by actually being what you present yourself as, rather than carefully (and eventually sloppily and over-optimistically) weighing each case in terms of odds of discovery and punishment. You could work on not feeling bad about your departures from moral perfection more directly, and then enjoy the real positive feeling-of-virtue (if I’m right about our nature), as well as the practical security. The only cost then would be lost opportunities to cheat.
It’s hard to know who to trust as having honest thoughts and communication on the issue, rather than presenting an advantageous image, when so much is at stake. Most people seem to prefer tasteful hypocrisy and tasteful hypocrites. Only those trying to impress you with their honesty, or those with whom you’ve established deep loyalties, will advertise their amorality.
What is your definition of irrationality? I asked: if I am happy, able to reach all of my goals and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
It’s irrational to think that the evaluative buck stops with your own preferences.
I’m claiming that there is a particular moral code which has the spooky game-theoretical property that it produces the most utility for you and for others. That is, it is the metric which is Pareto optimal and which is also a ‘fair’ bargain.
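To make “Pareto optimal and also a ‘fair’ bargain” concrete, here is a minimal Python sketch with an invented two-player payoff matrix; the Nash bargaining product is just one assumed way to cash out “fair”, not necessarily what the author of the claim has in mind. In this toy case mutual cooperation is the outcome that is both Pareto optimal and the fair bargain.

```python
# A minimal sketch (invented payoffs) of a rule set that is Pareto optimal
# and a "fair bargain" in the Nash-bargaining sense.

payoffs = {  # (my utility, your utility) for each pair of strategies
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}
disagreement = (1, 1)  # what each of us gets if no shared code is adopted

def dominated(u, outcomes):
    """An outcome is dominated if some other outcome is at least as good
    for both players and is not identical to it."""
    return any(v[0] >= u[0] and v[1] >= u[1] and v != u for v in outcomes)

outcomes = list(payoffs.values())
pareto = {s: u for s, u in payoffs.items() if not dominated(u, outcomes)}

def nash_product(u):
    """One common formalisation of 'fair': the product of each player's
    gain over the disagreement point."""
    return max(u[0] - disagreement[0], 0) * max(u[1] - disagreement[1], 0)

fair_bargain = max(pareto.items(), key=lambda item: nash_product(item[1]))
print("Pareto-optimal outcomes:", pareto)
print("Fair bargain among them:", fair_bargain)  # mutual cooperation here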
So you’re saying that there’s one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I’m not convinced.
Even if it is, though, what the optimal strategy is will change if the net values across the group changes. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent’s utility. It all comes down to subjective values. There exists no other motivating force.
… what the optimal strategy is will change if the net values across the group changes.
True, but that may not be as telling an objection as you seem to think. For example, suppose you run into someone (not me!) who claims that the entire moral code is based on the ‘Golden Rule’ of “Do unto others as you would have others do unto you.” Tell that guy that moral behavior changes if preferences change. He will respond, “Well, duh! What is your point?”
Not to me. I didn’t downvote, and in any case I was the first to use the rude “duh!”, so if you were rude back I probably deserved it. Unfortunately, I’m afraid I still don’t understand your point.
Perhaps you were rude to those unnamed people who you suggest “do not recognize this”.
It’s easy to bristle when someone, in response to you, points out something you thought you obviously knew. This happens all the time when people think they’re smart :)
I’m fond of including clarification like, “subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to “be good”).”
Some ways I’ve found to dissolve people’s language back to subjective utility:
If someone says something is good, right, bad, or wrong, ask, “For what purpose?”
If someone declares something immoral, unjust, unethical, ask, “So what unhappiness will I suffer as a result?”
But use sparingly, because there is a big reason many people resist dissolving this confusion.
Don’t think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Yes! That’s a point that I’ve repeated so often to so many different people [not on LW, though] that I’d more-or-less “given up”—it began to seem as futile as swatting flies in summer. Maybe I’ll resume swatting now I know I’m not alone.
Don’t think of morality as a doctrine guiding you as to how to behave.
This is mainly how I use morality. I control my own actions, not the actions of other people, so for me it makes sense to judge my own actions as good or bad, right or wrong. I can change them. Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Avoiding a person (a) does not (necessarily) persuade them to act differently, but (b) definitely changes the state of the world. This is not a minor nitpicking point. Avoiding people is also called social ostracism, and it’s a major way that people react to misbehavior. It has the primary effect of protecting themselves. It often has the secondary effect of convincing the ostracized person to improve their behavior.
Then I would consider that a case where I could change their behaviour. There are instances where avoiding someone would bother them enough to have an effect, and other cases where it wouldn’t.
Avoiding people who misbehave will change the state of the world even if that does not affect their behavior. It changes the world by protecting you. You are part of the world.
it makes sense to judge my own actions as good or bad, right or wrong. I can change them.
Yes, but if you judge a particular action of your own to be ‘wrong’, then why should you avoid that action? The definition of wrong that I supply solves that problem. By definition if an action is wrong, then it is likely to elicit punishment. So you have a practical reason for doing right rather than doing wrong.
Furthermore, if you do your duty and reward and/or punish other people for their behavior, then they too will have a practical reason to do right rather than wrong.
Before you object “But that is not morality!”, ask yourself how you learned the difference between right and wrong.
ask yourself how you learned the difference between right and wrong.
It’s a valid point that I probably learned morality this way. I think that’s actually the definition of ‘preconventional’ morality: it’s based on reward/punishment. Maybe all my current moral ideas have roots in that childhood experience, but they aren’t covered by it anymore. There are actions that would be rewarded by most of the people around me, but which I avoid because I consider there to be a “better” alternative. (I should be able to think of more examples of this, but I guess one is laziness at work. I feel guilty if I don’t do the cleaning and maintenance that needs doing even though everyone else does almost nothing. I also try to follow a “golden rule” that if I don’t want something to happen to me, I won’t do it to someone else even if the action is socially acceptable amidst my friends and wouldn’t be punished.)
I think that’s actually the definition of ‘preconventional’ morality: it’s based on reward/punishment.
Ah. Thanks for bringing up the Kohlberg stages—I hadn’t been thinking in those terms.
The view of morality I am promoting here is a kind of meta-pre-conventional viewpoint. That is, morality is not ‘that which receives reward and punishment’, it is instead ‘that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level’.
‘that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level’.
How many people? I think (I remember reading in my first-year psych textbook) that most adults functioning at a “normal” level in society are at the conventional level: they have internalized whatever moral standards surround them and obey them as rules, rather than thinking directly of punishment or reward. (They may still be thinking indirectly of punishment and reward; a conventionally moral person obeys the law because it’s the law and it’s wrong to break the law, implicitly because they would be punished if they did.) I’m not really sure how to separate how people actually reason on moral issues, versus how they think they do, and whether the two are often (or ever???) the same thing.
How many people are stuck at that level? I don’t know.
How many people must be stuck there to justify the use of punishment as deterrent? My gut feeling is that we are not punishing too much unless the good done (to society) by deterrence is outweighed by the evil done (to the ‘criminal’) by the punishment.
And also remember that we can use carrots as well as sticks. A smile and a “Thank you” provide a powerful carrot to many people. How many? Again, I don’t know, but I suspect that it is only fair to add these carrot-loving pre-conventionalists in with the ones who respond only to sticks.
What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences
Assuming Amanojack explained your position correctly, then there aren’t just people fulfilling their preferences. There are people doing all kinds of things that fulfill or fail to fulfill their preferences—and, not entirely coincidentally, which bring happiness and grief to themselves or others. So then a common reasonable definition of morality (that doesn’t involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
there aren’t just people fulfilling their preferences.
You missed a word in my original. I said that there were agents trying to fulfill their preferences. Now, per my comment at the end of your subthread with Amanojack, I realize that the word “preferences” may be unhelpful. Let me try to taboo it:
There are intelligent agents who assign higher values to some futures than others. I observe them generally making an effort to actualize those futures, but sometimes failing due to various immediate circumstances, which we could call cognitive overrides. What I mean by that is that these agents have biases and heuristics which lead them to poorly evaluate the consequences of actions.
Even if a human sleeping on the edge of a cliff knows that the cliff edge is right next to him, he will jolt if startled by noise or movement. He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it. Similarly, under conditions of sufficient hunger, thirst, fear, or pain, the analytical parts of the agent’s mind give way to evolved heuristics.
definition of morality (that doesn’t involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
If that’s how you would like to define it, that’s fine. Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it
I suspect it’s a matter of degree rather than either-or. People sleeping on the edges of cliffs are much less likely to jolt when startled than people sleeping on soft beds, but not 0% likely. The interplay between your biases and your reason is highly complex.
Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
Yes; absolutely. I suspect that a coherent definition of morality that isn’t contingent on those will have to reference a deity.
I don’t understand what you mean by preferences when you say “intelligent agents trying to fulfill their preferences”. I have met plenty of people who were trying to do things contrary to their preferences. Perhaps before you try (or someone tries for you) to distinguish morality from preferences, it might be helpful to distinguish precisely how preferences and behavior can differ?
Example? I prefer not to stay up late, but here I am doing it. It’s not that I’m acting against my preferences, because my current preference is to continue typing this sentence. It’s simply that English doesn’t differentiate very well between “current preferences”= “my preferences right this moment” and “current preferences”= “preferences I have generally these days.”
But I want an example of people acting contrary to their preferences, you’re giving one of yourself acting according to your current preferences. Hopefully, NMJablonski has an example of a common action that is genuinely contrary to the actor’s preferences. Otherwise, the word “preference” simply means “behavior” to him and shouldn’t be used by him. He would be able to simplify “the actions I prefer are the actions I perform,” or “morality is just behavior”, which isn’t very interesting to talk about.
“This-moment preferences” are synonymous with “behavior,” or more precisely, “(attempted/wished-for) action.” In other words, in this moment, my current preferences = what I am currently striving for.
Jablonski seems to be using “morality” to mean something more like the general preferences that one exhibits on a recurring basis, not this-moment preferences. And this is a recurring theme: that morality is questions like, “What general preferences should I cultivate?” (to get more enjoyment out of life)
Ok, so if I understand you correctly:
It is actually meaningful to ask “what general preferences should I cultivate to get more enjoyment out of life?” If so, you describe two types of preference: the higher-order preference (which I’ll call a Preference) to get enjoyment out of life, and the lower-order “preference” (which I’ll call a Habit or Current Behavior rather than a preference, to conform to more standard usage) of eating soggy bland french fries if they are sitting in front of you regardless of the likelihood of delicious pizza arriving. So because you prefer to save room for delicious pizza yet have the Habit of eating whatever is nearby and convenient, you can decide to change that Habit. You may do so by changing your behavior today and tomorrow and the day after, eventually forming a new Habit that conforms better to your preference for delicious foods.
Am I describing this appropriately?
If so, by the above usage, is morality a matter of Behavior, Habit, or Preference?
Sounds fairly close to what I think Jablonski is saying, yes.
Preference isn’t the best word choice. Ultimately it comes down to realizing that I want different things at different times, but in English future wanting is sometimes hard to distinguish from present wanting, which can easily result in a subtle equivocation. This semantic slippage is injecting confusion into the discussion.
Perhaps we have all had the experience of thinking something like, “When 11pm rolls around, I want to want to go to sleep.” And it makes sense to ask, “How can I make it so that I want to go to sleep when 11pm rolls around?” Sure, I presently want to go to sleep early tonight, but will I want to then? How can I make sure I will want to? Such questions of pure personal long-term utility seem to exemplify Jablonksi’s definition of morality.
Amanojack has, I think, explained my meaning well. It may be useful to reduce down to physical brains and talk about actual computational facts (i.e. utility function) that lead to behavior rather than use the slippery words “want” or “preference”.
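As a minimal sketch of what “reduce down to a utility function” might look like (the state variable and numbers are invented for illustration, not a claim about how brains actually compute), the 11pm example above becomes: what the agent does at each moment is whatever action its current utility function ranks highest, and that ranking shifts as the agent’s state shifts.

```python
# Toy "agent as utility function": behavior at each moment is the action
# the current utility function ranks highest; the 11pm problem appears as
# the ranking changing with the agent's state. All numbers are made up.

def utility(action, state):
    if action == "keep_typing":
        return 5 - 3 * state["tiredness"]   # less appealing the more tired I am
    if action == "go_to_sleep":
        return 1 + 4 * state["tiredness"]   # more appealing the more tired I am
    return 0

def act(state, actions=("keep_typing", "go_to_sleep")):
    # "Current preference" is just the argmax of the current utility function.
    return max(actions, key=lambda a: utility(a, state))

print(act({"tiredness": 0.2}))  # keep_typing -- what I want now
print(act({"tiredness": 0.9}))  # go_to_sleep -- what I will want at 11pm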
Clearly there’s a group of people who dislike what I’ve said in this thread, as I’ve been downvoted quite a bit.
Same here.
“Those actions which I prefer” or “Those actions which most agents in a particular place prefer” or “Those actions which fulfill arbitrary metric X”
It doesn’t mean any of those things, since any of them can be judged wrong.
Likewise, “wrong” inherits its meaning from whatever definition is given for “right”. It makes sense to me to talk about preferences. They’re important. If that’s what people are talking about when they discuss morality, then that makes perfect sense.
Morality is about having the right preferences, as rationality is about having true beliefs.
What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Do you think the sentence “there are truths no-one knows” is meaningful?
Morality is about having the right preferences, as rationality is about having true beliefs.
I understand what it would mean to have a true belief, as truth is noticeably independent of belief. I can be surprised, and I can anticipate. I have an understanding of a physical world of which I am part, and which generates my experiences.
It does not make any sense for there to be some “correct” preferences. Unlike belief, where there is an actual territory to map, preferences are merely a byproduct of the physical processes of intelligence. They have no higher or divine purpose which demands certain preferences be held. Evolution selects for those which aid survival, and it doesn’t matter if survival means aggression or cooperation. The universe doesn’t care.
I think you and other objective moralists in this thread suffer from extremely anthropocentric thinking. If you rewind the universe to a time before there are humans, in a time of early expansion and the first formation of galaxies, does there exist then the “correct” preferences that any agent must strive to discover? Do they exist independent of what kinds of life evolve in what conditions?
If you are able to zoom out of your skull, and view yourself and the world around you as interesting molecules going about their business, you’ll see how absurd this is. Play through the evolution of life on a planetary scale in your mind. Be aware of the molecular forces at work. Run it on fast forward. Stop and notice the points where intelligence is selected for. Watch social animals survive or die based on certain behaviors. See the origin of your own preferences, and why they are so different from some other humans.
Objective morality is a fantasy of self-importance, and a hold-over from ignorant quasi-religious philosophy which has now cloaked itself in scientific terms and hides in university philosophy departments. Physics is going to continue to play out. The only agents who can ever possibly care what you do are other physical intelligences in your light cone.
Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?
It is plainly the case that people can have morally wrong preferences, and therefore it is no argument against ethics that ethics are not forced on people. People will suffer if they hold incorrect or irrational factual beliefs, and they will suffer if they have evil preferences. In both cases there is a distinction between right and wrong, and in both cases there is an option.
I think you and others on this thread suffer from a confusion between ontology and epistemology. There can be objective truths in mathematics without having the number 23 floating around in space. Moral objectivity likewise does not demand the physical existence of moral objects.
There are things I don’t want done to me. I should not therefore do them to others. I can reason my way to that conclusion without the need for moral objects, and without denying that I am made of atoms.
Wait. So you don’t believe in an objective notion of morality, in the sense of a morality that would be true even if there were no people? Instead, you think of morality as, like, a set of reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their “selfishness” a desire for the well-being of others?
Everything is non objective for some value of objective. It is doubtful that there are mathematical truths without mathematicians. But that does not make math as subjective as art.
Okay. The distinction I am drawing is: are moral facts something “out there” to be discovered, self-justifying, etc., or are they facts about people, their minds, their situations, and their relationships.
Could you answer the question for that value of objective? Or, if not, could you answer the question by ignoring the word “objective” or providing a particular value for it?
I translate that as: it’s better to talk about “moral values” than “moral facts” (moral facts being facts about what moral values are, I guess), and moral values are (approximately) reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their “selfishness” a desire for the well-being of others.
Something like that? If not, could you translate for me instead?
I take this to mean that, other than that, you agree.
(This is the charitable reading, however. You seem to be sending strong signals that you do not wish to have a productive discussion. If this is not your intent, be careful—I expect that it is easy to interpret posts like this as sending such signals.)
If this is true, then I think the vast majority of the disagreements you’ve been having in this thread have been due to unnecessary miscommunication.
Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?
Mathematics is not Platonically real. If it is we get Tegmark IV and then every instant of sensible ordered universe is evidence against it, unless we are Boltzmann brains. So, no, mathematics does not have an actual territory. It is an abstraction of physical behaviors that intelligences can use because intelligences are also physical. Mathematics works because we can perform isomorphic physical operations inside our brains.
It is plainly the case that people can have morally wrong preferences
You can say that as many times as you like, but that wont make it true.
ETA: You also still haven’t explained how a person can know that.
Mathematics is not Platonically real. If it is we get Tegmark IV and then every instant of sensible ordered universe is evidence against it, unless we are Boltzmann brains.
Only if is-real is a boolean. If it’s a number, then mathematics can be “platonically real” without us being Boltzmann brains.
As opposed to what? Subjective? What are the options? Because that helps to clarify what you mean by “objective”. Prices are created indirectly by subjective preferences and they fluctuate, but if I had to pick between calling them “subjective” or calling them “objective” I would pick “objective”, for a variety of reasons.
No; morality reduces to values that can only be defined with respect to an agent, or a set of agents plus an aggregation process. However, almost all of the optimizing agents (humans) that we know about share some values in common, which creates a limited sort of objectivity in that most of the contexts we would define morality with respect to agree qualitatively with each other, which usually allows people to get away with failing to specify the context.
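As a minimal sketch of “a set of agents plus an aggregation process” (the agents, numbers, and rules here are all invented), note that which action comes out “right” is a function of both which agents you include and which aggregation rule you pick:

```python
# Toy illustration: a moral verdict as (agents' values) + (aggregation rule).
# With these made-up numbers, summing utilities and protecting the worst-off
# agent pick different actions, so the verdict depends on the chosen rule.

agents = {
    "alice": {"tell_truth": 2, "lie": -3},
    "bob":   {"tell_truth": 1, "lie": 12},
    "carol": {"tell_truth": 3, "lie": -1},
}

def aggregate(action, rule):
    values = [prefs[action] for prefs in agents.values()]
    if rule == "sum":   # total-utility aggregation
        return sum(values)
    if rule == "min":   # maximin: judge by the worst-off agent
        return min(values)
    raise ValueError(rule)

for rule in ("sum", "min"):
    verdict = max(("tell_truth", "lie"), key=lambda a: aggregate(a, rule))
    print(rule, "->", verdict)   # sum -> lie, min -> tell_truth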
It still isn’t clear what it means for a preference for murder to be “wrong”!
So far I can only infer your definition of “wrong” to be:
“Not among the correct preferences”
… but you still haven’t explained to us why you think there are correct preferences, besides to stamp your foot and say over and over again “There are obviously correct preferences” even when many people do not agree.
I see no reason to believe that there is a set of “correct” preferences to check against.
Even if there’s no such thing as objective right and wrong, they might easily be able to reason that being bloodthirsty is not in their best selfish interest.
Can people reason that bloodthirst is not a good preference to have...?
For me, now, it isn’t practical. In other circumstances it would be. It need not ever be a terminal goal but it could be an instrumental goal built in deeply.
Funny how you never quite answer the question as stated. Can you even say it is subjectively wrong?
It isn’t ‘funny’ at all. You were trying to force someone into a lose-lose morality-signalling position. It is appropriate to ignore such attempts and instead state what your actual position is.
In keeping with my analogy let’s translate your position into the corresponding position on physics:
I see a universe which contains intelligent agents with opinions and/or beliefs. Then I see conversations about physics and reality talking about beliefs being “true” or “false”. From the context and explanations, “true” seems to mean very different things. Like:
“My beliefs” or “The beliefs of most agents in a particular place” or “Those beliefs which fulfill arbitrary metric X”
Likewise, “false” inherits its meaning from whatever definition is given for “true”. It makes sense to me to talk about opinions and/or beliefs . They’re important. If that’s what people are talking about when they discuss truth, then that makes perfect sense. What I do not understand is when people use the words “true” or “false” independently of any agent’s opinion. I don’t see what they are referring to, or what those words even mean in that context.
Do you still agree with the changed version? If not, why not?
(I never realized how much fun it could be to play a chronophone.)
Based upon my experiences, physical truths appear to be concrete and independent of beliefs and opinions. I see no cases where “right” has a meaning outside of an agent’s preferences. I don’t know how one would go about discovering the “rightness” of something, as one would a physical truth.
It is a poor analogy.
Edit: Seriously? I’m not trying to be obstinate here. Would people prefer I go away?
Seriously? I’m not trying to be obstinate here. Would people prefer I go away?
You’re not being obstinate. You’re more or less right, at least in the parent. There are a few nuances left to pick up but you are not likely to find them by arguing with Eugine.
For the record, I think in this thread Eugine_Nier follows a useful kind of “simple truth”, not making errors as a result, while some of the opponents demand sophistication in lieu of correctness.
I think we’re demanding clarity and substance, not sophistication. Honestly I feel like one of the major issues with moral discussions is that huge sophisticated arguments can emerge without any connection to substantive reality.
I would really appreciate it if someone would taboo the words “moral”, “good”, “evil”, “right”, “wrong”, “should”, etc. and try to make the point using simpler concepts that have less baggage and ambiguity.
That’s not the point. You must use your heuristics even if you don’t know how they work, and avoid demanding to know how they work or how they should work as a prerequisite to being allowed to use them. Before developing technical ideas about what it means for something to be true, or what it means for something to be right, you need to allow yourself to recognize when something is true, or is right.
I’m sorry, but if we had no knowledge of brains, cognition, and the nature of preference, then sure, I’d use my feelings of right or wrong as much as the next guy, but that doesn’t make them objectively true.
Likewise, just because I intuitively feel like I have a time-continuous self, that doesn’t make consciousness fundamental.
As an agent, having knowledge of what I am, and what causes my experiences, changes my simple reliance on heuristics to a more accurate scientific exploration of the truth.
I still think it’s a pretty simple case here. Is there a set of preferences which all intelligent agents are compelled by some force to adopt? Not as far as I can tell.
Morality doesn’t work like physical law either. Nobody is compelled to be rational, but people who do reason can agree about certain things. That includes moral reasoning.
I’m saying that in “to be moral you must follow whatever rules constitute morality” the “must” is a matter of logical necessity, as opposed to the two interpretations of compulsion considered by NMJ: physical necessity, and edict.
You still haven’t explained, within this framework, how one gets that people “should” be moral any more than people “should” play chess. If morality is just another game, then it loses all the force you associate with it, and it seems clear that you are distinguishing between chess and morality.
The rules of physics have a special quality of unavoidability: you don’t have an option to avoid them. Likewise, people are held morally accountable under most circumstances and can’t just avoid culpability by saying “oh, I don’t play that game”. I don’t think these are a posteriori facts. I think physics is definitionally the science of the fundamental, and morality is definitionally where the evaluative buck stops.
… but they’re held morally accountable by agents whose preferences have been violated. The way you just described it means that morality is just those rules that the people around you currently care enough about to punish you if you break them.
In which case morality is entirely subjective and contingent on what those around you happen to value, no?
It can make sense to say that the person being punished was actually in the right. Were the British right to imprison Gandhi?
Peter, at this point, you seem very confused. You’ve asserted that morality is just like chess, apparently comparing it to a game with agreed-upon rules. You’ve then tried to assert that morality is different and is somehow a more privileged game that people “should” play, but the only evidence you’ve given is that in societies with a given moral system people who don’t abide by that moral system suffer. Yet your comment about Gandhi then endorses naive moral realism.
It is possible that there’s a coherent position here and we’re just failing to understand you. But right now that looks unlikely.
As I have pointed out about three times, the comparison with chess was to make a point about obligation, not to make a point about arbitrariness.
the only evidence you’ve given is that in societies with a given moral system people who don’t abide by that moral system suffer.
I never gave that; that was someone else’s characterisation. What I said was that it is an analytical truth that morality is where the evaluative buck stops.
I don’t know what you mean by the naive in naive realism. It is a central characteristic of any kind of realism that you can have truth beyond conventional belief. The idea that there is more to morality than what a particular society wants to punish is a coherent one. It is better as morality, because subjectivism is too subject to get-out clauses. It is better as an explanation, because it can explain how de facto morality in societies and individuals can be overturned for something better.
Hmm… This is reminiscent of Eliezer’s (and my) metaethics¹. In particular, I would say that “the rules that constitute morality” are, by the definition embedded in my brain, some set which I’m not exactly sure of the contents of but which definitely includes {kindness, not murdering, not stealing, allowing freedom, …}. (Well, it may actually be a utility function, but sets are easier to convey in text.)
In that case, “should”, “moral”, “right” and the rest are all just different words for “the object is in the above set (which we call morality)”. And then “being moral” means “following those rules” as a matter of logical necessity, as you’ve said. But this depends on what you mean by “the rules constituting morality”, on which you haven’t said whether you agree.
In particular, I would say that “the rules that constitute morality” are, by the definition embedded in my brain, some set which I’m not exactly sure of the contents of but which definitely includes {kindness, not murdering, not stealing, allowing freedom, …}.
What determines the contents of the set / details of the utility function?
The short answer is: my/our preferences (suitably extrapolated).
The long answer is: it exists as a mathematical object regardless of anyone’s preference, and one can judge things by it even in an empty universe. The reason we happen to care about this particular object is because it embodies our preferences, and we can find out exactly what object we are talking about by examining our preferences. It really adds up to the same thing, but if one only heard the short answer they might think it was about preferences, rather than described by them.
But anyway, I think I’m mostly trying to summarise the metaethics sequence by this point :/ (probably wrongly :p)
I see what you mean, and I don’t think I disagree.
I think one more question will clarify. If your / our preferences were different, would the mathematical set / utility function you consider to be morality be different also? Namely, is the set of “rules that constitute morality” contingent upon what an agent already values (suitably extrapolated)?
No. On the other hand, me!pebble-sorter would have no interest in morality at all, and go on instead about how p-great p-morality is. But I wouldn’t mix up p-morality with morality.
So, you’re defining “morality” as an extrapolation from your preferences now, and if your preferences change in the future, that future person would care about what your present self might call futureYou-morality, even if future you insists on calling it “morality”?
saying “it’s all preferences” about morality is analogous to saying “it’s all opinion” about physics.
No matter what opinions anyone holds about gravity, objects near the surface of the earth not subject to other forces accelerate towards the earth at 9.8 meters per second per second. This is an empirical fact about physics, and we know ways our experience could be different if it were wrong. Do you have an example of a fact about morality, independent of preferences, such that we could notice if it is wrong?
No matter what opinions anyone holds about gravity, objects near the surface of the earth not subject to other forces accelerate towards the earth at 9.8 meters per second per second.
Do you have an example of a fact about morality, independent of preferences,
Killing innocent people is wrong barring extenuating circumstances.
(I’ll taboo the “weasel words” innocent and extenuating circumstances as soon as you taboo the “weasel words” near the surface of the earth and not subject to other forces.)
such that we could notice if it is wrong?
I’m not sure it’s possible for my example to be wrong any more than it’s possible for 2+2 to equal 3.
Yes, well, opinions also anticipate observations. But in a sense, by talking about “observable consequences” you’re taking advantage of the fact that the meta-theory of science is currently much more developed than the meta-theory of ethics.
But some preferences can be moral, just as some opinions can be true. There is no automatic entailment from “it is a preference” to “it has nothing to do with ethics”.
Currently, intuition. Along with the existing moral theories, such as they are.
Similar to the way people determined facts about physics, especially facts beyond the direct observation of their senses, before the scientific method was developed.
Right, and ‘facts’ about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of… intuitions.
You can’t really argue that objective morality not being well-defined means that it is more likely to be a coherent notion.
Mostly by outside view analogy with the history of the development of science. I’ve read a number of ancient Greek and Roman philosophers (along with a few post-modernists) arguing against the possibility of a coherent theory of physics using arguments very similar to the ones people are using against morality.
I’ve also read a (much larger) number of philosophers trying to shoehorn what we today call science into using the only meta-theory then available in a semi-coherent state: the meta-theory of mathematics. Thus we see philosophers, Descartes being the most famous, trying and failing to study science by starting with a set of intuitively obvious axioms and attempting to derive physical statements from them.
I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.
As for how likely, I’m not sure; I just think it’s more likely than a lot of people on this thread assume.
To be clear—you are talking about morality as something externally existing, some ‘facts’ that exist in the world and dictate what you should do, as opposed to a human system of don’t be a jerk. Is that an accurate portrayal?
If that is the case, there are two big questions that immediately come to mind (beyond “what are these facts” and “where did they come from”) - first, it seems that Moral Facts would have to interact with the world in some way in order for the study of big-M Morality to be useful at all (otherwise we could never learn what they are), or they would have to be somehow deducible from first principles. Are you supposing that they somehow directly induce intuitions in people (though, not all people? so, people with certain biological characteristics?)? (By (possibly humorous, though not mocking!) analogy, suppose the Moral Facts were being broadcast by radio towers on the moon, in which case they would be inaccessible until the invention of radio. The first radio is turned on and all signals are drowned out by “DON’T BE A JERK. THIS MESSAGE WILL REPEAT. DON’T BE A JERK. THIS MESSAGE WILL...”.)
The other question is, once we have ascertained that there are Moral Facts, what property makes them what we should do? For instance, suppose that all protons were inscribed in tiny calligraphy in, say, French, “La dernière personne qui est vivant, gagne.” (“The last person who is alive, wins”—apologies for Google Translate) Beyond being really freaky, what would give that commandment force to convince you to follow it? What could it even mean for something to be inherently what you should do?
It seems, ultimately, you have to ask “why” you should do “what you should do”. Common answers include that you should do “what God commands” because “that’s inherently What You Should Do, it is By Definition Good and Right”. Or, “don’t be a jerk” because “I’ll stop hanging out with you”. Or, “what makes you happy and fulfilled, including the part of you that desires to be kind and generous” because “the subjective experience of sentient beings are the only things we’ve actually observed to be Good or Bad so far”.
The distinction I am trying to make is between Moral Facts Engraved Into The Foundation Of The Universe and A Bunch Of Words And Behaviors And Attitudes That People Have (as a result of evolution & thinking about stuff etc.). I’m not sure if I’m being clear, is this description easier to interpret?
I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.
If that is true, what virtue do moral fact have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
If that is true, what virtue do moral fact have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
If I knew the answer we wouldn’t be having this discussion.
Define your terms, then you get a fair hearing. If you are just saying the terms could maybe someday be defined, this really isn’t the kind of thing that needs a response.
To put it in perspective, you are speculating that someday you will be able to define what the field you are talking about even is. And your best defense is that some people have made questionable arguments against this non-theory? Why should anyone care?
used in auxiliary function to express obligation, propriety, or expediency
As for obligation—I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don’t really see how an ordinary person could be all that puzzled about what his obligations are.
As for propriety—over and above your obligation to avoid uncontroversially nasty behavior, I doubt you have much trouble discovering what’s socially acceptable (stuff like, not farting in an elevator), and anyway, it’s not the end of the world if you offend somebody. Again, I don’t really see how an ordinary person is going to have a problem.
As for expediency—I doubt you intended the question that way.
If this doesn’t answer your question in full you probably need to explain the question. The utilitarians have this strange notion that morality is about maximizing global utility, so of course, morality in the way that they conceive it is a kind of life-encompassing total program of action, since every choice you make could either increase or decrease total utility. Maybe that’s what you want answered, i.e., what’s the best possible thing you could be doing.
But the “should” of obligation is not like this. We have certain obligations but these are fairly limited, and don’t provide us with a life-encompassing program of action. And the “should” of propriety is not like this either. People just don’t pay you any attention as long as you don’t get in their face too much, so again, the direction you get from this quarter is limited.
As for obligation—I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don’t really see how an ordinary person could be all that puzzled about what his obligations are.
You have collapsed several meanings of obligation together there. You may have explicit legal obligations to the state, and IOU-style obligations to individuals who have done you a favour, and so on. But moral obligations go beyond all those. If you are living under a brutal dictatorship, there are conceivable circumstances where you morally should not obey the law. Etc, etc.
If the people arguing that morality is just preference answer: “Do what you prefer”, my next question is “What should I prefer?”
In order to accomplish what?
Should you prefer chocolate ice cream or vanilla? As far as ice cream flavors go, “What should I prefer” seems meaningless...unless you are looking for an answer like, “It’s better to cultivate a preference for vanilla because it is slightly healthier” (you will thereby achieve better health than if you let yourself keep on preferring chocolate).
This gets into the time structure of experience. In other words, I would be interpreting your, “What should I prefer?” as, “What things should I learn to like (in order to get more enjoyment out of life)?” To bring it to a more traditionally moral issue, “Should I learn to like a vegetarian diet (in order to feel less guilt about killing animals)?”
Is that more or less the kind of question you want to answer?
This might have clarified for me what this dispute is about. At least I have a hypothesis, tell me if I’m on the wrong track.
Antirealists aren’t arguing that you should go on a hedonic rampage—we are allowed to keep on consulting our consciences to determine the answer to “what should I prefer.” In a community of decent and mentally healthy people we should flourish. But the main upshot of the antirealist position is that you cannot convince people with radically different backgrounds that their preferences are immoral and should be changed, even in principle.
At least, antirealism gives some support to this cynical point of view, and it’s this point of view that you are most interested in attacking. Am I right?
The other problem is that anti-realists don’t actually answer the question “what should I do?”, they merely pass the buck to the part of my brain responsible for my preferences but don’t give it any guidance on how to answer that question.
Talk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a “what color is the sky?” type of conversation.
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
Something more objective/universal.
Edit: a more serious issue is that just as equating facts with opinions tells you nothing about what opinions you should hold. Equating morality and preference tells you nothing about what you should prefer.
So we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences—obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I’d rather select a different social milieu, myself.
I recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape.
I now have a strong preference for losing weight, and reinforced my preference for exercise, because the gains from both activities went up significantly. This also resulted in having a much lower preference for certain types of food, as they’re contrary to these new preferences.
I’d think that’s a pretty concrete example of changing my preferences, unless we’re using different definitions of “preference.”
I suppose we are using different definitions of “preference”. I’m using it as a friendly term for a person’s utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can’t be understood that way. For example, what you’re calling food preferences are what I’d call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.
Ahh, I re-read the thread with this understanding, and was struck by this:
I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)
Does that make sense as a “motivation for wanting to change your preferences”?
I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.
My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone’s preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I’m screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don’t go implementing it. Please.
In general, if the FAI is going to give “your preference” to you, your preference had better be something stable about you that you’ll still want when you get it.
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it. I certainly would want an akrasia fix if it were available. Maybe that’s the important preference.
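To make the stakes concrete, here is a minimal Python sketch of the “weighted average of what everyone prefers” idea. This is not the actual specification linked above, just my reading of it; every name, weight, and utility value is invented for illustration.

```python
# Sketch of aggregating inferred preferences into one utility function by a
# weighted average. Not the fungible.com specification; all names and
# numbers are invented.

def weighted_average_utility(inferred_utilities, weights):
    """inferred_utilities: {person: function from outcome to float}
       weights: {person: float}, assumed to sum to 1."""
    def aggregate(outcome):
        return sum(w * inferred_utilities[p](outcome) for p, w in weights.items())
    return aggregate

# If the inference step mistakes akrasia for a preference ("Tim prefers to
# procrastinate"), the aggregate dutifully rewards outcomes in which Tim
# procrastinates. That is exactly why the preference/akrasia distinction
# matters to such a design.
tim = lambda outcome: 1.0 if outcome == "tim_procrastinates" else 0.0
ann = lambda outcome: 1.0 if outcome == "tim_writes_paper" else 0.0
u = weighted_average_utility({"tim": tim, "ann": ann}, {"tim": 0.5, "ann": 0.5})
print(u("tim_procrastinates"), u("tim_writes_paper"))  # 0.5 0.5
```

The point of the sketch is only that whatever the inference step labels “preference” is what the aggregate ends up optimizing, so the label had better track something stable about the person.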
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it.
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
At the end of the day, you’re going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way) so you can figure out the immediate outcome with the best preferred expected long-term utility and predict the person is going to take an action that gets them there.
I don’t think very many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I’d agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model it as conflicting desires and a conflict resolution function layered on top, but I think we both agree on the actual result, which is that people aren’t optimizing for a single clear goal like “happiness” or “lifetime income”.
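Here is a toy sketch of the “conflicting desires plus a conflict-resolution layer” picture; the desires, weights, and options are all invented, and the weighted-sum resolution function is just one possible choice, not a claim about how brains actually do it.

```python
# Toy model of an agent with several conflicting "desires" and a resolution
# layer on top. Everything here is invented for illustration.

desires = {
    "health":     {"gym": 0.9, "dessert": 0.1, "couch": 0.3},
    "comfort":    {"gym": 0.2, "dessert": 0.8, "couch": 0.9},
    "scuba_prep": {"gym": 1.0, "dessert": 0.0, "couch": 0.2},
}

# One possible resolution function: a weighted sum. Shifting the weights
# (say, after taking up SCUBA) changes which option wins, which from the
# outside looks like a change of preference.
weights = {"health": 0.3, "comfort": 0.3, "scuba_prep": 0.4}

def resolve(option):
    return sum(weights[d] * scores[option] for d, scores in desires.items())

options = ["gym", "dessert", "couch"]
print({o: round(resolve(o), 2) for o in options})  # gym comes out on top
```

Whether you call the whole compromise one utility function or call each desire its own, the observable behaviour is the same; the disagreement is about which description compresses it better.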
predict the person
Prediction seems to run in to the issue that utility evaluations change over time. I used to place a high utility value on sweets, now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. So on.
It strikes me as being rather like weather: You can predict general patterns, and even manage a decent 5-day forecast, but you’re going to have a lot of trouble making specific long-term predictions.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences
There isn’t an instrumental motive for changing one’s preferences. That doesn’t add up to “never change your preferences” unless you assume that instrumentality (“does it help me achieve anything?”) is the ultimate way of evaluating things. But it isn’t: morality is.
It is morally wrong to design better gas chambers.
The interesting question is still the one you didn’t answer yet:
If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.
The unlikely answer is “I wouldn’t do anything different”. Then I’d reply “So, morality makes no practical difference to your behavior?”, and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior, if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”. I am in the same situation in that when I’m willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.
If I guessed your answer wrong, please correct me. Otherwise I’d want to fix the vocabulary problem somehow. I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts, perhaps an “amoral preference” which would mean what you were calling “preference” before, and “moral preference” would include what you were calling “morality” before, but perhaps we’d choose different words if you objected to those. The next question would be:
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
...and I have no clue what your answer would be, so I can’t continue the conversation past that point without straightforward answers from you.
If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
Follow morality.
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
One way to illustrate this distinction is using Eliezer’s “murder pill”. If you were offered a pill that would reverse and/or eliminate a preference, would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered, the answer is probably no.
One of the reasons this distinction is important is that, because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
One of the reasons this distinction is important is that, because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we’re talking about, which hasn’t happened.
One of the reasons this distinction is important is that, because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
That’s not a definition of morality but an explanation of one reason why the “murder pill” distinction is important.
...the way human brains are designed, thinking about your preferences can cause them to change.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down
Are you saying there has never been any valid moral discourse or persuasion?
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
There’s a difference between changing your mind because a discussion led you to bound your rationality differently, and changing your mind because of suggestibility and other forms of sloppy thinking. Logic and mathematics are the former, if done right. I haven’t seen much non-sloppy thinking on the subject of changing preferences.
I suppose there could be such a thing—Joe designed an elegant high-throughput gas chamber, he wants to show the design to his friends, someone tells Joe that this could be used for mass murder, Joe hadn’t thought that the design might actually be used, so he hides his design somewhere so it won’t be used. But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
Are you saying there has never been any valid moral discourse or persuasion?
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
But there are other stories where the preference itself changes. “If you approve of women’s rights, you should approve of gay rights”.
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
“If you approve of women’s rights, you should approve of gay rights”.
IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights. Even if your argument above worked, I can’t envision a plausible reasoning system in which the argument is valid. Can you offer one? Otherwise, it only worked because the listener was confused, and we’re back to morality being a special case of psychology again.
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
Because I don’t know how to do moral arguments better. So far as I can tell, they always seem to wind up either being wrong, or not being moral arguments.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior,
But preference itself is influenced by reasoning and experience. The Preference theory focuses on proximate causes, but there are more distal ones too.
if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”
I am not and never was using “preference” to mean something disjoint from morality.
If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences. That is not an argument for nihilism or relativism. You could have an epistemology where everything is talked about as belief and the difference between true belief and false belief is ignored; talking only about preferences ignores the difference between moral and amoral preferences in just the same way.
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can’t do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn’t “made” important by some greater good.
I am not and never was using “preference” to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences.
There’s a choice you’re making here, differently from me, and I’d like to get clear on what that choice is and understand why we’re making it differently.
I have a bunch of things I prefer. I’d rather eat strawberry ice cream than vanilla, and I’d rather not design higher-throughput gas chambers. For me those two preferences are similar in kind—they’re stuff I prefer and that’s all there is to be said about it.
You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.
I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it’s even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That’s not what I’m talking about. What I’m talking about is the question of why the distinction is important to you. For example, I could define the preferences of mine that can be easily described without using the letter “s” to be “blort” preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you’d be left wondering “Why does he care?”
And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless. A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it.
The decision to use the concept of “morality” is like any other decision to define and use a concept. We should use it if the cost of talking about it is paid for by the added clarity it brings. If we don’t use the concept, that doesn’t change whether anyone wants to build high-throughput gas chambers—it just means that we don’t have the tools to talk about the difference in kind between ice cream flavor preferences and gas chamber building preferences. If there’s no use for such talk, then we should discard the concept, and if there is a use for such talk, we should keep the concept and try to assign a useful and clear meaning to it.
So what use is the concept of morality? How do people benefit from regarding ice cream flavor preferences as a different sort of thing from gas chamber building preferences?
Morality is already, in itself, the most important value.
I hope we’re agreed that there are two different kinds of things here—the strongly held preference to not design high-throughput gas chambers is a different kind of thing from the decision to label that preference as a moral one. The former influences the options available to a well-organized mass murderer, and the latter determines the structure of conversations like this one. The former is a value, the latter is a choice about how words label things. I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
You’ve written quite a lot of words but you’re still stuck on the idea that all importance is instrumental importance, importance for something that doesn’t need to be important in itself. You should care about morality because it is a value, and values are definitionally what is important and what should be cared about. If you suddenly started liking vanilla, nothing important would change. You wouldn’t stop being you, and your new self wouldn’t be someone your old self would hate. That wouldn’t be the case if you suddenly started liking murder or gas chambers. You don’t now like people who like those things, and you wouldn’t now want to become one.
I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
If we understand what is going on, we should make the choice correctly—that is, according to rational norms. If morality means something other than the merely pragmatic, we should not label the pragmatic as the moral. And it must mean something different, because it is an open, investigatable question whether some instrumentally useful thing is also ethically good, whereas questions like “is the pragmatic useful” are trivial and tautologous.
You should care about morality because it is a value and values are definitionally what is important and what should be cared about.
You’re not getting the distinction between morality-the-concept-worth-having and morality-the-value-worth-enacting.
I’m looking for a useful definition of morality here, and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it’s strongly held, which doesn’t seem very interesting. If we’re going to have the distinction, I like Eugene’s proposal better: that a moral preference is one that’s worth talking about. But we need to make the distinction in such a way that something doesn’t get promoted to being a moral preference just because people are easily deceived about it. There should be true things to say about it.
and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it’s strongly held,
But what I actually gave as a definition is that the concept of morality is the concept of ultimate value and importance. A concept which even the nihilists need so that they can express their disbelief in it. A concept which even social and cognitive scientists need so they can describe the behaviour surrounding it.
You are apparently claiming there is some important difference between a strongly held preference and something of ultimate value and importance. Seems like splitting hairs to me. Can you describe how those two things are different?
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
One can do experiments to determine whether beliefs are true, for the beliefs that matter. What can one do with a preference to figure out if it should be strongly held?
If that question has no answer, the claim that the two are similar seems indefensible.
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
So it doesn’t matter if it only affects what you will do?
If I’m thinking for the purpose of figuring out my future actions, that’s a plan, not a belief, since planning is relevant when I haven’t yet decided what to do.
I suppose beliefs about other people’s actions are empirical.
I’ve lost the relevance of this thread. Please state a purpose if you wish to continue, and if I like it, I’ll reply.
[Morality is] the ultimate way of evaluating things… It is morally wrong to design better gas chambers.
Okay, that seems clear enough that I’d rather pursue that than try to get an answer to any of my previous questions, even if all we may have accomplished here is to trade Eugene’s evasiveness for Peter’s.
If you know that morality is the ultimate way of evaluating things, and you’re able to use that to evaluate a specific thing, I hope you are aware of how you performed that evaluation process. How did you get to the conclusion that it is morally wrong to design better gas chambers?
Execution techniques have improved over the ages. A guillotine is more compassionate than an axe, for example, since with an axe the executioner might need a few strokes, and the experience for the victim is pretty bad between the first stroke and the last. Now we use injections that are meant to be painless, and perhaps they actually are. In an environment where executions are going to happen anyway, it seems compassionate to make them happen better. Are you saying gas chambers, specifically, are different somehow, or are you saying that designing the guillotine was morally wrong too and it would have been morally preferable to use an axe during the time guillotines were used?
I’m pretty sure he means to refer to high-throughput gas chambers optimized for purposes of genocide, rather than individual gas chambers designed for occasional use.
He may or may not oppose the latter, but improving the former is likely to increase the number of murders committed.
I have spent some time thinking about how to apply the ideas of Eliezer’s metaethics sequence to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here, the distinction between preferences and biases is very arbitrary.
So the question becomes how do you separate which of your intuitions are preferences and which are biases?
[H]ow do you separate which of your intuitions are preferences and which are biases?
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else.
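A minimal sketch of what I mean by uncertainty interacting with the utility function “in the proper way”, assuming for the moment that the proper way is ordinary expected utility; the world-states, probabilities, and numbers are all invented.

```python
# Expected-utility sketch: a preference over actions is induced by a utility
# function over future world-states plus beliefs about which state each
# action leads to. All numbers are invented for illustration.

utility = {"at_grocery_store": 1.0, "lost": -0.5, "at_home": 0.0}

beliefs = {  # P(world-state | action)
    "walk_north": {"at_grocery_store": 0.8, "lost": 0.2},
    "stay_home":  {"at_home": 1.0},
}

def expected_utility(action):
    return sum(p * utility[state] for state, p in beliefs[action].items())

print(max(beliefs, key=expected_utility))  # "walk_north" (0.7 vs 0.0)
# On this picture, a bias is whatever part of the actual choice can't be
# recovered from some such utility/belief decomposition.
```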
I don’t see how that question is relevant. I don’t see any good reason for you to dodge my question about what you’d do if your preferences contradicted your morality. It’s not like it’s an unusual situation—consider the internal conflicts of a homosexual Evangelist preacher, for example.
What makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or replace short-term preferences with long-term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
I don’t judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn’t. It’s true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.
If it works to iron out inconsistencies, or replace short term preferences with long term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you’re in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)
This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way and the fact that I went the wrong way isn’t evidence that I don’t want to go to the grocery store. That’s a confusing issue and I’m hoping we can assume for the purposes of discussion about morality that the people we’re talking about have true beliefs.
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
I’m not saying it’s a complete description of me. To describe how I think you’d also need a description of my possibly-false beliefs, and you’d also need to reason about uncertain knowledge of my preferences and possibly-false beliefs.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn’t a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. “I want to go north” might really be “I believe the grocery store is north of here and I want to go to the grocery store”. “I want to go to the grocery store” might be a further conflation of preference and belief, such as “I want to get some food” and “I believe I will be able to get food at the grocery store”. Eventually you can unpack all the beliefs and get the true preference, which might be “I want to eat something interesting today”.
Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
It’s a term we’re defining because it’s useful, and we can define it in a way that it holds from birth forever afterward. Tim had the short-term preference dated around age 3 months to suck mommy’s breast, and Tim apparently has a preference to get clarity about what these guys mean when they talk about morality dated around age 44 years. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person’s preferences, and preferences that don’t change over time tend to be simpler, but if that’s contradicted by observation you settle for different preferences at different times.
I suppose I should have said “If a preference changes as a consequence of reasoning or reflection, it wasn’t a preference”. If the context of the statement is lost, that distinction matters.
So you are defining “preference” in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
I agree! Consider, for instance, taste in particular foods. I’d say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you’re hemi-directly altering your preferences).
Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it’s pleasurable—but I think the proper level of unpacking is “experience drinking coffee”, not “experience pleasurable sensations”, because the experience being pleasurable is what makes it a preference in this case. That’s how it seems to me, at least. Am I missing something?
and uncertainty about the future should interact with the utility function in the proper way.
“The proper way” being built in as a part of the utility function and not (necessarily) being a simple sum of the multiplication of world-state values by their probability.
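One way to read “not necessarily a simple sum” is that the aggregation rule itself is part of the preference. As a hedged illustration with invented numbers: a worst-case (maximin) agent and an expected-utility agent rank the same gambles differently even though they share the same utilities over world-states.

```python
# Two ways uncertainty can interact with the same state-utilities. Neither
# rule is forced on you by the numbers alone; which one you use is itself
# part of the preference. All numbers are invented.

utility = {"win_big": 10.0, "nothing": 0.0, "small_loss": -1.0}
lotteries = {
    "gamble": {"win_big": 0.5, "small_loss": 0.5},
    "safe":   {"nothing": 1.0},
}

def expected(lottery):
    return sum(p * utility[s] for s, p in lotteries[lottery].items())

def worst_case(lottery):
    return min(utility[s] for s in lotteries[lottery])

print(max(lotteries, key=expected))    # "gamble" (4.5 vs 0.0)
print(max(lotteries, key=worst_case))  # "safe"   (0.0 vs -1.0)
```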
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way.
Um, no. Unless you are some kind of mutant who doesn’t suffer from scope insensitivity or any of the related biases, your uncertainty about the future doesn’t interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary.
Here is the example PhilGoetz gives in the article I linked above:
In Crime and punishment, I argued that people want to punish criminals, even if there is a painless, less-costly way to prevent crime. This means that people value punishing criminals. This value may have evolved to accomplish the social goal of reducing crime. Most readers agreed that, since we can deduce this underlying reason, and accomplish it more effectively through reasoning, preferring to punish criminals is an error in judgement.
Most people want to have sex. This value evolved to accomplish the goal of reproducing. Since we can deduce this underlying reason, and accomplish it more efficiently than by going out to bars every evening for ten years, is this desire for sex an error in judgement that we should erase?
Rationality is the equivalent of normative morality: it is a set of guidelines for arriving at the opinions you should have: true ones. Epistemology is the equivalent of metaethics. It strives to answer the question “what is truth”.
Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of… intuitions.
Intuitions are internalizations of custom, an aspect of which is morality. Our intuitions result from our long practice of observing custom. By “observing custom” I mean of course adhering to custom, abiding by custom. In particular, we observe morality—we adhere to it, we abide by it—and it is from observing morality that we gain our moral intuitions. This is a curious verbal coincidence, that the very same word “observe” applies in both cases even though it means quite different things. That is:
Our physical intuitions are a result of observing physics (in the sense of watching attentively).
Our moral intuitions are a result of observing morality (in the sense of abiding by).
However, discovering physics is not nearly as passive as is suggested by the word “observe”. We conduct experiments. We try things and see what happens. We test the physical world. We kick the rock—and discover that it kicks back. Reality kicks back hard, so it’s a good thing that children are so resilient. An adult that kicked reality as hard as kids kick it would break their bones.
And discovering morality is similarly not quite as I said. It’s not really by observing (abiding by) morality that we discover morality, but by failing to observe (violating) morality that we discover morality. We discover what the limits are by testing the limits. We are continually testing the limits, though we do it subtly. But if you let people walk all over you, before long they will walk all over you, because in their interactions with you they are repeatedly testing the limits, ever so subtly. We push on the limits of what’s allowable, what’s customary, what’s moral, and when we get push-back we retreat—slightly. Customs have to survive this continual testing of their limits. Any custom that fails the constant testing will be quickly violated and then forgotten. So the customs that have survived the constant testing that we put them through, are tough little critters that don’t roll over easily. We kick customs to see whether they kick back. Children kick hard, they violate custom wildly, so it’s a good thing that adults coddle them. An adult that kicked custom as hard as kids kick it would wind up in jail or dead.
Custom is “really” nothing other than other humans kicking back when we kick them. When we kick custom, we’re kicking other humans, and they kick back. Custom is an equilibrium, a kind of general truce, a set of limits on behavior that everyone observes and everyone enforces. Morality is an aspect of this equilibrium. It is, I think, the more serious, important bits of custom, the customary limits on behavior where we kick back really hard, or stab, or shoot, if those limits are violated.
Anyway, even though custom is “really” made out of people, the regularities that we discover in custom are impersonal. One person’s limits are pretty much another person’s limits. So custom, though at root personal, is also impersonal, in the “it’s not personal, it’s just business” sense of the movie mobster. So we discover regularities when we test custom—much as we discover regularities when we test physical reality.
Yes, but we’ve already determined that we don’t disagree—unless you think we still do? I was arguing against observing objective (i.e. externally existing) morality. I suspect that you disagree more with Eugine_Nier.
Right, and ‘facts’ about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of… intuitions.
Which is true, and explains why it is a harder problem than physics, and less progress has been made.
Everything is inaccurate for some value of accurate. The point is you can’t arrive at an accurate definition without a good theory, and you can’t arrive at a good theory without an (inevitably inaccurate) definition.
I haven’t asserted that any definition of “Morality” can jump through the hoops set up by NMJ and co., but there is an inaccurate definition (averagely so, for Ordinary Language) which is widely used.
The question in this thread was not “define Morality” but “explain how you determine which of “Killing innocent people is wrong barring extenuating circumstances” and “Killing innocent people is right barring extenuating circumstances” is morally right.”
(For people with other definitions of morality and / or other criteria for “rightness” besides morality, there may be other methods.)
The question was rather unhelpfully framed in Jublowskian terms of “observable consequences”. I think killing people is wrong because I don’t want to be killed, and I don’t want to Act on a Maxim I Would Not Wish to be Universal Law.
Because I’m trying to have a discussion with you about your beliefs?
Looking at this I find it hard to avoid concluding that you’re not interested in a productive discussion—you asked a question about how to answer a question, got an answer, and refused to answer it anyway. Let me know if you wish to discuss with me as allies instead of enemies, but until and unless you do I’m going to have to bow out of talking with you on this topic.
I believe murder is wrong. I believe you can figure that out if you don’t know it. The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions. The point of asking questions is to demonstrate that it is possible to reason about morality: if someone answers the questions, they are doing the reasoning.
The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions.
This seems problematic. If that’s the case, then your ethical system exists solely to support the bottom line. That’s just rationalizing, not actual thinking. Moreover, it doesn’t tell you anything helpful when people have conflicting intuitions or when you don’t have any strong intuition, and those are the generally interesting cases.
A system that could support any conclusion would be useless, and a system that couldn’t support the strongest and most common intuitions would be pretty incredible.
A system that doesn’t suffer from quodlibet isn’t going to support both of a pair of contradictory intuitions. And that’s pretty well the only way of resolving such issues. The rightness and wrongness of feelings can’t help.
So to make sure I understand, you are trying to make a system that agrees with and supports all your intuitions, and you hope that the system will then give unambiguous answers where you don’t have intuitions?
I don’t think that you realize how frequently our intuitions clash, not just the intuitions of different people, but even one’s own intuitions (for most people at least). Consider, for example, train car problems. Most people, whether or not they would pull the lever or push the fat person, feel some intuition for either solution. And train problems are by far not the only example of a moral dilemma that causes that sort of issue. Many mundane, real-life situations, such as abortion, euthanasia, animal testing, the limits of consent, and many other issues cause serious clashes of intuitions.
In this post: “How do you determine which one is accurate?”
In your response further down the thread: “I am not dodging [that question]. I am arguing that [it is] inappropriate to the domain [...]”
And then my post: “But you already have determined that one of them is accurate, right?”
That question was not one phrased in the way you object to, and yet you still haven’t answered it.
Though, at this point it seems one can infer (from the parent post) that the answer is something like “I reason about which principle is more beneficial to me.”
Any belief you have about the nature of reality, that does not inform your anticipations in any way, is meaningless. It’s like believing in a god which can never be discovered. Good for you, but if the universe will play out exactly the same as if it wasn’t there, why should I care?
Furthermore, why posit the existence of such a thing at all?
Any belief you have about the nature of reality, that does not inform your anticipations in any way, is meaningless.
On a tangent—I think the subjectivist flavor of that is unfortunate. You’re echoing Eliezer’s Making Beliefs Pay Rent, but the anticipations that he’s talking about are “anticipations of sensory experience”. Ultimately, we are subject to natural selection, so maybe a more important rent to pay than anticipation of sensory experiences, is not being removed from the gene pool. So we might instead say, “any belief you have about the nature of reality, that does not improve your chances of survival in any way, is meaningless.”
Elsewhere, in his article on Newcomb’s paradox, Eliezer says:
I don’t generally disagree with anything you wrote. Perhaps we miscommunicated.
“any belief you have about the nature of reality, that does not improve your chances of survival in any way, is meaningless.”
I think that would depend on how one uses “meaningless” but I appreciate wholeheartedly the sentiment that a rational agent wins, with the caveat that winning can mean something very different for various agents.
Moral beliefs aren’t beliefs about moral facts out there in reality, they are beliefs about what I should do next. “What should I do” is an orthogonal question to “what can I expect if I do X”. Since I can reason morally, I am hardly positing anything without warrant.
It doesn’t work like the empiricism you are used to because it is, in broad brush strokes, a different thing that solves a different problem.
Can you recognize that from my position it doesn’t work like the empiricism I’m used to because it’s almost entirely nonsensical appeals to nothing, arguing by definitions, and the exercising of the blind muscles of eld philosophy?
I am unpersuaded that there exists a set of correct preferences. You have, as far as I can see, made no effort to persuade me, but rather just repeatedly asserted that there is and asked me questions in terms that you refuse to define. I am not sure what you want from me in this case.
You may be entirely of the opinion that it is all stuff and nonsense: I am only interested in what can be rationally argued.
I don’t think you think it works like empiricism. I think you have tried to make it work like empiricism and then given up. “I have a hammer in my hand, and it won’t work on this ‘screw’ of yours, so you should discard it.”
People can and do reason about what preferences they should have, and such reasoning can be as objective as mathematical reasoning, without the need for a special arena of objects.
In this context, it’s as “weasel-like” as “innocent”. In the sense that both are fudge factors you need to add to the otherwise elegant statement to make it true.
Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared—in contrast to my stored memory that 2 + 2 was supposed to equal 4. Moreover, when I visualized the process in my own mind, it seemed that making XX and XX come out to XXXX required an extra X to appear from nowhere, and was, moreover, inconsistent with other arithmetic I visualized, since subtracting XX from XXX left XX, but subtracting XX from XXXX left XXX. This would conflict with my stored memory that 3 − 2 = 1, but memory would be absurd in the face of physical and mental confirmation that XXX − XX = XX.
Does your example (or another you care to come up with) have observable consequences?
I don’t think you can explicate such a connection, especially not without any terms defined. In fact, it is just utterly pointless to try to develop a theory in a field that hasn’t even been defined in a coherent way. It’s not like it’s close to being defined, either.
For example, “Is abortion morally wrong?” combines about 12 possible questions into it because it has at least that many interpretations. Choose one, then we can study that. I just can’t see how otherwise rationality-oriented people can put up with such extreme vagueness. There is almost zero actual communication happening in this thread in the sense of actually expressing which interpretation of moral language anyone is taking. And once that starts happening it will cover way too many topics to ever reach a resolution. We’re simply going to have to stop compressing all these disparate-but-subtly-related concepts into a single field, taboo all the moralist language, and hug some queries (if any important ones actually remain).
I don’t think you can explicate such a connection, especially not without any terms defined. In fact, it is just utterly pointless to try to develop a theory in a field that hasn’t even been defined in a coherent way. It’s not like it’s close to being defined, either.
In any science I can think of people began developing it using intuitive notions, only being able to come up with definitions after substantial progress had been made.
...given our current state of knowledge about meta-ethics I can give no better definition of the words “should”/”right”/”wrong” than the meaning they have in everyday use.
You can assume that the words have no specific meaning and are used to signal membership in a group. This explains why the flowchart in the original post has so many endpoints about what morality might mean. It explains why there seems to be no universal consensus on what specific actions are moral and which ones are not. It also explains why people have such strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.
You can assume that the words have no specific meaning and are used to signal membership in a group.
One could make the same claim about words like “exists”/”true”/”false”. Especially if our knowledge of science was at the same state as our knowledge of ethics.
Just as the words “exists”/”true”/”false” had a meaning even before the development of science and Bayesianism, even though a lot of people used them to signal group affiliation, I believe the words “should”/”right”/”wrong” have a meaning even though a lot of people use them to signal group affiliation.
But science isn’t about words like “exist”, “true”, or “false”. Science is about statements like “Frozen water is less dense than liquid water”. I can point at frozen water, liquid water, and a particular instance of the former floating on the latter. Scientific claims were well-defined even before there was enough knowledge to evaluate them. I can’t point at anything for claims about morality, so the analogy between ethics and science is not valid.
Come on people. Argument by analogy doesn’t prove anything even when the analogies are valid! Stop it.
If you don’t like the hypothesis that words like “should”, “right”, and “wrong” are social signaling, give some other explanation of the evidence that is simpler. The evidence in question is:
The flowchart in the original post has many endpoints about what morality might mean.
There seems to be no universal consensus on what specific actions are moral and which ones are not.
People have strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.
You can’t point at anything for claims about pure maths either. That something is not empirical does not automatically invalidate it.
Morality is not just social signalling, because it makes sense to say some social signals (“I am higher status than you because I have more slaves”) are morally wrong.
Morality is not just social signalling, because it makes sense to say some social signals (“I am higher status than you because I have more slaves”) are morally wrong.
That conclusion does not follow. Saying you have slaves is a signal about morality and, depending on the audience, often a bad signal.
Note that there is a difference between “morality is about signalling” and “signalling is about morality”. If I say “I am high status because I live a moral life” I am blatantly using morality to signal, but it doesn’t remotely follow from that that there is nothing to morality except signalling. It could be argued that, morally speaking, I should pursue morality for its own sake and not to gain status.
But science isn’t about words like “exist”, “true”, or “false”. Science is about statements like “Frozen water is less dense than liquid water”.
Only because the force of the word “exists” is implicit in the indicative mood of the word “is”.
Come on people. Argument by analogy doesn’t prove anything even when the analogies are valid! Stop it.
But they can help explain what people mean, and they can show that an argument proves too much.
The flowchart in the original post has many endpoints about what morality might mean.
I could draw an equally complicated flowchart about what “truth” and “exists”/”is” might mean.
There seems to be no universal consensus on what specific actions are moral and which ones are not.
The amount of consensus is roughly the same as the amount of consensus there was before the development of science about which statements are true and which aren’t.
People have strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.
People had strong opinions about truth before the concept of empirical validation was developed.
Your criticisms of “truth” are not so far off, but you’re essentially saying that parts of science are wrong, so you can be wrong too. Or rather: you think it is OK to flounder around in a field when you’re just starting out. Sure, but not when you don’t even know what it is you’re supposed to be studying, if anything! This is not analogous to physics, where the general goal was clear from the very beginning: figure out what physical mechanisms underlie macro-scale phenomena, such as the hardness of metal, conductivity, magnetic attraction, gravity, etc.
You’re just running around to whatever you can grab onto to avoid the main point that there is nothing close to a semblance of delineation of what this “field” is actually about, and it is getting tiresome.
This is not analogous to physics, where the general goal was clear from the very beginning: figure out what physical mechanisms underlie macro-scale phenomena, such as the hardness of metal, conductivity, magnetic attraction, gravity, etc.
That is sort of half true, but it feels like you’re just saying that to say it, as there have been criticisms of this same line of reasoning that you haven’t answered.
How about the fact that beliefs about physics actually pay rent? Do moral ones?
My point is that NMJablonski’s request is about as reasonable as demanding that someone arguing for the existence of a “Correct Theory of Physics” provide a clear reductionist description of what one means while tabooing words like ‘physics’, ‘reality’, ‘exists’, ‘experience’, etc.
No, the reductionist description of the Correct Theory of Physics eventually involves pointing at lab equipment. There is no lab equipment for morality, so the analogy is not valid.
No, the reductionist description of the Correct Theory of Physics eventually involves pointing at lab equipment. There is no lab equipment for morality, so the analogy is not valid.
I could point a gun to your head and ask you to explain why I shouldn’t pull the trigger.
I could point a gun to your head and ask you to explain why I shouldn’t pull the trigger.
That scenario doesn’t lead to discovering the truth. If I deceive you with bullshit and you don’t pull the trigger, that’s a victory for me. I invite you to try again, but next time pick an example where the participants are incentivised to make true statements.
ETA: …unless the truth we care about is just which flavors of bullshit will persuade you not to pull the trigger. If that’s what you mean by morality, you probably agree with me that it is just social signaling.
Like I mentioned elsewhere in this thread, the “No Universally Compelling Argument” post you cite applies equally well to physical and even mathematical facts (in fact that was what Eliezer was mainly referring to in that post).
In fact, the main point of that sequence is that just because there are no universally compelling arguments doesn’t mean truth doesn’t exist. As Eliezer mentions in “Where Recursive Justification Hits Bottom”:
Now, one lesson you might derive from this, is “Don’t be born with a stupid prior.” This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers.
A formal proof is still a proof, though nothing mandates that a listener must accept it. A mind can very well contain an absolute dismissal mechanism or optimize for something other than correctness.
We can understand what sort of assumptions we’re making when we derive information from mathematical axioms, or the axioms of induction, and how further information follows from that. But what assumptions are we making that would allow us to extrapolate absolute moral facts? Does our process give us any way to distinguish them from preferences?
Do you believe in God? If I defended the notion of God in a similar way—it is not straightforwardly empirical, it’s inappropriate to demand concrete definitions, it’s not under the domain of science, just because you can’t define it and measure it doesn’t mean it doesn’t exist—would you find that persuasive?
But I am only defending the idea that morality means something. Atheists think “God” means something. “uncountable set” means something even if the idea is thoroughly non-concrete.
All atheists have to adopt a broad definition of God, or else they would only be disbelieving in the Seventh-day Adventist God, or whatever... i.e., they would believe in all deities except one, which is more than the average believer believes in.
“Ah, well, if you disbelieve in woojits, then you must know what woojits are! So, what are woojits?” I have no idea.
“But how is that possible? If you don’t have a definition for woojits, on what basis do you reject belief in them?” Having a well-defined notion of something is a prerequisite for belief in it; I don’t have a well-defined notion of woojits; therefore I don’t believe in woojits.
“No, no. You’re confused. All woojit-disbelievers have to adopt a broad definition of woojits in order to disbelieve in them; otherwise they would merely disbelieve in a specific woojit.” (shrug) OK, if you like, I have a broad definition of woojit… so broad, in fact, that it is effectively identical to my definition of all the other concepts I don’t believe in and haven’t thought about, which is the overwhelming majority of all possible concepts. For my part, I consider this equivalent to not having a definition of woojit at all.
As I say, this gets silly. It’s just arguing about definitions of words.
Now, I would agree that atheists who grow up in theist cultures do have a definition of God, though I disagree with you that it’s necessarily broad: I know at least one atheist who was raised Roman Catholic, for example, and the god he disbelieves in is the Roman Catholic god of his youth, and the idea that “God” might conceivably refer to anything else just doesn’t have a lot of meaning to him.
No, I said that asking about the nature of moral claims means “moral” has some prima facie meaning. “woojit” is a made up word with no prima facie meaning. Not analogous.
It doesn’t still go through, since it did not in the first place. It’s a concrete fact that you can look up “moral” in a dictionary, for all that what you read isn’t very useful.
How is that relevant? I don’t see why presence in a dictionary matters. But even if it did, “boojum” is in some dictionaries and encyclopedias too. It is a type of snark.
It’s only in some, and not all, dictionaries because it is a made-up word that is supposed to be ill defined and puzzling. Some lexicographers feel that readers need to be advised that when they encounter this word, it is being used to flag “here is something strange and meaningless”.
So what matters then is if all dictionaries have it? Why does that matter? Does this mean we couldn’t have this discussion before dictionaries were invented? Did the nature of morality change with the invention of a dictionary? Moreover, if one got every dictionary to include “boojum” and “snark” would that then make it different?
If a word is defined in all dictionaries, then the claim that it is completely meaningless is extraordinary and poorly motivated. Dictionaries are of course only significant because they make usage concrete.
If a word is defined in all dictionaries, then the claim that it is completely meaningless is extraordinary and poorly motivated
The claim was about incoherence, not whether it was “completely meaningless”, and I fail to see how motivation is relevant or how you get anything about a claim being poorly motivated from this. If you prefer a different analogy, consider such terms as transubstantiation, consubstantiation, homoousion, hypostatic union, kerygma and modalism. Similarly, in a Hebrew dictionary you will have all ten Sephirot defined (Keter, Chochmah, etc.). Is it extraordinary and poorly motivated to say that these kabbalistic terms are incoherent?
The point about motivation is about where burdens lie.
The discussion so far has been about the accusation that somebody somewhere is culpably refusing to define “morality”. This is the first mention of incoherence.
“incoherent” is often used as a loose synonym for “I don’t like it”. That is not a useful form of argument. The examples of “incoherent” concepts you gave are a mixed bag of concepts ranging from the well defined but false, to the well defined but ungrounded, to the ill defined. If you want to say what specific kind of incoherence “morality” has IYO, feel free.
The examples of “incoherent” concepts you gave are a mixed bag of concepts ranging from the well defined but false, to the well defined but ungrounded, to the ill defined. If you want to say what specific kind of incoherence “morality” has IYO, feel free.
You seem confused about which argument CuSithBell is making. The argument is not that morality is fundamentally incoherent or meaningless, but that most definitions of it fall into those categories and that our common intuition is not sufficient to have useful discussions about it, so you need to supply a definition for what you mean. So far, you seem to have refused to do that. Do you see the distinction?
I’m not really sure what a “mistake of rationality” is, or how it differs from simply being mistaken about something.
That said, I would agree with you that my Roman Catholic atheist friend is not arriving at his atheism in a particularly rational way.
WRT woojits, I’m not jumping to any conclusions: I arrived at that conclusion step-by-step. Again: “Having a well-defined notion of something is a prerequisite for belief in it; I don’t have a well-defined notion of woojits; therefore I don’t believe in woojits.” You’re free to disagree with any part of that or all of it, but I’d prefer you didn’t simply ignore it.
A mistake of rationality is quite different from a perceptual error, for instance. It’s even different to being wrong, since one can be right for irrational reasons.
“Having a well-defined notion of something is a prerequisite for belief in it
I disagree. I believe in consciousness, but don’t have a well defined notion of it.
I don’t have a well-defined notion of woojits
On the one hand, “woojit” might be intended as a synonym for something you do believe in. On the other hand, if it is meaningless, “woojits don’t exist” is meaningless.
Either way, you should not conclude that woojits don’t exist because you don’t know what they are.
I think that it’s easy to be an atheist—i.e. one doesn’t have to make any difficult definitions or arguments to arrive at atheism, and those easy definitions and arguments are correct. If you think it’s harder than I do, that would be interesting and could explain why we have such different opinions here.
Fine. Then the atheist who doesn’t have a difficult definition of God, isn’t culpably refusing to explain her “new idea”, and someone who thinks there is something to be said about morality can stick with the vanilla definition that morality is Right and Wrong and Such.
I would be happy to continue down this line a ways longer if you would like, and we could get all the way down to the two of us in the same physical location rebuilding the concept of induction. I am confident that if necessary we could do that for “anticipations” and build our way back up. I am not confident that “morality” as it has been used here actually connects to any solid surface in reality, unless it ends up meaning the same thing as “preferences”.
I am confident that if necessary we could do that for “anticipations” and build our way back up.
In that case maybe we should continue a bit longer until you’re disabused of that belief. What I suspect will happen is that you’ll continue to attempt to define your words in terms of more and more tenuous abstractions until the words you’re using really are almost meaningless.
Why not try this: imagine an inquisitive nine-year-old asked you what you meant by “morality”; such a nine-year-old might not know what “define” means, but I expect you wouldn’t refuse to explain morality on those grounds.
I would only have to point to the distinction between Good Things and Naughty Things which all children have drummed into them from a much earlier age. That is what makes the claim not to have an ordinary-language understanding of morality so unlikely.
Something is morally right if it fulfils the Correct Theory of Morality. I’m not claiming to have that.
Because of the above, I think you are making a claim that a singular Correct Theory of Morality exists. How would you explain that to a nine-year-old? That’s the discussion we could be having.
I don’t see any substantive, real world connection to words like “good” or “moral” in this context. I am assuming you do mean something real by them, and I am asking you to convey that meaning by using simpler words that we both already understand in concrete terms.
And I think you are as capable as anyone else of seeing the ordinary meanings of these terms. There is no guarantee that they are definable in simpler terms or in concrete terms, since it is likely that some concepts are basic or abstract. You have an unusual inability to understand these terms, and an unlikely background theory of meaning. I think those two facts are connected.
It’s almost never sufficient, but it is often necessary to discard wrong words.
..and it’s necessary to have a reasoned motivation for that. If you could really disprove things just by unmotivated refusal to use language, you could disprove everything. Meta-principle: treat one-size-fits-all arguments with suspicion.
Around here we call those “fully general counter-arguments”.
ETA: you’ve misunderstood the grandparent, the point of which is not about a refusal to use language but rather about using it more precisely so as to avoid miscommunication and errors.
I have not noticed NMJablonski offering a more precise replacement vocabulary.
Probably because he doesn’t know what to replace it with. You introduced the words into the conversation. We’re trying to figure out what you mean by them.
This summarizes the situation nicely I think. Thanks.
I did not introduce the words “moral”, “good” etc. They are not some weird never-before encountered vocabulary.
You’re promoting the illusion of transparency. Just explain what you mean, already.
I can only do that if you understand the language I intend to do the explaining in. It’s called English. Do you understand this language?
I have access to a number of dictionaries which, while written entirely in English, contain many definitions. Please, emulate them.
morality: concern with the distinction between good and evil or right and wrong; right or good conduct
good: morally admirable
Ethics (also known as moral philosophy) is a branch of philosophy which seeks to address questions about morality; that is, about concepts such as good and bad, right and wrong, justice, and virtue.
Let me try to guess the next few moves in hopes of speeding this up:
A: Admirable according to whom? (And why’d you use “morally” in the definition of “morality”?)
B: Most people. / Everyone. / Everyone who matters.
A: So basically, if a lot of people or everyone admires something, it is morally good? It’s a popularity contest?
B: No, it’s just objectively admirable.
A: I don’t understand what it would mean to be “objectively admirable”?
B: These are two common words. How can you not understand them?
A: Each might make sense separately, but together no. Perhaps you mean “universally admirable”?
B: Yeah, that sounds good.
A: So basically, if everyone admires something, you will want to call it “morally good.” They will probably appreciate and agree to those approving words, seeing as they all admire it as well.
Or...?
C: Now that you have enough of a handle on “morality” to see the difference between a theory of morality and a theory of flight, you can read the literature.
??? I’m just trying to understand what your definition of morality is.
Don’t you already know what it means? I thought we established that you speak English.
You’re aware that words have more than one definition, and in debates it is customary to define key terms before beginning? Perhaps I could interest you in this.
The debate, which seems to be over, was largely about whether the word has any meaning at all.
So...
“Something is moral if it is good.”
and
“Something is good if it is moral.” ?
I think “admirable” might break the circle and ground the definitions, albeit tenuously.
It could, that’s true. Only, I think, if we clear up who’s doing the admiring. There would be disagreement among a lot of people as to what’s admirable.
Circularity is typical of ordinary dictionary definitions. OTOH, it doesn’t stop people learning meanings.
We all speak English here to some degree.
The issue is that some words are floating, disconnected from anything in reality, and meaningless. Consider the question: do humans have souls?
What would it mean, in terms of actual experience, for humans to have souls? What is a soul? Can you understand how if someone refused to explain what a soul is, claiming it to be a basic thing which no other words can describe, it would be pretty confusing?
What would it mean, in terms of actual experience, for something to be “morally right”? What characteristics make it that way, and how do you know?
To disbelieve in souls, you have to know what “soul” means. You seem to have mistaken an issue of truth for one of meaning.
I think you are going to have to put up with that unfortunate confusion, since you can’t reduce everything to nothing.
Something is morally right if it fulfils the Correct Theory of Morality. I’m not claiming to have that. However, I can recognise theories of morality, and I can do that with my ordinary-language notion of morality. (The theoretic is always based on the pre-theoretic; we do not reach the theoretic in one bound.) I’m not creating stumbling blocks for myself by placing arbitrary requirements on definitions, like insisting that they are both concrete and reductive.
Why do you believe there exists a Correct Theory of Morality?
Why do you believe there exists a Correct Theory of Physics?
As Constant points out here all the arguments based on reductionism that you’re using could just as easily be used to argue that there is no correct theory of physics.
One difference between physics and morality is that there is currently a lot more consensus about what the correct theory of physics looks like than about what the correct theory of morality looks like. However, that is a statement about the current time; if you were to go back a couple of centuries you’d find that there was as little consensus about the correct theory of physics as there is today about the correct theory of morality.
It’s not an argument by reductionism... it’s simply trying to figure out how to interpret the words people are using, because it’s really not obvious. It only looks like reductionism because someone asks, “What is morality?” and the answer comes: “Right and wrong,” then “What should be done,” then “What is admirable”… It is all moralistic language; if any of it means anything, it all means the same thing.
Well, the original argument, way back in the thread, was NMJablonski arguing against the existence of a “Correct Theory of Morality” by demanding that Peter provide “a clear reductionist description of what [he’s] talking about” while “tabooing words like ‘ethics’, ‘morality’, ‘should’, etc.”
My point is that NMJablonski’s request is about as reasonable as demanding that someone arguing for the existence of a “Correct Theory of Physics” provide a clear reductionist description of what one means while tabooing words like ‘physics’, ‘reality’, ‘exists’, ‘experience’, etc.
Fair enough, though I suspect that by asking for a “reductionist” description NMJablonski may have just been hoping for some kind of unambiguous wording.
My point, and possibly Peter’s, is that given our current state of knowledge about meta-ethics I can give no better definition of the words “should”/”right”/”wrong” than the meaning they have in everyday use.
Note, following my analogy with physics, that historically we developed a systematic way of judging the validity of statements about physics, i.e., the scientific method, several centuries before developing a semi-coherent meta-theory of physics, i.e., empiricism and Bayesianism. With morality we’re not even at the “scientific method” stage.
This is consistent with Jablonski’s point that “it’s all preferences.”
In keeping with my physics analogy, saying “it’s all preferences” about morality is analogous to saying “it’s all opinion” about physics.
Clearly there’s a group of people who dislike what I’ve said in this thread, as I’ve been downvoted quite a bit.
I’m not perfectly clear on why. My only position at any point has been this:
I see a universe which contains intelligent agents trying to fulfill their preferences. Then I see conversations about morality and ethics talking about actions being “right” or “wrong”. From the context and explanations, “right” seems to mean very different things. Like:
“Those actions which I prefer” or “Those actions which most agents in a particular place prefer” or “Those actions which fulfill arbitrary metric X”
Likewise, “wrong” inherits its meaning from whatever definition is given for “right”. It makes sense to me to talk about preferences. They’re important. If that’s what people are talking about when they discuss morality, then that makes perfect sense. What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I’m missing, or if there’s something specific I did to elicit downvotes?
You signaled disagreement with someone about morality. What did you expect? :)
Your explanation is simple and fits the facts!
I like it :)
I don’t know anything about downvotes, but I do think that there is a way of understanding ‘right’ and ‘wrong’ independently of preferences. But it takes a conceptual shift.
Don’t think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sociology? Psychology? Game theory? Mathematics? What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
How does someone who thinks that ‘morality’ is meaningless discuss the subject with someone who attaches meaning to the word? Answer: They talk to each other carefully and respectfully.
What do you call the subject matter of that discussion? Answer: Metaethics.
What do you call success in this endeavor? Answer: “Dissolving the confusion”.
Moral philosophy does not illuminate the nature of confusion, it is the confusion. I am asking, what is missing and what confusion is left if you disregard moral philosophy and talk about right and wrong in terms of preferences?
I’m tempted to reply that what is missing is the ability to communicate with anyone who believes in virtue ethics or deontological ethics, and therefore doesn’t see how preferences are even involved. But maybe I am not understanding your point.
Perhaps an example would help. Suppose I say, “It is morally wrong for Alice to lie to Bob.” How would you analyze that moral intuition in terms of preferences? Whose preferences are we talking about here: Alice’s, Bob’s, mine, everybody else’s? For comparison purposes, also analyze the claim “It is morally wrong for Bob to strangle Alice.”
Due to your genetically hard-coded intuitions about appropriate behavior within groups of primates, your upbringing, cultural influences, rational knowledge about the virtues of truth-telling, and preferences involving the well-being of other people, you feel obliged to influence the interaction between Alice and Bob in a way that persuades Alice to do what you want, without her feeling inappropriately influenced by you, by signaling your objection to certain behaviors as an appeal to a higher authority.
If you say, “I don’t want you to strangle Alice.”, Bob might reply, “I don’t care what you want!”.
If you say, “Strangling Alice might have detrimental effects on your other preferences.”, Bob might reply, “I assign infinite utility to the death of Alice!” (which might very well be the case for humans in a temporary rage).
But if you say, “It is morally wrong to strangle Alice.”, Bob might get confused and reply, “You are right, I don’t want to be immoral!”. Which is really a form of coercive persuasion. Since when you say, “It is morally wrong to strangle Alice.”, you actually signal, “If you strangle Alice you will feel guilty.”. It is a manipulative method that might make Bob say, “You are right, I don’t want to be immoral!”, when what he actually means is, “I don’t want to feel guilty!”.
Primates don’t like to be readily controlled by other primates. To get them to do what you want you have to make them believe that, for some non-obvious reason, they actually want to do it themselves.
This sounds like you are trying to explain-away the phenomenon, rather than explain it. At the very least, I would think, such a theory of morality needs to make some predictions or explain some distinctions. For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Complex influences, like your culture and upbringing. That’s also why some people don’t say that it is morally wrong to burn a paperback book while others are outraged by the thought. And those differences and similarities can be studied, among other fields, in terms of cultural anthropology and evolutionary psychology.
It needs a multidisciplinary approach to tackle such questions. But moral philosophy shouldn’t be part of the solution because it is largely mistaken about cause and effect. Morality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense moral philosophy is a meme that is part of a larger effect and therefore can’t be part of a reductionist explanation of itself. The underlying causes of cultural norms and our use of language can be explained by the social and behavioural sciences, applied mathematics like game theory, computer science and linguistics.
But rationality shouldn’t be part of the solution because it is largely mistaken about cause and effect. Rationality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense rationality is a meme that is part of a larger effect and therefore can’t be part of a reductionist explanation of itself.
However, these claims are false, so you have to make a different argument.
I’ve seen this sort of substitution-argument a few times recently, so I’ll take this opportunity to point out that arguments have contexts, and if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments! These elisions are in fact necessary to prevent each argument from being a re-derivation of human society from mathematical axioms. Arguers should try to be sensitive to the way in which the context of an argument may or may not change how that argument applies to other subjects (A simple example: “You should not enter that tunnel because your truck is taller than the ceiling’s clearance” is a good argument only if the truck in question is actually taller than the ceiling’s clearance.). This especially applies when arguments are not meant to be formal, or in fact when they are not intended to be arguments.
These substitution arguments are quite a shortcut. The perpetrator doesn’t actually have to construct something that supports a specific point; instead, they can take an argument they disagree with, swap some words around, leave out any words that are inconvenient, post it, and if the result doesn’t make sense, the perpetrator wins!
Making a valid argument about why the substitution argument doesn’t make sense requires more effort than creating the substitution argument, so if we regard discussions here as a war of attrition, the perpetrator wins even if you create a well-reasoned reply to him.
Substitution arguments are garbage. I wish I knew a clean way to get rid of them. Thanks for identifying them as a thing to be confronted.
Cool, glad I’m not just imagining things! I think that sometimes this sort of argument can be valuable (“That person also has a subjective experience of divine inspiration, but came to a different conclusion”, frex), but I’ve become more suspicious of them recently—especially when I’m tempted to use one myself.
Thing is, this is a general response to virtually any criticism whatsoever. And it’s often true! But it’s not always a terribly useful response. Sometimes it’s better to make explicit that bit of context, or that elided step.
Moreover it’s also a good thing to remember about the other guy’s argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises—that is, next time you see what looks to you to be an invalid argument, it may not be even if strictly on a formal level it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it’s not just about substitutions. It’s a general point.
True! This observation does not absolve us of our eternal vigilance.
Emphatically agreed.
Guilt works here, for example. (But XiXiDu covered that.) Social pressure also. Veiled threat and warning, too. Signaling your virtue to others as well. Moral arguments are so handy that they accomplish all of these in one blow.
ETA: I’m not suggesting that you in particular are trying to guilt trip people, pressure them, threaten them, or signal. I’m saying that those are all possible explanations as to why someone might prefer to couch their arguments in moral terms: it is more persuasive (as Dark Arts) in certain cases. Though I reject moralist language if we are trying to have a clear discussion and get at the truth, I am not against using Dark Arts to convince Bob not to strangle Alice.
Perplexed wrote earlier:
Sometimes you’ll want to explain why your punishment of others is justified. If you don’t want to engage Perplexed’s “moral realism”, then either you don’t think there’s anything universal enough (for humans, or in general) in it to be of explanatory use in the judgments people actually make, or you don’t think it’s a productive system for manufacturing (disingenuous yet generally persuasive) explanations that will sometimes excuse you.
Assuming I haven’t totally lost track of context here, I think I am saying that moral language works for persuasion (partially as Dark Arts), but is not really suitable for intellectual discourse.
Okay. Whatever he hopes is real (but you think is only confused), will allow you to form persuasive arguments to similar people. So it’s still worth talking about.
Virtue ethicists and deontologists merely express a preference for certain codes of conduct because they believe adhering to these codes will maximize their utility, usually via the mechanism of lowering their time preference.
ETA: And also, as XiXiDu points out, to signal virtue.
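As a rough illustration of the “lowering time preference” point in the parent comment (the payoffs and discount rates are invented purely for the example, not anything claimed in the thread): with a high discount rate a one-off gain from breaking a code of conduct looks best, while with a low discount rate the repeated payoff from adhering to it dominates.

```python
# A minimal sketch, under assumed numbers: patience (a high discount factor)
# makes adhering to a code of conduct beat a one-time defection.

def discounted_total(per_period_payoff, discount, periods=50):
    """Sum of a repeated payoff, discounted geometrically."""
    return sum(per_period_payoff * discount**t for t in range(periods))

cooperate_each_round = 3   # hypothetical payoff per round from keeping to the code
defect_once = 10           # hypothetical one-time gain from breaking it, then exclusion

for discount in (0.3, 0.9):
    keep_code = discounted_total(cooperate_each_round, discount)
    print(discount, round(keep_code, 1), "keep" if keep_code > defect_once else "break")
# With discount 0.3 (impatient): keeping the code totals ~4.3 < 10, so breaking looks better.
# With discount 0.9 (patient):   keeping the code totals ~29.8 > 10, so adherence wins.
```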
Upvoted because I strongly agree with the spirit of this post, but I don’t think moral philosophy succeeds in dissolving the confusion. So far it has failed miserably, and I suspect that it is entirely unnecessary. That is, I think this is one field that can be dissolved away.
Like if an atheist is talking to a religious person then the subject matter is metatheology?
Which metrics do I use to judge others?
There has been some confusion over the word “preference” in the thread, so perhaps I should use “subjective value”. Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of subjective high value)
Or do you think there’s a set of metrics for judging people which has some spooky, metaphysical property that makes it “better”?
And why would that even matter as long as I am able to realize what I want without being instantly struck by thunder if I desire or do something that violates the laws of morality? If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter, to whom would it matter and why would I care if I am happy and my preferences are satisfied? Is it some sort of game that I am losing, where those who are the most right win? What if I don’t want to play that game, what if I don’t care who wins?
Because it harms other people directly or indirectly. Most immoral actions have that property.
To the person you harm. To the victim’s friends and relatives. To everyone in the society which is kept smoothly running by the moral code which you flout.
Because you will probably be punished, and that tends to not satisfy your preferences.
If the moral code is correctly designed, yes.
Then you are, by definition, irrational, and a sane society will eventually lock you up as being a danger to yourself and everyone else.
Begging the question.
Either that is part of my preferences or it isn’t.
Either society is instrumental to my goals or it isn’t.
Game theory? Instrumental rationality? Cultural anthropology?
If I am able to realize my goals, satisfy my preferences, don’t want to play some sort of morality game with agreed upon goals and am not struck by thunder once I violate those rules, why would I care?
What is your definition of irrationality? I wrote that if I am happy, able to reach all of my goals and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
Also, what did you mean by
… in response to “Because you will probably be punished, and that tends to not satisfy your preferences.” ?
I think you mean that you should correctly predict the odds and disutility (over your life) of potential punishments, and then act rationally selfishly. I think this may be too computationally expensive in practice, and you may not have considered the severity of the (unlikely) event that you end up severely punished by a reputation of being an effectively amoral person.
Yes, we see lots of examples of successful and happy unscrupulous people in the news. But consider selection effects (that contradiction of conventional moral wisdom excites people and sells advertisements).
I meant that we already do have a field of applied mathematics and science that talks about those things, why do we need moral philosophy?
I am not saying that it is a clear cut issue that we, as computationally bounded agents, should abandon moral language, or that we even would want to do that. I am not advocating to reduce the complexity of natural language. But this community seems to be committed to reductionism, minimizing vagueness and the description of human nature in terms of causal chains. I don’t think that moral philosophy fits this community.
This community doesn’t talk about theology either, it talks about probability and Occam’s razor. Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
It is a useful umbrella term—rather like “advertising”.
Can all of it be described in those terms? Isn’t that a philosophical claim?
There’s nothing to dispute. You have a defensible position.
However, I think most humans have as part of what satisfies them (they may not know it until they try it), the desire to feel righteous, which can most fully be realized with a hard-to-shake belief. For a rational person, moral realism may offer this without requiring tremendous self-delusion. (disclaimer: I haven’t tried this).
Is it worth the cost? Probably you can experiment. It’s true that if you formerly felt guilty and afraid of punishment, then deleting the desire to be virtuous (as much as possible) will feel liberating. In most cases, our instinctual fears are overblown in the context of a relatively anonymous urban society.
Still, reputation matters, and you can maintain it more surely by actually being what you present yourself as, rather than carefully (and eventually sloppily and over-optimistically) weighing each case in terms of odds of discovery and punishment. You could work on not feeling bad about your departures from moral perfection more directly, and then enjoy the real positive feeling-of-virtue (if I’m right about our nature), as well as the practical security. The only cost then would be lost opportunities to cheat.
It’s hard to know who to trust as having honest thoughts and communication on the issue, rather than presenting an advantageous image, when so much is at stake. Most people seem to prefer tasteful hypocrisy and tasteful hypocrites. Only those trying to impress you with their honesty, or those with whom you’ve established deep loyalties, will advertise their amorality.
It’s irrational to think that the evaluative buck stops with your own preferences.
Maybe he doesn’t care about the “evaluative buck”, which while rather unfortunate, is certainly possible.
If he doesn’t care about rationality, he is still being irrational.
This.
I’m claiming that there is a particular moral code which has the spooky game-theoretical property that it produces the most utility for you and for others. That is, it is the metric which is Pareto optimal and which is also a ‘fair’ bargain.
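To make the game-theoretic terms concrete, here is a minimal sketch; it is my own illustration with invented payoffs, not the particular moral code being claimed above. It lists the Pareto-optimal outcomes of a two-player game and picks a “fair” bargain via the Nash bargaining product over an assumed disagreement point.

```python
# A sketch of "Pareto optimal" and "fair bargain" for two agents, using
# hypothetical payoffs. Nothing here is the actual moral code under discussion.

payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),  # also used as the disagreement point
}

def pareto_optimal(payoffs):
    """Outcomes that no other outcome improves for one player without hurting the other."""
    optimal = []
    for move, (a, b) in payoffs.items():
        dominated = any(a2 >= a and b2 >= b and (a2 > a or b2 > b)
                        for a2, b2 in payoffs.values())
        if not dominated:
            optimal.append(move)
    return optimal

def nash_bargain(payoffs, disagreement=(1, 1)):
    """One standard formalization of a 'fair' bargain: maximize the product of gains."""
    d_a, d_b = disagreement
    return max(payoffs, key=lambda m: (payoffs[m][0] - d_a) * (payoffs[m][1] - d_b))

print(pareto_optimal(payoffs))  # [('cooperate', 'cooperate'), ('cooperate', 'defect'), ('defect', 'cooperate')]
print(nash_bargain(payoffs))    # ('cooperate', 'cooperate')
```

On these assumed numbers, mutual cooperation is both Pareto optimal and the “fair” bargain; change the payoffs and the answer changes, which is exactly the point raised in the replies below.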
So you’re saying that there’s one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I’m not convinced.
Even if it is, though, what the optimal strategy is will change if the net values across the group changes. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent’s utility. It all comes down to subjective values. There exists no other motivating force.
True, but that may not be as telling an objection as you seem to think. For example, suppose you run into someone (not me!) who claims that the entire moral code is based on the ‘Golden Rule’ of “Do unto others as you would have others do unto you.” Tell that guy that moral behavior changes if preferences change. He will respond, “Well, duh! What is your point?”
There are people who do not recognize this. It was, in fact, my point.
Edit: Hmm, did I say something rude Perplexed?
Not to me. I didn’t downvote, and in any case I was the first to use the rude “duh!”, so if you were rude back I probably deserved it. Unfortunately, I’m afraid I still don’t understand your point.
Perhaps you were rude to those unnamed people who you suggest “do not recognize this”.
I think we may have reached the somewhat common on LW point where we’re arguing even though we have no disagreement.
It’s easy to bristle when someone in response to you points out something you thought it was obvious that you knew. This happens all the time when people think they’re smart :)
I’m fond of including clarification like, “subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to “be good”).”
Some ways I’ve found to dissolve people’s language back to subjective utility:
If someone says something is good, right, bad, or wrong, ask, “For what purpose?”
If someone declares something immoral, unjust, unethical, ask, “So what unhappiness will I suffer as a result?”
But use sparingly, because there is a big reason many people resist dissolving this confusion.
Yes! That’s a point that I’ve repeated so often to so many different people [not on LW, though] that I’d more-or-less “given up”—it began to seem as futile as swatting flies in summer. Maybe I’ll resume swatting now I know I’m not alone.
This is mainly how I use morality. I control my own actions, not the actions of other people, so for me it makes sense to judge my own actions as good or bad, right or wrong. I can change them. Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Avoiding a person (a) does not (necessarily) persuade them to act differently, but (b) definitely changes the state of the world. This is not a minor nitpicking point. Avoiding people is also called social ostracism, and it’s a major way that people react to misbehavior. It has the primary effect of protecting themselves. It often has the secondary effect of convincing the ostracized person to improve their behavior.
Then I would consider that a case where I could change their behaviour. There are instances where avoiding someone would bother them enough to have an effect, and other cases where it wouldn’t.
Avoiding people who misbehave will change the state of the world even if that does not affect their behavior. It changes the world by protecting you. You are part of the world.
Yes, but if you judge a particular action of your own to be ‘wrong’, then why should you avoid that action? The definition of wrong that I supply solves that problem. By definition if an action is wrong, then it is likely to elicit punishment. So you have a practical reason for doing right rather than doing wrong.
Furthermore, if you do your duty and reward and/or punish other people for their behavior, then they too will have a practical reason to do right rather than wrong.
Before you object “But that is not morality!”, ask yourself how you learned the difference between right and wrong.
It’s a valid point that I probably learned morality this way. I think that’s actually the definition of ‘preconventional’ morality: it’s based on reward/punishment. Maybe all my current moral ideas have roots in that childhood experience, but they aren’t covered by it anymore. There are actions that would be rewarded by most of the people around me, but which I avoid because I consider there to be a “better” alternative. (I should be able to think of more examples of this, but I guess one is laziness at work. I feel guilty if I don’t do the cleaning and maintenance that needs doing even though everyone else does almost nothing. I also try to follow a “golden rule” that if I don’t want something to happen to me, I won’t do it to someone else even if the action is socially acceptable amidst my friends and wouldn’t be punished.)
Ah. Thanks for bringing up the Kohlberg stages—I hadn’t been thinking in those terms.
The view of morality I am promoting here is a kind of meta-pre-conventional viewpoint. That is, morality is not ‘that which receives reward and punishment’, it is instead ‘that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level’.
How many people? I think (I remember reading in my first-year psych textbook) that most adults functioning at a “normal” level in society are at the conventional level: they have internalized whatever moral standards surround them and obey them as rules, rather than thinking directly of punishment or reward. (They may still be thinking indirectly of punishment and reward; a conventionally moral person obeys the law because it’s the law and it’s wrong to break the law, implicitly because they would be punished if they did.) I’m not really sure how to separate how people actually reason on moral issues, versus how they think they do, and whether the two are often (or ever???) the same thing.
How many people are stuck at that level? I don’t know.
How many people must be stuck there to justify the use of punishment as deterrent? My gut feeling is that we are not punishing too much unless the good done (to society) by deterrence is outweighed by the evil done (to the ‘criminal’) by the punishment.
And also remember that we can use carrots as well as sticks. A smile and a “Thank you” provide a powerful carrot to many people. How many? Again, I don’t know, but I suspect that it is only fair to add these carrot-loving pre-conventionalists in with the ones who respond only to sticks.
Cool! Swat away. Though I’m not particularly happy with the metaphor.
Assuming Amanojack explained your position correctly, then there aren’t just people fulfilling their preferences. There are people doing all kinds of things that fulfill or fail to fulfill their preferences—and, not entirely coincidentally, which bring happiness and grief to themselves or others. So then a common reasonable definition of morality (that doesn’t involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
You missed a word in my original. I said that there were agents trying to fulfill their preferences. Now, per my comment at the end of your subthread with Amanojack, I realize that the word “preferences” may be unhelpful. Let me try to taboo it:
There are intelligent agents who assign higher values to some futures than others. I observe them generally making an effort to actualize those futures, but sometimes failing due to various immediate circumstances, which we could call cognitive overrides. What I mean by that is that these agents have biases and heuristics which lead them to poorly evaluate the consequences of actions.
Even if a human sleeping on the edge of a cliff knows that the cliff edge is right next to him, he will jolt if startled by noise or movement. He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it. Similarly, under conditions of sufficient hunger, thirst, fear, or pain, the analytical parts of the agent’s mind give way to evolved heuristics.
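A minimal sketch of the picture described in the two paragraphs above, with invented actions and numbers: the agent normally acts to bring about the future it values most, but an evolved override can fire before any evaluation happens.

```python
# Illustrative only: an agent that assigns values to futures and usually acts
# to actualize the highest-valued one, except when a hard-wired heuristic
# (a "cognitive override") preempts deliberation.

def choose_action(valued_futures, startled=False):
    """valued_futures: dict mapping action -> value the agent assigns to its outcome."""
    if startled:
        # The reflex runs before any evaluation of consequences.
        return "jolt"
    return max(valued_futures, key=valued_futures.get)

futures = {"stay_still": 10, "roll_toward_cliff_edge": -100}
print(choose_action(futures))                 # 'stay_still'
print(choose_action(futures, startled=True))  # 'jolt', even though jolting may be disastrous
```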
If that’s how you would like to define it, that’s fine. Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
I suspect it’s a matter of degree rather than either-or. People sleeping on the edges of cliffs are much less likely to jolt when startled than people sleeping on soft beds, but not 0% likely. The interplay between your biases and your reason is highly complex.
Yes; absolutely. I suspect that a coherent definition of morality that isn’t contingent on those will have to reference a deity.
We are, near as I can tell, in perfect agreement on the substance of this issue. Aumann would be proud. :)
I don’t understand what you mean by preferences when you say “intelligent agents trying to fulfill their preferences”. I have met plenty of people who were trying to do things contrary to their preferences. Perhaps before you try (or someone tries for you) to distinguish morality from preferences, it might be helpful to distinguish precisely how preferences and behavior can differ?
Example? I prefer not to stay up late, but here I am doing it. It’s not that I’m acting against my preferences, because my current preference is to continue typing this sentence. It’s simply that English doesn’t differentiate very well between “current preferences”= “my preferences right this moment” and “current preferences”= “preferences I have generally these days.”
Seinfeld said it best.
But I want an example of people acting contrary to their preferences; you’re giving one of yourself acting according to your current preferences. Hopefully, NMJablonski has an example of a common action that is genuinely contrary to the actor’s preferences. Otherwise, the word “preference” simply means “behavior” to him and shouldn’t be used by him. He could simplify to “the actions I prefer are the actions I perform,” or “morality is just behavior”, which isn’t very interesting to talk about.
“This-moment preferences” are synonymous with “behavior,” or more precisely, “(attempted/wished-for) action.” In other words, in this moment, my current preferences = what I am currently striving for.
Jablonski seems to be using “morality” to mean something more like the general preferences that one exhibits on a recurring basis, not this-moment preferences. And this is a recurring theme: that morality is questions like, “What general preferences should I cultivate?” (to get more enjoyment out of life)
Ok, so if I understand you correctly: It is actually meaningful to ask “what general preferences should I cultivate to get more enjoyment out of life?” If so, you describe two types of preference: the higher-order preference (which I’ll call a Preference) to get enjoyment out of life, and the lower-order “preference” (which I’ll call a Habit or Current Behavior rather than a preference, to conform to more standard usage) of eating soggy bland french fries if they are sitting in front of you regardless of the likelihood of delicious pizza arriving. So because you prefer to save room for delicious pizza yet have the Habit of eating whatever is nearby and convenient, you can decide to change that Habit. You may do so by changing your behavior today and tomorrow and the day after, eventually forming a new Habit that conforms better to your preference for delicious foods.
Am I describing this appropriately? If so, by the above usage, is morality a matter of Behavior, Habit, or Preference?
Sounds fairly close to what I think Jablonski is saying, yes.
Preference isn’t the best word choice. Ultimately it comes down to realizing that I want different things at different times, but in English future wanting is sometimes hard to distinguish from present wanting, which can easily result in a subtle equivocation. This semantic slippage is injecting confusion into the discussion.
Perhaps we have all had the experience of thinking something like, “When 11pm rolls around, I want to want to go to sleep.” And it makes sense to ask, “How can I make it so that I want to go to sleep when 11pm rolls around?” Sure, I presently want to go to sleep early tonight, but will I want to then? How can I make sure I will want to? Such questions of pure personal long-term utility seem to exemplify Jablonski’s definition of morality.
ok cool, replying to the original post then.
Oops, I totally missed this subthread.
Amanojack has, I think, explained my meaning well. It may be useful to reduce down to physical brains and talk about actual computational facts (i.e. utility function) that lead to behavior rather than use the slippery words “want” or “preference”.
Good idea. Like, “My present utility function calls for my future utility function to be such and such”?
I replied to Marius higher up in the thread with my efforts at preference-taboo.
Same here.
It doesn’t mean any of those things, since any of them can be judged wrong.
Morality is about having the right preferences, as rationality is about having true beliefs.
Do you think the sentence “there are truths no-one knows” is meaningful?
I understand what it would mean to have a true belief, as truth is noticeably independent of belief. I can be surprised, and I can anticipate. I have an understanding of a physical world of which I am part, and which generates my experiences.
It does not make any sense for there to be some “correct” preferences. Unlike belief, where there is an actual territory to map, preferences are merely a byproduct of the physical processes of intelligence. They have no higher or divine purpose which demands certain preferences be held. Evolution selects for those which aid survival, and it doesn’t matter if survival means aggression or cooperation. The universe doesn’t care.
I think you and other objective moralists in this thread suffer from extremely anthropocentric thinking. If you rewind the universe to a time before there are humans, in a time of early expansion and the first formation of galaxies, does there exist then the “correct” preferences that any agent must strive to discover? Do they exist independent of what kinds of life evolve in what conditions?
If you are able to zoom out of your skull, and view yourself and the world around you as interesting molecules going about their business, you’ll see how absurd this is. Play through the evolution of life on a planetary scale in your mind. Be aware of the molecular forces at work. Run it on fast forward. Stop and notice the points where intelligence is selected for. Watch social animals survive or die based on certain behaviors. See the origin of your own preferences, and why they are so different from some other humans.
Objective morality is a fantasy of self-importance, and a hold-over from ignorant quasi-religious philosophy which has now cloaked itself in scientific terms and hides in university philosophy departments. Physics is going to continue to play out. The only agents who can ever possibly care what you do are other physical intelligences in your light cone.
Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?
It is plainly the case that people can have morally wrong preferences, so it is no argument against ethics that ethics are not forced on people. People will suffer if they hold incorrect or irrational factual beliefs, and they will suffer if they have evil preferences. In both cases there is a distinction between right and wrong, and in both cases there is an option.
I think you and others on this thread suffer from a confusion between ontology and epistemology. There can be objective truths in mathematics without having the number 23 floating around in space. Moral objectivity likewise does not demand the physical existence of moral objects.
There are things I don’t want done to me. I should not therefore do them to others. I can reason my way to that conclusion without the need for moral objects, and without denying that I am made of atoms.
Wait. So you don’t believe in an objective notion of morality, in the sense of a morality that would be true even if there were no people? Instead, you think of morality as, like, a set of reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their “selfishness” a desire for the well-being of others?
Everything is non-objective for some value of “objective”. It is doubtful that there are mathematical truths without mathematicians. But that does not make math as subjective as art.
Okay. The distinction I am drawing is: are moral facts something “out there” to be discovered, self-justifying, etc., or are they facts about people, their minds, their situations, and their relationships.
Could you answer the question for that value of objective? Or, if not, could you answer the question by ignoring the word “objective” or providing a particular value for it?
The second is closer, but there is still the issue of the fact-value divide.
ETA: I have a substantive pre-written article on this, but where am I going to post it with my karma...?
I translate that as: it’s better to talk about “moral values” than “moral facts” (moral facts being facts about what moral values are, I guess), and moral values are (approximately) reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their “selfishness” a desire for the well-being of others.
Something like that? If not, could you translate for me instead?
I think the fact that moral values apply to groups is important.
I take this to mean that, other than that, you agree.
(This is the charitable reading, however. You seem to be sending strong signals that you do not wish to have a productive discussion. If this is not your intent, be careful—I expect that it is easy to interpret posts like this as sending such signals.)
If this is true, then I think the vast majority of the disagreements you’ve been having in this thread have been due to unnecessary miscommunication.
Mathematics is not Platonically real. If it is, we get Tegmark IV, and then every instant of a sensible, ordered universe is evidence against it, unless we are Boltzmann brains. So, no, mathematics does not have an actual territory. It is an abstraction of physical behaviors that intelligences can use because intelligences are also physical. Mathematics works because we can perform isomorphic physical operations inside our brains.
You can say that as many times as you like, but that won’t make it true.
ETA: You also still haven’t explained how a person can know that.
Only if is-real is a boolean. If it’s a number, then mathematics can be “platonically real” without us being Boltzmann brains.
Upvoted. That’s a good point, but also a whole other rabbit hole. Do you think morality is objective?
As opposed to what? Subjective? What are the options? Because that helps to clarify what you mean by “objective”. Prices are created indirectly by subjective preferences and they fluctuate, but if I had to pick between calling them “subjective” or calling them “objective” I would pick “objective”, for a variety of reasons.
No; morality reduces to values that can only be defined with respect to an agent, or a set of agents plus an aggregation process. However, almost all of the optimizing agents (humans) that we know about share some values in common, which creates a limited sort of objectivity in that most of the contexts we would define morality with respect to agree qualitatively with each other, which usually allows people to get away with failing to specify the context.
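One way to picture this, purely as an illustration: treat each agent’s values as a score over outcomes and let the “aggregation process” be a simple average. Both of those assumptions are mine, not the commenter’s; the point is only that “better” is always computed relative to some agent, or some aggregate of agents, and the shared core is what produces the limited objectivity.

```python
# Minimal sketch: per-agent valuations plus one possible aggregation rule
# (an average). Agents, outcomes, and numbers are all hypothetical.

def aggregate(agents, outcome):
    """Average the agents' valuations of an outcome."""
    return sum(agent(outcome) for agent in agents) / len(agents)

# Two agents with different tastes but a shared core value.
alice = lambda o: {"kept promise": 1.0, "broke promise": -1.0, "vanilla": 0.3}.get(o, 0.0)
bob   = lambda o: {"kept promise": 0.9, "broke promise": -0.8, "vanilla": -0.2}.get(o, 0.0)

# Agreement on the shared part is what lets people get away with not
# specifying which agents the judgment is defined with respect to.
print(aggregate([alice, bob], "kept promise"))   # 0.95
print(aggregate([alice, bob], "broke promise"))  # -0.9
print(aggregate([alice, bob], "vanilla"))        # 0.05 -- no consensus here
```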
Upvoted. I think you could get a decent definition of the word “morality” along these lines.
A person can know that by reasoning about it.
If you think there is nothing wrong with having a preference for murder, it is about time you said so. It changes a lot.
It still isn’t clear what it means for a preference for murder to be “wrong”!
So far I can only infer your definition of “wrong” to be:
“Not among the correct preferences”
… but you still haven’t explained to us why you think there are correct preferences, besides to stamp your foot and say over and over again “There are obviously correct preferences” even when many people do not agree.
I see no reason to believe that there is a set of “correct” preferences to check against.
So you think there is nothing wrong in having a preference for murder? Yes or no?
I need to find out whether I should be arguing to specific cases from general principles or vice versa.
I do not believe there is a set of correct preferences. There is no objective right or wrong.
Funny how you never quite answer the question as stated. Can you even say it is subjectively wrong?
“Wrong” meaning what?
Would I prefer the people around me not be bloodthirsty? Yes, I would prefer that.
Can people reason that bloodthirst is not a good preference to have...?
Even if there’s no such thing as objective right and wrong, they might easily be able to reason that being bloodthirsty is not in their best selfish interest.
If there’s no right or wrong, why does that matter?
I don’t understand the question, nor why you singled out that fragment.
When you say “Even if there’s no such thing as objective right and wrong” you’re still implicitly presuming a default morality, namely ethical egoism.
Yes. Even subjective morality refutes NMJ’s nihilism.
I agree with Sewing-Machine
Being bloodthirsty would lead to results I do not prefer.
ETA: Therefore I would not choose to become bloodthirsty. This is based on existing preference.
For me, now, it isn’t practical. In other circumstances it would be. It need not ever be a terminal goal but it could be an instrumental goal built in deeply.
It isn’t ‘funny’ at all. You were trying to force someone into a lose-lose morality-signalling position. It is appropriate to ignore such attempts and instead state what your actual position is.
Your gambit here verges on logically rude.
In keeping with my analogy let’s translate your position into the corresponding position on physics:
Do you still agree with the changed version? If not, why not?
(I never realized how much fun it could be to play a chronophone.)
Based upon my experiences, physical truths appear to be concrete and independent of beliefs and opinions. I see no cases where “right” has a meaning outside of an agent’s preferences. I don’t know how one would go about discovering the “rightness” of something, as one would a physical truth.
It is a poor analogy.
Edit: Seriously? I’m not trying to be obstinate here. Would people prefer I go away?
New edit: Thanks wedrifid. I was very confused.
You’re not being obstinate. You’re more or less right, at least in the parent. There are a few nuances left to pick up but you are not likely to find them by arguing with Eugine.
Please explain what the word “concrete” means independent of anyone’s beliefs and opinions.
How about this. You stop down-voting the comments in this thread you disagree with and I’ll do the same.
… I’m not down-voting the comments I disagree with.
I down-voted a couple of snide comments from Peter earlier.
Well, somebody is.
If it’s not you I’m sorry.
For the record, I think in this thread Eugine_Nier follows a useful kind of “simple truth”, not making errors as a result, while some of the opponents demand sophistication in lieu of correctness.
I think we’re demanding clarity and substance, not sophistication. Honestly I feel like one of the major issues with moral discussions is that huge sophisticated arguments can emerge without any connection to substantive reality.
I would really appreciate it if someone would taboo the words “moral”, “good”, “evil”, “right”, “wrong”, “should”, etc. and try to make the point using simpler concepts that have less baggage and ambiguity.
Clarity can be difficult. What do you mean by “truth”?
I mean it in precisely the sense that The Simple Truth does. Anticipation control.
That’s not the point. You must use your heuristics even if you don’t know how they work, and avoid demanding to know how they work or how they should work as a prerequisite to being allowed to use them. Before developing technical ideas about what it means for something to be true, or what it means for something to be right, you need to allow yourself to recognize when something is true, or is right.
I’m sorry, but if we had no knowledge of brains, cognition, and the nature of preference, then sure, I’d use my feelings of right or wrong as much as the next guy, but that doesn’t make them objectively true.
Likewise, just because I intuitively feel like I have a time-continuous self, that doesn’t make consciousness fundamental.
As an agent, having knowledge of what I am, and what causes my experiences, changes my simple reliance on heuristics to a more accurate scientific exploration of the truth.
Just make sure that the particular piece of knowledge you demand is indeed available, and not, say, just the thing you are trying to figure out.
(Nod)
I still think it’s a pretty simple case here. Is there a set of preferences which all intelligent agents are compelled by some force to adopt? Not as far as I can tell.
Morality doesn’t work like physical law either. Nobody is compelled to be rational, but people who do reason can agree about certain things. That includes moral reasoning.
I think we should move this conversation back out of the other post, where it really doesn’t belong.
Can you clarify what you mean by this?
For what X are you saying “All agents that satisfy X must follow morality.”?
If you’re moving it anyway, I would recommend moving it here instead.
I’m saying that in “to be moral you must follow whatever rules constitute morality” the “must” is a matter of logical necessity, as opposed to the two interpretations of compulsion considered by NMJ: physical necessity, and edict.
You still haven’t explained, in this framework, how one gets that people “should” be moral any more than they “should” play chess. If morality is just another game, then it loses all the force you associate with it, and it seems clear that you are distinguishing between chess and morality.
The rules of physics have a special quality of unavoidability: you don’t have an option to avoid them. Likewise, people are held morally accountable under most circumstances and can’t just avoid culpability by saying “oh, I don’t play that game”. I don’t think these are a posteriori facts. I think physics is definitionally the science of the fundamental, and morality is definitionally where the evaluative buck stops.
… but they’re held morally accountable by agents whose preferences have been violated. The way you just described it means that morality is just those rules that the people around you currently care enough about to punish you if you break them.
In which case morality is entirely subjective and contingent on what those around you happen to value, no?
It can make sense to say that the person being punished was actually in the right. Were the British right to imprison Gandhi?
Peter, at this point, you seem very confused. You’ve asserted that morality is just like chess, apparently comparing it to a game where one has agreed-upon rules. You’ve then tried to assert that morality is somehow different, a more privileged game that people “should” play, but the only evidence you’ve given is that in societies with a given moral system, people who don’t abide by that moral system suffer. Yet your comment about Gandhi then endorses naive moral realism.
It is possible that there’s a coherent position here and we’re just failing to understand you. But right now that looks unlikely.
As I have pointed out about three times, the comparison with chess was to make a point about obligation, not to make a point about arbitrariness.
I never gave that; that was someone else’s characterisation. What I said was that it is an analytical truth that morality is where the evaluative buck stops.
I don’t know what you mean by the “naive” in naive realism. It is a central characteristic of any kind of realism that you can have truth beyond conventional belief. The idea that there is more to morality than what a particular society wants to punish is a coherent one. It is better as morality, because subjectivism is too subject to get-out clauses. It is better as an explanation, because it can explain how de facto morality in societies and individuals can be overturned for something better.
Yes, roughly speaking when the person being punished fits into the category ‘us’ rather than ‘them’. Especially ‘me’.
Hmm… This is reminiscent of Eliezer’s (and my) metaethics¹. In particular, I would say that “the rules that constitute morality” are, by the definition embedded in my brain, some set which I’m not exactly sure of the contents of but which definitely includes {kindness, not murdering, not stealing, allowing freedom, …}. (Well, it may actually be a utility function, but sets are easier to convey in text.)
In that case, “should”, “moral”, “right” and the rest are all just different words for “the object is in the above set (which we call morality)”. And then “being moral” means “following those rules” as a matter of logical necessity, as you’ve said. But this depends on what you mean by “the rules constituting morality”, on which you haven’t said whether you agree.
What do you think?
What determines the contents of the set / details of the utility function?
The short answer is: my/our preferences (suitably extrapolated).
The long answer is: it exists as a mathematical object regardless of anyone’s preference, and one can judge things by it even in an empty universe. The reason we happen to care about this particular object is because it embodies our preferences, and we can find out exactly what object we are talking about by examining our preferences. It really adds up to the same thing, but if one only heard the short answer they might think it was about preferences, rather than described by them.
But anyway, I think I’m mostly trying to summarise the metaethics sequence by this point :/ (probably wrongly :p)
I see what you mean, and I don’t think I disagree.
I think one more question will clarify. If your / our preferences were different, would the mathematical set / utility function you consider to be morality be different also? Namely, is the set of “rules that constitute morality” contingent upon what an agent already values (suitably extrapolated)?
No. On the other hand, me!pebble-sorter would have no interest in morality at all, and go on instead about how p-great p-morality is. But I wouldn’t mix up p-morality with morality.
So, you’re defining “morality” as an extrapolation from your preferences now, and if your preferences change in the future, that future person would care about what your present self might call futureYou-morality, even if future you insists on calling it “morality”?
No matter what opinions anyone holds about gravity, objects near the surface of the earth not subject to other forces accelerate towards the earth at 9.8 meters per second per second. This is an empirical fact about physics, and we know ways our experience could be different if it were wrong. Do you have an example of a fact about morality, independent of preferences, such that we could notice if it is wrong?
Killing innocent people is wrong barring extenuating circumstances.
(I’ll taboo the “weasel words” innocent and extenuating circumstances as soon as you taboo the “weasel words” near the surface of the earth and not subject to other forces.)
I’m not sure it’s possible for my example to be wrong any more than it’s possible for 2+2 to equal 3.
What is the difference between:
“Killing innocent people is wrong barring extenuating circumstances”
and
“Killing innocent people is right barring extenuating circumstances”
How do you determine which one is accurate? What observable consequences does each one predict? What do they lead you to anticipate?
Moral facts don’t lead me to anticipate observable consequences, but they do affect the actions I choose to take.
Preferences also do that.
Yes, well, opinions also anticipate observations. But in a sense, by talking about “observable consequences” you’re taking advantage of the fact that the meta-theory of science is currently much more developed than the meta-theory of ethics.
But some preferences can be moral, just as some opinions can be true. There is no automatic entailment from “it is a preference” to “it has nothing to do with ethics”.
The question was—how do you determine what the moral facts are?
Currently, intuition. Along with the existing moral theories, such as they are.
Similar to the way people determined facts about physics, especially facts beyond the direct observation of their senses, before the scientific method was developed.
Right, and ‘facts’ about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of… intuitions.
You can’t really argue that objective morality not being well-defined means that it is more likely to be a coherent notion.
My point is that you can’t conclude the notion of morality is incoherent simply because we don’t yet have a sufficiently concrete definition.
Technically, yes. But I’m pretty much obliged, based on the current evidence, to conclude that it’s likely to be incoherent.
More to the point: why do you think it’s likely to be coherent?
Mostly by outside view analogy with the history of the development of science. I’ve read a number of ancient Greek and Roman philosophers (along with a few post-modernists) arguing against the possibility of a coherent theory of physics using arguments very similar to the ones people are using against morality.
I’ve also read a (much larger) number of philosophers trying to shoehorn what we today call science into using the only meta-theory then available in a semi-coherent state: the meta-theory of mathematics. Thus we see philosophers, Descartes being the most famous, trying and failing to study science by starting with a set of intuitively obvious axioms and attempting to derive physical statements from them.
I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.
As for how likely: I’m not sure, but I think it’s more likely than a lot of people on this thread assume.
To be clear—you are talking about morality as something externally existing, some ‘facts’ that exist in the world and dictate what you should do, as opposed to a human system of don’t be a jerk. Is that an accurate portrayal?
If that is the case, there are two big questions that immediately come to mind (beyond “what are these facts” and “where did they come from”) - first, it seems that Moral Facts would have to interact with the world in some way in order for the study of big-M Morality to be useful at all (otherwise we could never learn what they are), or they would have to be somehow deducible from first principles. Are you supposing that they somehow directly induce intuitions in people (though, not all people? so, people with certain biological characteristics?)? (By (possibly humorous, though not mocking!) analogy, suppose the Moral Facts were being broadcast by radio towers on the moon, in which case they would be inaccessible until the invention of radio. The first radio is turned on and all signals are drowned out by “DON’T BE A JERK. THIS MESSAGE WILL REPEAT. DON’T BE A JERK. THIS MESSAGE WILL...”.)
The other question is, once we have ascertained that there are Moral Facts, what property makes them what we should do? For instance, suppose that all protons were inscribed in tiny calligraphy in, say, French, “La dernière personne qui est vivant, gagne.” (“The last person who is alive, wins”—apologies for Google Translate) Beyond being really freaky, what would give that commandment force to convince you to follow it? What could it even mean for something to be inherently what you should do?
It seems, ultimately, you have to ask “why” you should do “what you should do”. Common answers include that you should do “what God commands” because “that’s inherently What You Should Do, it is By Definition Good and Right”. Or, “don’t be a jerk” because “I’ll stop hanging out with you”. Or, “what makes you happy and fulfilled, including the part of you that desires to be kind and generous” because “the subjective experience of sentient beings are the only things we’ve actually observed to be Good or Bad so far”.
So, where do we stand now?
Now we’re getting somewhere. What do you mean by the word “jerk”, and why is it any more meaningful than words like “moral”/“right”/“wrong”?
The distinction I am trying to make is between Moral Facts Engraved Into The Foundation Of The Universe and A Bunch Of Words And Behaviors And Attitudes That People Have (as a result of evolution & thinking about stuff etc.). I’m not sure if I’m being clear, is this description easier to interpret?
Near as I can tell, what you mean by “don’t be a jerk” is one possible example of what I mean by morality.
Hope that helps.
Great! Then I think we agree on that.
If that is true, what virtue do moral facts have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
If I knew the answer we wouldn’t be having this discussion.
Define your terms, then you get a fair hearing. If you are just saying the terms could maybe someday be defined, this really isn’t the kind of thing that needs a response.
To put it in perspective, you are speculating that someday you will be able to define what the field you are talking about even is. And your best defense is that some people have made questionable arguments against this non-theory? Why should anyone care?
After thinking about it a little I think I can phrase it this way.
I want to answer the question: “What should I do?”
It’s kind of a pressing question since I need to do something (doing nothing counts as a choice and usually not a very good one).
If the people arguing that morality is just preference answer: “Do what you prefer”, my next question is “What should I prefer?”
Three definitions of “should”:
As for obligation—I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don’t really see how an ordinary person could be all that puzzled about what his obligations are.
As for propriety—over and above your obligation to avoid uncontroversially nasty behavior, I doubt you have much trouble discovering what’s socially acceptable (stuff like, not farting in an elevator), and anyway, it’s not the end of the world if you offend somebody. Again, I don’t really see how an ordinary person is going to have a problem.
As for expediency—I doubt you intended the question that way.
If this doesn’t answer your question in full you probably need to explain the question. The utilitarians have this strange notion that morality is about maximizing global utility, so of course, morality in the way that they conceive it is a kind of life-encompassing total program of action, since every choice you make could either increase or decrease total utility. Maybe that’s what you want answered, i.e., what’s the best possible thing you could be doing.
But the “should” of obligation is not like this. We have certain obligations but these are fairly limited, and don’t provide us with a life-encompassing program of action. And the “should” of propriety is not like this either. People just don’t pay you any attention as long as you don’t get in their face too much, so again, the direction you get from this quarter is limited.
You have collapsed several meanings of obligation together there. You may have explicit legal obligations to the state, and IOU-style obligations to individuals who have done you a favour, and so on. But moral obligations go beyond all those. If you are living under a brutal dictatorship, there are conceivable circumstances where you morally should not obey the law. Etc., etc.
In order to accomplish what?
Should you prefer chocolate ice cream or vanilla? As far as ice cream flavors go, “What should I prefer” seems meaningless...unless you are looking for an answer like, “It’s better to cultivate a preference for vanilla because it is slightly healthier” (you will thereby achieve better health than if you let yourself keep on preferring chocolate).
This gets into the time structure of experience. In other words, I would be interpreting your, “What should I prefer?” as, “What things should I learn to like (in order to get more enjoyment out of life)?” To bring it to a more traditionally moral issue, “Should I learn to like a vegetarian diet (in order to feel less guilt about killing animals)?”
Is that more or less the kind of question you want to answer?
Including the word ‘just’ misses the point. Being about preference in no way makes it less important.
This might have clarified for me what this dispute is about. At least I have a hypothesis, tell me if I’m on the wrong track.
Antirealists aren’t arguing that you should go on a hedonic rampage—we are allowed to keep on consulting our consciences to determine the answer to “what should I prefer.” In a community of decent and mentally healthy people we should flourish. But the main upshot of the antirealist position is that you cannot convince people with radically different backgrounds that their preferences are immoral and should be changed, even in principle.
At least, antirealism gives some support to this cynical point of view, and it’s this point of view that you are most interested in attacking. Am I right?
That’s a large part of it.
The other problem is that anti-realists don’t actually answer the question “what should I do?”, they merely pass the buck to the part of my brain responsible for my preferences but don’t give it any guidance on how to answer that question.
Talk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a “what color is the sky?” type of conversation.
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
Something more objective/universal.
Edit: a more serious issue is that, just as equating facts with opinions tells you nothing about what opinions you should hold, equating morality and preference tells you nothing about what you should prefer.
So we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences—obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I’d rather select a different social milieu, myself.
I recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape.
I now have a strong preference for losing weight, and reinforced my preference for exercise, because the gains from both activities went up significantly. This also resulted in having a much lower preference for certain types of food, as they’re contrary to these new preferences.
I’d think that’s a pretty concrete example of changing my preferences, unless we’re using different definitions of “preference.”
I suppose we are using different definitions of “preference”. I’m using it as a friendly term for a person’s utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can’t be understood that way. For example, what you’re calling food preferences are what I’d call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.
Ahh, I re-read the thread with this understanding, and was struck by this:
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)
Does that make sense as a “motivation for wanting to change your preferences”?
I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.
My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone’s preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I’m screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don’t go implementing it. Please.
In general, if the FAI is going to give “your preference” to you, your preference had better be something stable about you that you’ll still want when you get it.
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it. I certainly would want an akrasia fix if it were available. Maybe that’s the important preference.
Very much agreed.
At the end of the day, you’re going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way) so you can figure out the immediate outcome with the best preferred expected long-term utility and predict the person is going to take an action that gets them there.
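To make the “added up (or combined some other way)” picture concrete, here is a toy sketch that assumes the combination rule is a weighted sum over a few invented drives. The drives, actions, and weights are all hypothetical, and nothing here is anyone’s actual model; the only point is that, however many component functions you posit, the observable choice falls out of one combined score.

```python
# Several competing "drives", each scoring the same actions, plus a
# conflict-resolution layer that combines them. Purely illustrative.

drives = {
    "health":     lambda a: {"gym": 2.0, "dessert": -1.0, "procrastinate": 0.0}[a],
    "comfort":    lambda a: {"gym": -1.0, "dessert": 1.5, "procrastinate": 1.0}[a],
    "scuba_prep": lambda a: {"gym": 1.5, "dessert": -0.5, "procrastinate": -0.5}[a],
}

def resolve(weights, action):
    """Conflict-resolution layer: combine the drives into one score."""
    return sum(weights[name] * drive(action) for name, drive in drives.items())

weights = {"health": 1.0, "comfort": 1.0, "scuba_prep": 2.0}
actions = ["gym", "dessert", "procrastinate"]
print(max(actions, key=lambda a: resolve(weights, a)))  # -> "gym" with these weights
```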
I don’t think very many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I’d agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model it as conflicting desires and a conflict resolution function layered on top, but I think we both agree on the actual result, which is that people aren’t optimizing for a single clear goal like “happiness” or “lifetime income”.
Prediction seems to run in to the issue that utility evaluations change over time. I used to place a high utility value on sweets, now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. So on.
It strikes me as being rather like weather: You can predict general patterns, and even manage a decent 5-day forecast, but you’re going to have a lot of trouble making specific long-term predictions.
There isn’t an instrumental motive for changing one’s preferences. That doesn’t add up to “never change your preferences” unless you assume that instrumentality (“does it help me achieve anything?”) is the ultimate way of evaluating things. But it isn’t: morality is. It is morally wrong to design better gas chambers.
The interesting question is still the one you didn’t answer yet:
I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.
The unlikely answer is “I wouldn’t do anything different”. Then I’d reply “So, morality makes no practical difference to your behavior?”, and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior, if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”. I am in the same situation in that when I’m willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.
If I guessed your answer wrong, please correct me. Otherwise I’d want to fix the vocabulary problem somehow. I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts, perhaps an “amoral preference” which would mean what you were calling “preference” before, and “moral preference” would include what you were calling “morality” before, but perhaps we’d choose different words if you objected to those. The next question would be:
...and I have no clue what your answer would be, so I can’t continue the conversation past that point without straightforward answers from you.
Follow morality.
One way to illustrate this distinction is using Eliezer’s “murder pill”. If you were offered a pill that would reverse and/or eliminate a preference would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered the answer is probably no.
One of the reasons this distinction is important is that, because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we’re talking about, which hasn’t happened.
That’s not a definition of morality but an explanation of one reason why the “murder pill” distinction is important.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
Are you saying there has never been any valid moral discourse or persuasion?
There’s a difference between changing your mind because a discussion lead you to bound your rationality differently, and changing your mind because of suggestability and other forms of sloppy thinking. Logic and mathematics is the former, if done right. I haven’t seen much non-sloppy thinking on the subject of changing preferences.
I suppose there could be such a thing—Joe designed an elegant high-throughput gas chamber, he wants to show the design to his friends, someone tells Joe that this could be used for mass murder, Joe hadn’t thought that the design might actually be used, so he hides his design somewhere so it won’t be used. But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
But there are other stories where the preference itself changes. “If you approve of women’s rights, you should approve of gay rights”.
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights. Even if your argument above worked, I can’t envision a plausible reasoning system in which the argument is valid. Can you offer one? Otherwise, it only worked because the listener was confused, and we’re back to morality being a special case of psychology again.
Because I don’t know how to do moral arguments better. So far as I can tell, they always seem to wind up either being wrong, or not being moral arguments.
They are not going to arrive without overcoming opposition somehow.
Does that mean your “because gays/women want them” isn’t valid? Why offer it then?
Because you reject them?
But preference itself is influenced by reasoning and experience. The Preference theory focuses on proximate causes, but there are more distal ones too.
I am not and never was using “preference” to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences. That is not an argument for nihilism or relativism. Analogously, you could have an epistemology where everything is talked about as belief, and the difference between true belief and false belief is ignored.
If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can’t do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn’t “made” important by some greater good.
There’s a choice you’re making here, differently from me, and I’d like to get clear on what that choice is and understand why we’re making it differently.
I have a bunch of things I prefer. I’d rather eat strawberry ice cream than vanilla, and I’d rather not design higher-throughput gas chambers. For me those two preferences are similar in kind—they’re stuff I prefer and that’s all there is to be said about it.
You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.
I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it’s even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That’s not what I’m talking about. What I’m talking about is the question of why the distinction is important to you. For example, I could define the preferences of mine that can be easily described without using the letter “s” to be “blort” preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you’d be left wondering “Why does he care?”
And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless. A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it.
The decision to use the concept of “morality” is like any other decision to define and use a concept. We should use it if the cost of talking about it is paid for by the added clarity it brings. If we don’t use the concept, that doesn’t change whether anyone wants to build high-throughput gas chambers—it just means that we don’t have the tools to talk about the difference in kind between ice cream flavor preferences and gas chamber building preferences. If there’s no use for such talk, then we should discard the concept, and if there is a use for such talk, we should keep the concept and try to assign a useful and clear meaning to it.
So what use is the concept of morality? How do people benefit from regarding ice cream flavor preferences as a different sort of thing from gas chamber building preferences?
I hope we’re agreed that there are two different kinds of things here—the strongly held preference to not design high-throughput gas chambers is a different kind of thing from the decision to label that preference as a moral one. The former influences the options available to a well-organized mass murderer, and the latter determines the structure of conversations like this one. The former is a value, the latter is a choice about how words label things. I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
You’ve written quite a lot of words but you’re still stuck on the idea that all importance is instrumental importance, importance for something that doesn’t need to be important in itself. You should care about morality because it is a value, and values are definitionally what is important and what should be cared about. If you suddenly started liking vanilla, nothing important would change. You wouldn’t stop being you, and your new self wouldn’t be someone your old self would hate. That wouldn’t be the case if you suddenly started liking murder or gas chambers. You don’t now like people who like those things, and you wouldn’t now want to become one.
If we understand what is going on, we should make the choice correctly—that is, according to rational norms. If morality means something other than the merely pragmatic, we should not label the pragmatic as the moral. And it must mean something different, because it is an open, investigable question whether some instrumentally useful thing is also ethically good, whereas questions like “is the pragmatic useful” are trivial and tautologous.
You’re not getting the distinction between morality-the-concept-worth-having and morality-the-value-worth-enacting.
I’m looking for a useful definition of morality here, and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it’s strongly held, which doesn’t seem very interesting. If we’re going to have the distinction, I like Eugene’s proposal that a moral preference is one that’s worth talking about better, but we need to make the distinction in such a way that something doesn’t get promoted to being a moral preference just because people are easily deceived about it. There should be true things to say about it.
But what I actually gave as a definition is that the concept of morality is the concept of ultimate value and importance. A concept which even the nihilists need so that they can express their disbelief in it. A concept which even social and cognitive scientists need so they can describe the behaviour surrounding it.
You are apparently claiming there is some important difference between a strongly held preference and something of ultimate value and importance. Seems like splitting hairs to me. Can you describe how those two things are different?
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
One can do experiments to determine whether beliefs are true, for the beliefs that matter. What can one do with a preference to figure out if it should be strongly held?
If that question has no answer, the claim that the two are similar seems indefensible.
What makes them matter?
Reason about it?
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
Can you give an example? I tried to make one at http://lesswrong.com/lw/5eh/what_is_metaethics/43fh, but it twisted around into revising a belief instead of revising a preference.
So it doesn’t matter if it only affects what you will do?
If I’m thinking for the purpose of figuring out my future actions, that’s a plan, not a belief, since planning is relevant when I haven’t yet decided what to do.
I suppose beliefs about other people’s actions are empirical.
I’ve lost the relevance of this thread. Please state a purpose if you wish to continue, and if I like it, I’ll reply.
Okay, that seems clear enough that I’d rather pursue that than try to get an answer to any of my previous questions, even if all we may have accomplished here is to trade Eugene’s evasiveness for Peter’s.
If you know that morality is the ultimate way of evaluating things, and you’re able to use that to evaluate a specific thing, I hope you are aware of how you performed that evaluation process. How did you get to the conclusion that it is morally wrong to design better gas chambers?
Execution techniques have improved over the ages. A guillotine is more compassionate than an axe, for example, since with an axe the executioner might need a few strokes, and the experience for the victim is pretty bad between the first stroke and the last. Now we use injections that are meant to be painless, and perhaps they actually are. In an environment where executions are going to happen anyway, it seems compassionate to make them happen better. Are you saying gas chambers, specifically, are different somehow, or are you saying that designing the guillotine was morally wrong too and it would have been morally preferable to use an axe during the time guillotines were used?
I’m pretty sure he means to refer to high-throughput gas chambers optimized for purposes of genocide, rather than individual gas chambers designed for occasional use.
He may or may not oppose the latter, but improving the former is likely to increase the number of murders committed.
Agreed, so I deleted my post to avoid wasting Peter’s time responding.
Let’s try a different approach.
I have spent some time thinking about how to apply the ideas of Eliezer’s metaethics sequence to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here, the distinction between preferences and biases is very arbitrary.
So the question becomes how do you separate which of your intuitions are preferences and which are biases?
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else.
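For concreteness, one common reading of “interact in the proper way” is expected utility: weight each possible world-state’s value by its probability. That reading is an assumption on my part, not necessarily what the commenter means, and the numbers below are invented; as comes up later in the thread, the real combination need not be this simple sum.

```python
# Sketch of the textbook interaction between uncertainty and a utility
# function over world-states: expected utility. Illustrative numbers only.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# A bias such as scope insensitivity would show up as a choice pattern that
# no consistent (probability, utility) assignment like this can reproduce.
options = {
    "save 1 bird for sure":         [(1.0, 1.0)],
    "10% chance to save 100 birds": [(0.1, 100.0), (0.9, 0.0)],
}
best = max(options, key=lambda a: expected_utility(options[a]))
print(best)  # -> "10% chance to save 100 birds"
```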
I don’t see how that question is relevant. I don’t see any good reason for you to dodge my question about what you’d do if your preferences contradicted your morality. It’s not like it’s an unusual situation—consider the internal conflicts of a homosexual Evangelist preacher, for example.
What makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or to replace short-term preferences with long-term ones, that would seem to be the sort of thing that could fairly be described as reasoning.
I don’t judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn’t. It’s true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you’re in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)
This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way and the fact that I went the wrong way isn’t evidence that I don’t want to go to the grocery store. That’s a confusing issue and I’m hoping we can assume for the purposes of discussion about morality that the people we’re talking about have true beliefs.
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
I’m not saying it’s a complete description of me. To describe how I think you’d also need a description of my possibly-false beliefs, and you’d also need to reason about uncertain knowledge of my preferences and possibly-false beliefs.
In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn’t a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. “I want to go north” might really be “I believe the grocery store is north of here and I want to go to the grocery store”. “I want to go to the grocery store” might be a further conflation of preference and belief, such as “I want to get some food” and “I believe I will be able to get food at the grocery store”. Eventually you can unpack all the beliefs and get the true preference, which might be “I want to eat something interesting today”.
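A crude way to picture that unpacking, using the grocery-store example above (the data structure and the final connecting belief are mine, purely for illustration):

```python
# Each stated want is either terminal (a true preference) or a belief plus a deeper want.
wants = {
    "go north": ("the grocery store is north of here", "go to the grocery store"),
    "go to the grocery store": ("I can get food at the grocery store", "get some food"),
    # The connecting belief below is invented just to complete the chain.
    "get some food": ("food is how I end up eating something interesting", "eat something interesting today"),
}

def unpack(want):
    """Strip away the beliefs until only the underlying preference remains."""
    beliefs = []
    while want in wants:
        belief, want = wants[want]
        beliefs.append(belief)
    return beliefs, want

beliefs, true_preference = unpack("go north")
print(true_preference)  # eat something interesting today
```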
That still doesn’t explain what the difference between your preferences and your biases is.
That’s rather startling. Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
It’s a term we’re defining because it’s useful, and we can define it in a way that it holds from birth forever afterward. Tim had the short-term preference dated around age 3 months to suck mommy’s breast, and Tim apparently has a preference to get clarity about what these guys mean when they talk about morality dated around age 44 years. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person’s preferences, and preferences that don’t change over time tend to be simpler, but if that’s contradicted by observation you settle for different preferences at different times.
I suppose I should have said “If a preference changes as a consequence of reasoning or reflection, it wasn’t a preference”. If the context of the statement is lost, that distinction matters.
So you are defining “preference” in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
I agree! Consider, for instance, taste in particular foods. I’d say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you’re hemi-directly altering your preferences).
Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it’s pleasurable—but I think the proper level of unpacking is “experience drinking coffee”, not “experience pleasurable sensations”, because the experience being pleasurable is what makes it a preference in this case. That’s how it seems to me, at least. Am I missing something?
“The proper way” being built in as part of the utility function, and not (necessarily) being a simple sum of world-state values weighted by their probabilities.
Um, no. Unless you are some kind of mutant who doesn’t suffer from scope insensitivity or any of the related biases, your uncertainty about the future doesn’t interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary.
Here is the example PhilGoetz gives in the article I linked above:
I believe I answered your other question elsewhere in the thread.
Rationality is the equivalent of normative morality: it is a set of guidelines for arriving at the opinions you should have, namely true ones. Epistemology is the equivalent of metaethics: it strives to answer the question “what is truth?”
People clearly have opinions they act on. What makes you think we need this so-called “rationality” to tell us which opinions to have?
Intuitions are internalizations of custom, an aspect of which is morality. Our intuitions result from our long practice of observing custom. By “observing custom” I mean of course adhering to custom, abiding by custom. In particular, we observe morality—we adhere to it, we abide by it—and it is from observing morality that we gain our moral intuitions. This is a curious verbal coincidence, that the very same word “observe” applies in both cases even though it means quite different things. That is:
Our physical intuitions are a result of observing physics (in the sense of watching attentively).
Our moral intuitions are a result of observing morality (in the sense of abiding by).
However, discovering physics is not nearly as passive as is suggested by the word “observe”. We conduct experiments. We try things and see what happens. We test the physical world. We kick the rock—and discover that it kicks back. Reality kicks back hard, so it’s a good thing that children are so resilient. An adult that kicked reality as hard as kids kick it would break their bones.
And discovering morality is similarly not quite as I said. It’s not really by observing (abiding by) morality that we discover morality, but by failing to observe (violating) morality that we discover morality. We discover what the limits are by testing the limits. We are continually testing the limits, though we do it subtly. But if you let people walk all over you, before long they will walk all over you, because in their interactions with you they are repeatedly testing the limits, ever so subtly. We push on the limits of what’s allowable, what’s customary, what’s moral, and when we get push-back we retreat—slightly. Customs have to survive this continual testing of their limits. Any custom that fails the constant testing will be quickly violated and then forgotten. So the customs that have survived the constant testing that we put them through, are tough little critters that don’t roll over easily. We kick customs to see whether they kick back. Children kick hard, they violate custom wildly, so it’s a good thing that adults coddle them. An adult that kicked custom as hard as kids kick it would wind up in jail or dead.
Custom is “really” nothing other than other humans kicking back when we kick them. When we kick custom, we’re kicking other humans, and they kick back. Custom is an equilibrium, a kind of general truce, a set of limits on behavior that everyone observes and everyone enforces. Morality is an aspect of this equilibrium. It is, I think, the more serious, important bits of custom, the customary limits on behavior where we kick back really hard, or stab, or shoot, if those limits are violated.
Anyway, even though custom is “really” made out of people, the regularities that we discover in custom are impersonal. One person’s limits are pretty much another person’s limits. So custom, though at root personal, is also impersonal, in the “it’s not personal, it’s just business” sense of the movie mobster. So we discover regularities when we test custom—much as we discover regularities when we test physical reality.
Yes, but we’ve already determined that we don’t disagree—unless you think we still do? I was arguing against observing objective (i.e. externally existing) morality. I suspect that you disagree more with Eugine_Nier.
Which is true, and explains why it is a harder problem than physics, and less progress has been made.
I’m not sure I accept either of those claims, explanation or no.
Do you sincerely believe there is no difference? If not: why not start by introspecting on your own thinking on the subject?
Again, we come to this issue of not having a precise definition of “right” and “wrong”.
You’re dodging the questions that I asked.
I am not dodging them. I am arguing that they are inappropriate to the domain, and that not all definitions have to work that way.
But you already have determined that one of them is accurate, right?
Everything is inaccurate for some value of accurate. The point is you can’t arrive at an accurate definition without a good theory, and you can’t arrive at a good theory without an (inevitably inaccurate) definition.
It’s a problem to assert that you’ve determined which of A and B is accurate, but that there isn’t a way to determine which of A and B is accurate.
Edited to clarify: When I wrote this, the parent post started with the line “You say that like it’s a problem.”
I haven’t asserted that any definition of “Morality” can jump through the hoops set up by NMJ and co., but there is a widely used definition that is no more inaccurate than Ordinary Language definitions usually are.
The question in this thread was not “define Morality” but “explain how you determine which of “Killing innocent people is wrong barring extenuating circumstances” and “Killing innocent people is right barring extenuating circumstances” is morally right.”
(For people with other definitions of morality and / or other criteria for “rightness” besides morality, there may be other methods.)
The question was rather unhelpfully framed in Jublowskian terms of “observable consequences”. I think killing people is wrong because I don’t want to be killed, and I don’t want to Act on a Maxim I Would Not Wish to be Universal Law.
My name is getting all sorts of U’s and W’s these days.
If there was a person who decided they did want to be killed, would killing become “right”?
Does he want everyone to die? Does he want to kill them against their wishes? Are multiple agents going to converge on that opinion?
What are the answers under each of those possible conditions (or, at least, the interesting ones)?
Why do you need me to tell you? Under normal circumstances the normal “murder is wrong” answer will obtain—that’s the point.
Because I’m trying to have a discussion with you about your beliefs?
Looking at this I find it hard to avoid concluding that you’re not interested in a productive discussion—you asked a question about how to answer a question, got an answer, and refused to answer it anyway. Let me know if you wish to discuss with me as allies instead of enemies, but until and unless you do I’m going to have to bow out of talking with you on this topic.
I believe murder is wrong. I believe you can figure that out if you don’t know it. The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions. The point of asking questions is to demonstrate that it is possible to reason about morality: if someone answers the questions, they are doing the reasoning.
This seems problematic. If that’s the case, then your ethical system exists solely to support the bottom line. That’s just rationalizing, not actual thinking. Moreover, it doesn’t tell you anything helpful when people have conflicting intuitions or when you don’t have any strong intuition, and those are the generally interesting cases.
A system that could support any conclusion would be useless, and a system that couldn’t support the strongest and most common intuitions would be pretty incredible. A system that doesn’t suffer from quodlibet isn’t going to support both of a pair of contradictory intuitions. And that’s pretty well the only way of resolving such issues. The rightness and wrongness of feelings can’t help.
So to make sure I understand, you are trying to make a system that agrees and supports with all your intuitions and you hope that the system will then give unambiguous answers where you don’t have intuitions?
I don’t think you realize how frequently our intuitions clash, not just the intuitions of different people but even one’s own intuitions (for most people, at least). Consider, for example, train car problems. Most people, whether or not they would pull the lever or push the fat person, feel some intuition for either solution. And train problems are far from the only example of a moral dilemma that causes that sort of issue. Many mundane, real-life situations, such as abortion, euthanasia, animal testing, and the limits of consent, cause serious clashes of intuitions.
I want a system that supports core intuitions. A consistent system can help to disambiguate intuitions.
And how do you decide which intuitions are “core intuitions”?
There’s a high degree of agreement about them. They seem particularly clear to me.
Can you give some of those? I’d be curious what such a list would look like.
eg., Murder, stealing
So what makes an intuition a core intuition and how did you determine that your intuitions about murder and stealing are core?
That’s a pretty short list.
In this post: “How do you determine which one is accurate?”
In your response further down the thread: “I am not dodging [that question]. I am arguing that [it is] inappropriate to the domain [...]”
And then my post: “But you already have determined that one of them is accurate, right?”
That question was not one phrased in the way you object to, and yet you still haven’t answered it.
Though, at this point it seems one can infer (from the parent post) that the answer is something like “I reason about which principle is more beneficial to me.”
Any belief you have about the nature of reality, that does not inform your anticipations in any way, is meaningless. It’s like believing in a god which can never be discovered. Good for you, but if the universe will play out exactly the same as if it wasn’t there, why should I care?
Furthermore, why posit the existence of such a thing at all?
On a tangent—I think the subjectivist flavor of that is unfortunate. You’re echoing Eliezer’s Making Beliefs Pay Rent, but the anticipations that he’s talking about are “anticipations of sensory experience”. Ultimately, we are subject to natural selection, so maybe a more important rent to pay than anticipation of sensory experiences, is not being removed from the gene pool. So we might instead say, “any belief you have about the nature of reality, that does not improve your chances of survival in any way, is meaningless.”
Elsewhere, in his article on Newcomb’s paradox, Eliezer says:
Survival is ultimate victory.
I don’t generally disagree with anything you wrote. Perhaps we miscommunicated.
I think that would depend on how one uses “meaningless” but I appreciate wholeheartedly the sentiment that a rational agent wins, with the caveat that winning can mean something very different for various agents.
Moral beliefs aren’t beliefs about moral facts out there in reality, they are beliefs about what I should do next. “What should I do” is an orthogonal question to “what can I expect if I do X”. Since I can reason morally, I am hardly positing anything without warrant.
You just bundled up the whole issue, shoved it inside the word “should” and acted like it had been resolved.
I have stated several times that the whole issue has not been resolved. All I’m doing at the moment is refuting your over-hasty generalisation that:
“morality doesn’t work like empirical prediction, so ditch the whole thing”.
It doesn’t work like the empiricism you are used to because it is, in broad brush strokes, a different thing that solves a different problem.
Can you recognize that from my position it doesn’t work like the empiricism I’m used to because it’s almost entirely nonsensical appeals to nothing, arguing by definitions, and the exercising of the blind muscles of eld philosophy?
I am unpersuaded that there exists a set of correct preferences. You have, as far as I can see, made no effort to persuade me, but rather just repeatedly asserted that there are and asked me questions in terms that you refuse to define. I am not sure what you want from me in this case.
Why should I accept your bald assertions here?
You may be entirely of the opinion that it is all stuff and nonsense: I am only interested in what can be rationally argued.
I don’t think you think it works like empiricism. I think you have tried to make it work like empiricism and then given up. “I have a hammer in my hand, and it won’t work on this ‘screw’ of yours, so you should discard it”.
People can and do reason about what preferences they should have, and such reasoning can be as objective as mathematical reasoning, without the need for a special arena of objects.
What is weasel-like with “near the surface of the earth”?
In this context, it’s as “weasel-like” as “innocent”. In the sense that both are fudge factors you need to add to the otherwise elegant statement to make it true.
What would it take to convince you your example is wrong?
Note how “2+2=4” has observable consequences:
Does your example (or another you care to come up with) have observable consequences?
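For instance, one trivial kind of observable consequence (my own toy illustration, not whatever originally followed the colon above): put two objects next to two objects and count what you get.

```python
# If counting the combined pile ever reliably gave 3 or 5, "2+2=4" would be in trouble.
left = ["pebble", "pebble"]
right = ["pebble", "pebble"]
assert len(left + right) == 4
```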
I don’t think you can explicate such a connection, especially not without any terms defined. In fact, it is just utterly pointless to try to develop a theory in a field that hasn’t even been defined in a coherent way. It’s not like it’s close to being defined, either.
For example, “Is abortion morally wrong?” combines about a dozen possible questions, because it has at least that many interpretations. Choose one, then we can study that. I just can’t see how otherwise rationality-oriented people can put up with such extreme vagueness. There is almost zero actual communication happening in this thread in the sense of actually expressing which interpretation of moral language anyone is taking. And once that starts happening it will cover way too many topics to ever reach a resolution. We’re simply going to have to stop compressing all these disparate-but-subtly-related concepts into a single field, taboo all the moralist language, and hug some queries (if any important ones actually remain).
In any science I can think of people began developing it using intuitive notions, only being able to come up with definitions after substantial progress had been made.
You can assume that the words have no specific meaning and are used to signal membership in a group. This explains why the flowchart in the original post has so many endpoints about what morality might mean. It explains why there seems to be no universal consensus on what specific actions are moral and which ones are not. It also explains why people have such strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.
One could make the same claim about words like “exists”/“true”/“false”, especially if our knowledge of science were at the same state as our knowledge of ethics.
Just as the words “exists”/“true”/“false” had a meaning even before the development of science and Bayesianism, even though a lot of people used them to signal group affiliation, I believe the words “should”/“right”/“wrong” have a meaning even though a lot of people use them to signal group affiliation.
But science isn’t about words like “exist”, “true”, or “false”. Science is about claims like “Frozen water is less dense than liquid water”. I can point at frozen water, liquid water, and a particular instance of the former floating on the latter. Scientific claims were well-defined even before there was enough knowledge to evaluate them. I can’t point at anything for claims about morality, so the analogy between ethics and science is not valid.
Come on people. Argument by analogy doesn’t prove anything even when the analogies are valid! Stop it.
If you don’t like the hypothesis that words like “should”, “right”, and “wrong” are social signaling, give some other explanation of the evidence that is simpler. The evidence in question is:
The flowchart in the original post has many endpoints about what morality might mean.
There seems to be no universal consensus on what specific actions are moral and which ones are not.
People have strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.
You can’t point at anything for claims about pure maths either. That something is not empirical does not automatically invalidate it.
Morality is not just social signalling, because it makes sense to say some social signals (“I am higher status than you because I have more slaves”) are morally wrong.
That conclusion does not follow. Saying you have slaves is a signal about morality and, depending on the audience, often a bad signal.
Note that there is a difference between “morality is about signalling” and “signalling is about morality”. If I say “I am high status because I live a moral life” I am blatantly using morality to signal, but it doesn’t remotely follow that there is nothing to morality except signalling. It could be argued that, morally speaking, I should pursue morality for its own sake and not to gain status.
That sounds like an effective signal to send—and a common one.
Noted yet not especially relevant to my comment.
Only because the force of the word “exists” is implicit in the indicative mood of the word “is”.
But they can help explain what people mean, and they can show that an argument proves too much.
I could draw an equally complicated flow chart about what “truth” and “exists”/“is” might mean.
The amount of consensus is roughly the same as the amount of consensus there was before the development of science about which statements are true and which aren’t.
People had strong opinions about truth before the concept of empirical validation was developed.
Your criticisms of “truth” are not so far off, but you’re essentially saying that parts of science are wrong, so you can be wrong too. No, actually: you think it is OK to flounder around in a field when you’re just starting out. Sure, but not when you don’t even know what it is you’re supposed to be studying—if anything! This is not analogous to physics, where the general goal was clear from the very beginning: figure out what physical mechanisms underlie macro-scale phenomena, such as the hardness of metal, conductivity, magnetic attraction, gravity, etc.
You’re just running around to whatever you can grab onto to avoid the main point that there is nothing close to a semblance of delineation of what this “field” is actually about, and it is getting tiresome.
I think the claim that ethicists don’t know at all what they are studying is unfounded.
I believe this is hindsight bias.
Ugg in 65,000 BC: Why water fire no mix? Why rock so hard? Why tree have shadow?
Eugine in 2011: What is the True Theory of Something-or-Other?
True.
That is sort of half true, but it feels like you’re just saying that to say it, as there have been criticisms of this same line of reasoning that you haven’t answered.
How about the fact that beliefs about physics actually pay rent? Do moral ones?
Good point. And no, they don’t—unless you need rent to tell you about people’s preferences.
Not in the sense of anticipated experience, however they do inform our actions.
No, the reductionist description of the Correct Theory of Physics eventually involves pointing at lab equipment. There is no lab equipment for morality, so the analogy is not valid.
I could point a gun to your head and ask you to explain why I shouldn’t pull the trigger.
That scenario doesn’t lead to discovering the truth. If I deceive you with bullshit and you don’t pull the trigger, that’s a victory for me. I invite you to try again, but next time pick an example where the participants are incentivised to make true statements.
ETA: …unless the truth we care about is just which flavors of bullshit will persuade you not to pull the trigger. If that’s what you mean by morality, you probably agree with me that it is just social signaling.
Well you could just as easily use your lab equipment to deceive me with bullshit.
And if he gave a true moral argument you would have to accept it?
How would you distinguish a true argument from a merely persuasive one?
Like I mentioned elsewhere in this thread, the “No Universally Compelling Argument” post you cite applies equally well to physical and even mathematical facts (in fact, that was what Eliezer was mainly referring to in that post).
In fact, the main point of that sequence is that just because there are no universally compelling arguments doesn’t mean truth doesn’t exist. As Eliezer mentions in where recursive justification hits bottom:
A formal proof is still a proof though, although nothing mandates that a listener must accept it. A mind can very well contain an absolute dismissal mechanism or optimize for something other than correctness.
We can understand what sort of assumptions we’re making when we derive information from mathematical axioms, or the axioms of induction, and how further information follows from that. But what assumptions are we making that would allow us to extrapolate absolute moral facts? Does our process give us any way to distinguish them from preferences?
That morality is not straightforwardly empirical is part of why it is inappropriate to demand concrete definitions.
Do you believe in God? If I defended the notion of God in a similar way—it is not straightforwardly empirical, it’s inappropriate to demand concrete definitions, it’s not under the domain of science, just because you can’t define it and measure it doesn’t mean it doesn’t exist—would you find that persuasive?
But I am only defending the idea that morality means something. Atheists think “God” means something. “uncountable set” means something even if the idea is thoroughly non-concrete.
Sure, but few-to-no atheists would say something like “‘God’ means something, but exactly what is an open problem.”
The idea of someone refusing to say what they mean by “uncountable set” is even stranger.
All atheists have to adopt a broad definition of God, or else they would only be disbelieving in the Seventh-day Adventist God, or whatever... i.e. they would believe in all deities except one, which is more than the average believer.
This gets silly.
“Do you believe in woojits?” Well, no, I don’t.
“Ah, well, if you disbelieve in woojits, then you must know what woojits are! So, what are woojits?” I have no idea.
“But how is that possible? If you don’t have a definition for woojits, on what basis do you reject belief in them?” Having a well-defined notion of something is a prerequisite for belief in it; I don’t have a well-defined notion of woojits; therefore I don’t believe in woojits.
“No, no. You’re confused. All woojit-disbelievers have to adopt a broad definition of woojits in order to disbelieve in them; otherwise they would merely disbelieve in a specific woojit.” (shrug) OK, if you like, I have a broad definition of woojit… so broad, in fact, that it is effectively identical to my definition of all the other concepts I don’t believe in and haven’t thought about, which is the overwhelming majority of all possible concepts. For my part, I consider this equivalent to not having a definition of woojit at all.
As I say, this gets silly. It’s just arguing about definitions of words.
Now, I would agree that atheists who grow up in theist cultures do have a definition of God, though I disagree with you that it’s necessarily broad: I know at least one atheist who was raised Roman Catholic, for example, and the god he disbelieves in is the Roman Catholic god of his youth, and the idea that “God” might conceivably refer to anything else just doesn’t have a lot of meaning to him.
If you don’t know what woojits are, you shouldn’t jump to the conclusion that you don’t believe in them. That is a mistake of rationality.
If your RC has concluded that he is an atheist without even considering other gods, that is a mistake of rationality too.
But earlier you indicated that asking what a woojit is requires accepting the notion of woojits as coherent.
No, I said that asking about the nature of moral claims means “moral” has some prima facie meaning. “woojit” is a made up word with no prima facie meaning. Not analogous.
Replace woojit then with boojum and the point still goes through.
It doesn’t still go through, since it didn’t go through in the first place. It’s a concrete fact that you can look up “moral” in a dictionary, for all that what you read isn’t very useful.
How is that relevant? I don’t see why the presence in a dictionary matters. But even if it did, boojum is in some dictionaries and encyclopedia too. It is a type of snark.
It’s only in some, and not all, dictionaries because it is a made-up word that is supposed to be ill-defined and puzzling. Some lexicographers feel that readers need to be advised that when they encounter this word, it is being used to flag “here is something strange and meaningless”.
So what matters then is if all dictionaries have it? Why does that matter? Does this mean we couldn’t have this discussion before dictionaries were invented? Did the nature of morality change with the invention of a dictionary? Moreover, if one got every dictionary to include “boojum” and “snark” would that then make it different?
If a word is defined in all dictionaries, then the claim that it is completely meaningless is extraordinary and poorly motivated. Dictionaries are of course only significant because they make usage concrete.
The claim was about incoherence, not whether it was “completely meaningless”, and I fail to see how motivation is relevant, or how you get anything about a claim being poorly motivated from this. If you prefer a different analogy, consider such terms as transubstantiation, consubstantiation, homoousion, hypostatic union, kerygma and modalism. Similarly, in a Hebrew dictionary you will find all ten Sephirot defined (Keter, Chochmah, etc.). Is it extraordinary and poorly motivated to say that these kabbalistic terms are incoherent?
The point about motivation is about where burdens lie.
The discussion so far has been about the accusation that somebody somewhere is culpably refusing to define “morality”. This is the first mention of incoherence.
“incoherent” is often used as a loose synonym for “I don’t like it”. That is not a useful form of argument. The examples of “incoherent” concepts you gave are a mixed bag of concepts ranging from the well defined but false, to the well defined but ungrounded, to the ill defined. If you want to say what specific kind of incoherence “morality” has IYO, feel free.
How are motivations relevant to where burdens lie?
Really? So, what about here?
You seem confused about what CuSithBell is arguing. The argument is not that morality is fundamentally incoherent or meaningless, but that most definitions of it fall into those categories and that our common intuition is not sufficient to have useful discussions about it, so you need to supply a definition for what you mean. So far, you seem to have refused to do that. Do you see the distinction?
I’m not really sure what a “mistake of rationality” is, or how it differs from simply being mistaken about something.
That said, I would agree with you that my Roman Catholic atheist friend is not arriving at his atheism in a particularly rational way.
WRT woojits, I’m not jumping to any conclusions: I arrived at that conclusion step-by-step. Again: “Having a well-defined notion of something is a prerequisite for belief in it; I don’t have a well-defined notion of woojits; therefore I don’t believe in woojits.” You’re free to disagree with any part of that or all of it, but I’d prefer you didn’t simply ignore it.
A mistake of rationality is quite different from a perceptual error, for instance. It’s even different to being wrong, since one can be right for irrational reasons.
I disagree. I believe in consciousness, but don’t have a well defined notion of it.
On the one hand, “woojit” might be intended as a synonym for something you do believe in. On the other hand, if it is meaningless, “woojits don’t exist” is meaningless. Either way, you should not conclude that woojits don’t exist because you don’t know what they are.
Agreed.
This doesn’t strike you as being a problem?
I probably don’t understand what you mean.
I think that it’s easy to be an atheist—i.e. one doesn’t have to make any difficult definitions or arguments to arrive at atheism, and those easy definitions and arguments are correct. If you think it’s harder than I do, that would be interesting and could explain why we have such different opinions here.
Fine. Then the atheist who doesn’t have a difficult definition of God, isn’t culpably refusing to explain her “new idea”, and someone who thinks there is something to be said about morality can stick with the vanilla definition that morality is Right and Wrong and Such.
A correct theory of physics would inform my anticipations.
Please, taboo “anticipations”.
Replace anticipations with:
My ability, as a mind (subjective observer), to construct an isomorphism in memory that corresponds to future experiences.
What’s an “isomorphism in memory”? What are “future experiences”? And what does it mean for them to “correspond”?
I would be happy to continue down this line a ways longer if you would like, and we could get all the way down to the two of us in the same physical location rebuilding the concept of induction. I am confident that if necessary we could do that for “anticipations” and build our way back up. I am not confident that “morality” as it has been used here actually connects to any solid surface in reality, unless it ends up meaning the same thing as “preferences”.
Do you disagree?
In that case maybe we should continue a bit longer until you’re disabused of that belief. What I suspect will happen is that you’ll continue to attempt to define your words in terms of more and more tenuous abstractions until the words you’re using really are almost meaningless.
I think “X is what the correct theory of X says” is true for all X. The Correct Theory can say “Nothing”, of course.
5, 8, 9, and so on.
Just explain what you mean, already. Otherwise, I’ve got better things to do.
I understand English. Please proceed. (I can’t speak for the other participants, but I infer that they understand English as well.)
Some of them claim not to understand some common words. If that stretches to “define” and “mean”, etc., the explanatory effort will be wasted.
Why not try this: imagine an inquisitive nine-year-old asked you what you meant by “morality”; such a nine-year-old might not know what “define” means, but I expect you wouldn’t refuse to explain morality on those grounds.
I would only have to point to the distinction between Good Things and Naughty Things which all children have drummed into them from a much earlier age. That is what makes the claim not to have an OL understanding of morality so unlikely.
Imagine your nine-year-old interlocutor pointing out that not all children have the same Good Things and Naughty Things drummed into them.
So? You seem to think I am arguing for one particular theory.
Because of the above, I think you are making a claim that a singular Correct Theory of Morality exists. How would you explain that to a nine-year-old? That’s the discussion we could be having.
You continue to misrepresent my position.
I have not been offering one.
I have been requesting one.
I don’t see any substantive, real world connection to words like “good” or “moral” in this context. I am assuming you do mean something real by them, and I am asking you to convey that meaning by using simpler words that we both already understand in concrete terms.
And I think you are as capable as anyone else of seeing the ordinary meanings of these terms. There is no guarantee that they are definable in simpler terms or in concrete terms, since it is likely that some concepts are basic or abstract. You have an unusual inability to understand these terms, and an unlikely background theory of meaning. I think those two facts are connected.
I think you will find my thoughts on this matter are relatively common in this community.
But not in the wider world.