Yes, well opinions also anticipate observations. But in a sense by talking about “observable consequences” you’re taking advantage of the fact that the meta-theory of science is currently much more developed than the meta-theory of ethics.
But some preferences can be moral, just as some opinions can be true. There is no automatic entailment from “it is a preference” to “it has nothing to do with ethics”.
Currently, intuition. Along with the existing moral theories, such as they are.
Similar to the way people determined facts about physics, especially facts beyond the direct observation of their senses, before the scientific method was developed.
Right, and ‘facts’ about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of… intuitions.
You can’t really argue that objective morality not being well-defined means that it is more likely to be a coherent notion.
Mostly by outside view analogy with the history of the development of science. I’ve read a number of ancient Greek and Roman philosophers (along with a few post-modernists) arguing against the possibility of a coherent theory of physics using arguments very similar to the ones people are using against morality.
I’ve also read a (much larger) number of philosophers trying to shoehorn what we today call science into using the only meta-theory then available in a semi-coherent state: the meta-theory of mathematics. Thus we see philosophers, Descartes being the most famous, trying and failing to study science by starting with a set of intuitively obvious axioms and attempting to derive physical statements from them.
I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.
As for “likely”: I’m not sure how likely this is; I just think it’s more likely than a lot of people on this thread assume.
To be clear—you are talking about morality as something externally existing, some ‘facts’ that exist in the world and dictate what you should do, as opposed to a human system of don’t be a jerk. Is that an accurate portrayal?
If that is the case, there are two big questions that immediately come to mind (beyond “what are these facts” and “where did they come from”) - first, it seems that Moral Facts would have to interact with the world in some way in order for the study of big-M Morality to be useful at all (otherwise we could never learn what they are), or they would have to be somehow deducible from first principles. Are you supposing that they somehow directly induce intuitions in people (though, not all people? so, people with certain biological characteristics?)? (By (possibly humorous, though not mocking!) analogy, suppose the Moral Facts were being broadcast by radio towers on the moon, in which case they would be inaccessible until the invention of radio. The first radio is turned on and all signals are drowned out by “DON’T BE A JERK. THIS MESSAGE WILL REPEAT. DON’T BE A JERK. THIS MESSAGE WILL...”.)
The other question is, once we have ascertained that there are Moral Facts, what property makes them what we should do? For instance, suppose that all protons were inscribed in tiny calligraphy in, say, French, “La dernière personne qui est vivant, gagne.” (“The last person who is alive, wins”—apologies for Google Translate) Beyond being really freaky, what would give that commandment force to convince you to follow it? What could it even mean for something to be inherently what you should do?
It seems, ultimately, you have to ask “why” you should do “what you should do”. Common answers include that you should do “what God commands” because “that’s inherently What You Should Do, it is By Definition Good and Right”. Or, “don’t be a jerk” because “I’ll stop hanging out with you”. Or, “what makes you happy and fulfilled, including the part of you that desires to be kind and generous” because “the subjective experience of sentient beings are the only things we’ve actually observed to be Good or Bad so far”.
The distinction I am trying to make is between Moral Facts Engraved Into The Foundation Of The Universe and A Bunch Of Words And Behaviors And Attitudes That People Have (as a result of evolution & thinking about stuff etc.). I’m not sure if I’m being clear, is this description easier to interpret?
I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.
If that is true, what virtue do moral facts have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
If that is true, what virtue do moral facts have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
If I knew the answer we wouldn’t be having this discussion.
Define your terms, then you get a fair hearing. If you are just saying the terms could maybe someday be defined, this really isn’t the kind of thing that needs a response.
To put it in perspective, you are speculating that someday you will be able to define what the field you are talking about even is. And your best defense is that some people have made questionable arguments against this non-theory? Why should anyone care?
used in auxiliary function to express obligation, propriety, or expediency
As for obligation—I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don’t really see how an ordinary person could be all that puzzled about what his obligations are.
As for propriety—over and above your obligation to avoid uncontroversially nasty behavior, I doubt you have much trouble discovering what’s socially acceptable (stuff like, not farting in an elevator), and anyway, it’s not the end of the world if you offend somebody. Again, I don’t really see how an ordinary person is going to have a problem.
As for expediency—I doubt you intended the question that way.
If this doesn’t answer your question in full you probably need to explain the question. The utilitarians have this strange notion that morality is about maximizing global utility, so of course, morality in the way that they conceive it is a kind of life-encompassing total program of action, since every choice you make could either increase or decrease total utility. Maybe that’s what you want answered, i.e., what’s the best possible thing you could be doing.
But the “should” of obligation is not like this. We have certain obligations but these are fairly limited, and don’t provide us with a life-encompassing program of action. And the “should” of propriety is not like this either. People just don’t pay you any attention as long as you don’t get in their face too much, so again, the direction you get from this quarter is limited.
As for obligation—I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don’t really see how an ordinary person could be all that puzzled about what his obligations are.
You have collapsed several meanings of obligation together there. You may have explicit legal obligations to the state, and IOU-style obligations to individuals who have done you a favour, and so on. But moral obligations go beyond all those. If you are living in a brutal dictatorship, there are conceivable circumstances where you morally should not obey the law. Etc, etc.
If the people arguing that morality is just preference answer: “Do what you prefer”, my next question is “What should I prefer?”
In order to accomplish what?
Should you prefer chocolate ice cream or vanilla? As far as ice cream flavors go, “What should I prefer” seems meaningless...unless you are looking for an answer like, “It’s better to cultivate a preference for vanilla because it is slightly healthier” (you will thereby achieve better health than if you let yourself keep on preferring chocolate).
This gets into the time structure of experience. In other words, I would be interpreting your, “What should I prefer?” as, “What things should I learn to like (in order to get more enjoyment out of life)?” To bring it to a more traditionally moral issue, “Should I learn to like a vegetarian diet (in order to feel less guilt about killing animals)?”
Is that more or less the kind of question you want to answer?
This might have clarified for me what this dispute is about. At least I have a hypothesis, tell me if I’m on the wrong track.
Antirealists aren’t arguing that you should go on a hedonic rampage—we are allowed to keep on consulting our consciences to determine the answer to “what should I prefer.” In a community of decent and mentally healthy people we should flourish. But the main upshot of the antirealist position is that you cannot convince people with radically different backgrounds that their preferences are immoral and should be changed, even in principle.
At least, antirealism gives some support to this cynical point of view, and it’s this point of view that you are most interested in attacking. Am I right?
The other problem is that anti-realists don’t actually answer the question “what should I do?”, they merely pass the buck to the part of my brain responsible for my preferences but don’t give it any guidance on how to answer that question.
Talk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a “what color is the sky?” type of conversation.
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
Something more objective/universal.
Edit: a more serious issue is that, just as equating facts with opinions tells you nothing about what opinions you should hold, equating morality and preference tells you nothing about what you should prefer.
So we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences—obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I’d rather select a different social milieu, myself.
I recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape.
I now have a strong preference for losing weight, and reinforced my preference for exercise, because the gains from both activities went up significantly. This also resulted in having a much lower preference for certain types of food, as they’re contrary to these new preferences.
I’d think that’s a pretty concrete example of changing my preferences, unless we’re using different definitions of “preference.”
I suppose we are using different definitions of “preference”. I’m using it as a friendly term for a person’s utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can’t be understood that way. For example, what you’re calling food preferences are what I’d call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.
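A minimal sketch of the strategy-versus-preference distinction being drawn here; the preference, foods, and scores are invented purely for illustration:

```python
# A fixed preference (the thing the "utility function" cares about) plus
# beliefs about consequences yields a strategy. Change the consequences and
# the strategy changes while the preference stays the same.

PREFERENCE = "be fit enough for SCUBA diving"

consequences_now = {"salad": +1, "cheesecake": -1}       # effect on fitness
consequences_magical = {"salad": -1, "cheesecake": +1}   # consequences "magically changed"

def diet_strategy(consequences):
    # Pick whichever food best serves the fixed preference above.
    return max(consequences, key=consequences.get)

print(diet_strategy(consequences_now))      # salad
print(diet_strategy(consequences_magical))  # cheesecake -- new strategy, same preference
```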
Ahh, I re-read the thread with this understanding, and was struck by this:
I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)
Does that make sense as a “motivation for wanting to change your preferences”?
I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.
My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone’s preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I’m screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don’t go implementing it. Please.
In general, if the FAI is going to give “your preference” to you, your preference had better be something stable about you that you’ll still want when you get it.
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it. I certainly would want an akrasia fix if it were available. Maybe that’s the important preference.
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it.
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
At the end of the day, you’re going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way) so you can figure out which immediate outcome has the best expected long-term utility and predict the person is going to take an action that gets them there.
I don’t think very many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I’d agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model it as conflicting desires and a conflict resolution function layered on top, but I think we both agree on the actual result, which is that people aren’t optimizing for a single clear goal like “happiness” or “lifetime income”.
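A toy sketch of the model both of these comments gesture at: several conflicting utility functions with a conflict-resolution layer (here a simple weighted sum) on top, used to predict the compromise action. The actions, weights, and scores are made up for illustration only:

```python
# Several conflicting "utility functions" plus a conflict-resolution layer
# that combines them and picks the action with the best combined score.

def u_comfort(action):
    return {"procrastinate": 0.8, "exercise": 0.2, "work": 0.3}[action]

def u_income(action):
    return {"procrastinate": 0.1, "exercise": 0.2, "work": 0.9}[action]

def u_health(action):
    return {"procrastinate": 0.2, "exercise": 0.9, "work": 0.4}[action]

WEIGHTS = [(u_comfort, 0.2), (u_income, 0.4), (u_health, 0.4)]

def combined_utility(action):
    # The conflict-resolution step: here a weighted sum, but any rule would do.
    return sum(weight * u(action) for u, weight in WEIGHTS)

def predicted_action(actions):
    return max(actions, key=combined_utility)

print(predicted_action(["procrastinate", "exercise", "work"]))  # "work" with these weights
```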
predict the person
Prediction seems to run in to the issue that utility evaluations change over time. I used to place a high utility value on sweets, now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. So on.
It strikes me as being rather like weather: You can predict general patterns, and even manage a decent 5-day forecast, but you’re going to have a lot of trouble making specific long-term predictions.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences
There isn’t an instrumental motive for changing one’s preferences. That doesn’t add up to “never change your preferences” unless you assume that instrumentality (“does it help me achieve anything”) is the ultimate way of evaluating things. But it isn’t: morality is.
It is morally wrong to design better gas chambers.
The interesting question is still the one you didn’t answer yet:
If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.
The unlikely answer is “I wouldn’t do anything different”. Then I’d reply “So, morality makes no practical difference to your behavior?”, and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior, if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”. I am in the same situation in that when I’m willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.
If I guessed your answer wrong, please correct me. Otherwise I’d want to fix the vocabulary problem somehow. I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts, perhaps an “amoral preference” which would mean what you were calling “preference” before, and “moral preference” would include what you were calling “morality” before, but perhaps we’d choose different words if you objected to those. The next question would be:
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
...and I have no clue what your answer would be, so I can’t continue the conversation past that point without straightforward answers from you.
If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
Follow morality.
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
One way to illustrate this distinction is using Eliezer’s “murder pill”. If you were offered a pill that would reverse and/or eliminate a preference, would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered, the answer is probably no.
One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high level moral preferences than with low level amoral preferences.
One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high level moral preferences than with low level amoral preferences.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we’re talking about, which hasn’t happened.
One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high level moral preferences than with low level amoral preferences.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
That’s not a definition of morality but an explanation of one reason why the “murder pill” distinction is important.
...the way human brains are designed, thinking about your preferences can cause them to change.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down
Are you saying there has never been any valid moral discourse or persuasion?
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
There’s a difference between changing your mind because a discussion led you to bound your rationality differently, and changing your mind because of suggestibility and other forms of sloppy thinking. Logic and mathematics are the former, if done right. I haven’t seen much non-sloppy thinking on the subject of changing preferences.
I suppose there could be such a thing—Joe designed an elegant high-throughput gas chamber, he wants to show the design to his friends, someone tells Joe that this could be used for mass murder, Joe hadn’t thought that the design might actually be used, so he hides his design somewhere so it won’t be used. But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
Are you saying there has never been any valid moral discourse or persuasion?
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
But there are other stories where the preference itself changes. “If you approve of women’s rights, you should approve of gay rights”.
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
“If you approve of women’s rights, you should approve of gay rights”.
IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights. Even if your argument above worked, I can’t envision a plausible reasoning system in which the argument is valid. Can you offer one? Otherwise, it only worked because the listener was confused, and we’re back to morality being a special case of psychology again.
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
Because I don’t know how to do moral arguments better. So far as I can tell, they always seem to wind up either being wrong, or not being moral arguments.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior,
But preference itself is influenced by reasoning and experience. The Preference theory focuses on proximate causes, but there are more distal ones too.
if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”
I am not and never was using “preference” to mean something disjoint from morality.
If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences. That is not an argument for nihilism or relativism. You could have an epistemology where everything is talked about as belief, and the difference between true belief and false belief is ignored.
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can’t do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn’t “made” important by some greater good.
I am not and never was using “preference” to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences.
There’s a choice you’re making here, differently from me, and I’d like to get clear on what that choice is and understand why we’re making it differently.
I have a bunch of things I prefer. I’d rather eat strawberry ice cream than vanilla, and I’d rather not design higher-throughput gas chambers. For me those two preferences are similar in kind—they’re stuff I prefer and that’s all there is to be said about it.
You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.
I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it’s even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That’s not what I’m talking about. What I’m talking about is the question of why the distinction is important to you. For example, I could define the preferences of mine that can be easily described without using the letter “s” to be “blort” preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you’d be left wondering “Why does he care?”
And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless. A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it.
The decision to use the concept of “morality” is like any other decision to define and use a concept. We should use it if the cost of talking about it is paid for by the added clarity it brings. If we don’t use the concept, that doesn’t change whether anyone wants to build high-throughput gas chambers—it just means that we don’t have the tools to talk about the difference in kind between ice cream flavor preferences and gas chamber building preferences. If there’s no use for such talk, then we should discard the concept, and if there is a use for such talk, we should keep the concept and try to assign a useful and clear meaning to it.
So what use is the concept of morality? How do people benefit from regarding ice cream flavor preferences as a different sort of thing from gas chamber building preferences?
Morality is already, in itself, the most important value.
I hope we’re agreed that there are two different kinds of things here—the strongly held preference to not design high-throughput gas chambers is a different kind of thing from the decision to label that preference as a moral one. The former influences the options available to a well-organized mass murderer, and the latter determines the structure of conversations like this one. The former is a value, the latter is a choice about how words label things. I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
You’ve written quite a lot of words but you’re still stuck on the idea that all importance is instrumental importance, importance for something that doesn’t need to be important in itself. You should care about morality because it is a value and values are definitionally what is important and what should be cared about. If you suddenly started liking vanilla, nothing important would change. You wouldn’t stop being you, and your new self wouldn’t be someone your old self would hate. That wouldn’t be the case if you suddenly started liking murder or gas chambers. You don’t now like people who like those things, and you wouldn’t now want to become one.
I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
If we understand what is going on, we should make the choice correctly—that is, according to rational norms. If morality means something other than the merely pragmatic, we should not label the pragmatic as the moral. And it must mean something different, because it is an open, investigable question whether some instrumentally useful thing is also ethically good, whereas questions like “is the pragmatic useful” are trivial and tautologous.
You should care about morality because it is a value and values are definitionally what is important and what should be cared about.
You’re not getting the distinction between morality-the-concept-worth-having and morality-the-value-worth-enacting.
I’m looking for a useful definition of morality here, and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it’s strongly held, which doesn’t seem very interesting. If we’re going to have the distinction, I like Eugene’s proposal that a moral preference is one that’s worth talking about better, but we need to make the distinction in such a way that something doesn’t get promoted to being a moral preference just because people are easily deceived about it. There should be true things to say about it.
and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it’s strongly held,
But what I actually gave as a definition is that the concept of morality is the concept of ultimate value and importance. A concept which even the nihilists need so that they can express their disbelief in it. A concept which even social and cognitive scientists need so they can describe the behaviour surrounding it.
You are apparently claiming there is some important difference between a strongly held preference and something of ultimate value and importance. Seems like splitting hairs to me. Can you describe how those two things are different?
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
One can do experiments to determine whether beliefs are true, for the beliefs that matter. What can one do with a preference to figure out if it should be strongly held?
If that question has no answer, the claim that the two are similar seems indefensible.
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
So it doesn’t matter if it only affects what you will do?
If I’m thinking for the purpose of figuring out my future actions, that’s a plan, not a belief, since planning is relevant when I haven’t yet decided what to do.
I suppose beliefs about other people’s actions are empirical.
I’ve lost the relevance of this thread. Please state a purpose if you wish to continue, and if I like it, I’ll reply.
[Morality is] the ultimate way of evaluating things… It is morally wrong to design better gas chambers.
Okay, that seems clear enough that I’d rather pursue that than try to get an answer to any of my previous questions, even if all we may have accomplished here is to trade Eugene’s evasiveness for Peter’s.
If you know that morality is the ultimate way of evaluating things, and you’re able to use that to evaluate a specific thing, I hope you are aware of how you performed that evaluation process. How did you get to the conclusion that it is morally wrong to design better gas chambers?
Execution techniques have improved over the ages. A guillotine is more compassionate than an axe, for example, since with an axe the executioner might need a few strokes, and the experience for the victim is pretty bad between the first stroke and the last. Now we use injections that are meant to be painless, and perhaps they actually are. In an environment where executions are going to happen anyway, it seems compassionate to make them happen better. Are you saying gas chambers, specifically, are different somehow, or are you saying that designing the guillotine was morally wrong too and it would have been morally preferable to use an axe during the time guillotines were used?
I’m pretty sure he means to refer to high-throughput gas chambers optimized for purposes of genocide, rather than individual gas chambers designed for occasional use.
He may or may not oppose the latter, but improving the former is likely to increase the number of murders committed.
I have spent some time thinking about how to apply the ideas of Eliezer’s metaethics sequence to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here, the distinction between preferences and biases is very arbitrary.
So the question becomes how do you separate which of your intuitions are preferences and which are biases?
[H]ow do you separate which of your intuitions are preferences and which are biases?
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else.
I don’t see how that question is relevant. I don’t see any good reason for you to dodge my question about what you’d do if your preferences contradicted your morality. It’s not like it’s an unusual situation—consider the internal conflicts of a homosexual Evangelist preacher, for example.
What makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or replace short term preferences with long term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
I don’t judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn’t. It’s true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.
If it works to iron out inconsistencies, or replace short term preferences with long term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you’re in trouble, so long-term preferences can promote survival less than shorter term ones, depending on the circumstances.)
This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way and the fact that I went the wrong way isn’t evidence that I don’t want to go to the grocery store. That’s a confusing issue and I’m hoping we can assume for the purposes of discussion about morality that the people we’re talking about have true beliefs.
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
I’m not saying it’s a complete description of me. To describe how I think you’d also need a description of my possibly-false beliefs, and you’d also need to reason about uncertain knowledge of my preferences and possibly-false beliefs.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn’t a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. “I want to go north” might really be “I believe the grocery store is north of here and I want to go to the grocery store”. “I want to go to the grocery store” might be a further conflation of preference and belief, such as “I want to get some food” and “I believe I will be able to get food at the grocery store”. Eventually you can unpack all the beliefs and get the true preference, which might be “I want to eat something interesting today”.
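A minimal illustration of that unpacking, with invented names: the stable preference stays fixed, and correcting a false belief changes the chosen action without changing the preference.

```python
# "I want to go north" unpacked into a stable preference plus a (possibly
# false) belief; the belief determines the concrete action.

PREFERENCE = "eat something interesting today"

def chosen_action(preference, beliefs):
    if preference == "eat something interesting today":
        # Instrumental step, not a preference in itself.
        return "walk " + beliefs["direction_of_grocery_store"]
    return "stay home"

print(chosen_action(PREFERENCE, {"direction_of_grocery_store": "north"}))  # false belief: wrong way
print(chosen_action(PREFERENCE, {"direction_of_grocery_store": "south"}))  # corrected belief, same preference
```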
Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
It’s a term we’re defining because it’s useful, and we can define it in a way that it holds from birth forever afterward. Tim had the short-term preference dated around age 3 months to suck mommy’s breast, and Tim apparently has a preference to get clarity about what these guys mean when they talk about morality dated around age 44 years. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person’s preferences, and preferences that don’t change over time tend to be simpler, but if that’s contradicted by observation you settle for different preferences at different times.
I suppose I should have said “If a preference changes as a consequence of reasoning or reflection, it wasn’t a preference”. If the context of the statement is lost, that distinction matters.
So you are defining “preference” in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
I agree! Consider, for instance, taste in particular foods. I’d say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you’re hemi-directly altering your preferences).
Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it’s pleasurable—but I think the proper level of unpacking is “experience drinking coffee”, not “experience pleasurable sensations”, because the experience being pleasurable is what makes it a preference in this case. That’s how it seems to me, at least. Am I missing something?
and uncertainty about the future should interact with the utility function in the proper way.
“The proper way” being built in as a part of the utility function and not (necessarily) being a simple sum of the multiplication of world-state values by their probability.
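For concreteness, a small sketch contrasting the plain probability-weighted sum with one arbitrary example of a combining rule that is built into the evaluation itself; the lottery and the risk-averse rule are illustrative assumptions, not anything proposed in this thread.

```python
# Two ways uncertainty over future world-states can interact with a utility
# function: a simple expected-utility sum, and a non-sum rule where the
# combining step is part of how the agent evaluates outcomes.

lottery = [(0.5, 100.0), (0.5, 0.0)]  # (probability, utility of that world-state)

def expected_utility(lottery):
    # The "simple sum of world-state values multiplied by their probability".
    return sum(p * u for p, u in lottery)

def risk_averse_value(lottery, caution=0.5):
    # One arbitrary alternative: blend the expectation with the worst case.
    worst = min(u for _, u in lottery)
    return (1 - caution) * expected_utility(lottery) + caution * worst

print(expected_utility(lottery))   # 50.0
print(risk_averse_value(lottery))  # 25.0 -- same lottery, different evaluation rule
```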
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way.
Um, no. Unless you are some kind of mutant who doesn’t suffer from scope insensitivity or any of the related biases, your uncertainty about the future doesn’t interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary.
Here is the example PhilGoetz gives in the article I linked above:
In Crime and punishment, I argued that people want to punish criminals, even if there is a painless, less-costly way to prevent crime. This means that people value punishing criminals. This value may have evolved to accomplish the social goal of reducing crime. Most readers agreed that, since we can deduce this underlying reason, and accomplish it more effectively through reasoning, preferring to punish criminals is an error in judgement.
Most people want to have sex. This value evolved to accomplish the goal of reproducing. Since we can deduce this underlying reason, and accomplish it more efficiently than by going out to bars every evening for ten years, is this desire for sex an error in judgement that we should erase?
Rationality is the equivalent of normative morality: it is a set of guidelines for arriving at the opinions you should have, namely true ones. Epistemology is the equivalent of metaethics. It strives to answer the question “what is truth”.
Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of… intuitions.
Intuitions are internalizations of custom, an aspect of which is morality. Our intuitions result from our long practice of observing custom. By “observing custom” I mean of course adhering to custom, abiding by custom. In particular, we observe morality—we adhere to it, we abide by it—and it is from observing morality that we gain our moral intuitions. This is a curious verbal coincidence, that the very same word “observe” applies in both cases even though it means quite different things. That is:
Our physical intuitions are a result of observing physics (in the sense of watching attentively).
Our moral intuitions are a result of observing morality (in the sense of abiding by).
However, discovering physics is not nearly as passive as is suggested by the word “observe”. We conduct experiments. We try things and see what happens. We test the physical world. We kick the rock—and discover that it kicks back. Reality kicks back hard, so it’s a good thing that children are so resilient. An adult that kicked reality as hard as kids kick it would break their bones.
And discovering morality is similarly not quite as I said. It’s not really by observing (abiding by) morality that we discover morality, but by failing to observe (violating) morality that we discover morality. We discover what the limits are by testing the limits. We are continually testing the limits, though we do it subtly. But if you let people walk all over you, before long they will walk all over you, because in their interactions with you they are repeatedly testing the limits, ever so subtly. We push on the limits of what’s allowable, what’s customary, what’s moral, and when we get push-back we retreat—slightly. Customs have to survive this continual testing of their limits. Any custom that fails the constant testing will be quickly violated and then forgotten. So the customs that have survived the constant testing that we put them through, are tough little critters that don’t roll over easily. We kick customs to see whether they kick back. Children kick hard, they violate custom wildly, so it’s a good thing that adults coddle them. An adult that kicked custom as hard as kids kick it would wind up in jail or dead.
Custom is “really” nothing other than other humans kicking back when we kick them. When we kick custom, we’re kicking other humans, and they kick back. Custom is an equilibrium, a kind of general truce, a set of limits on behavior that everyone observes and everyone enforces. Morality is an aspect of this equilibrium. It is, I think, the more serious, important bits of custom, the customary limits on behavior where we kick back really hard, or stab, or shoot, if those limits are violated.
Anyway, even though custom is “really” made out of people, the regularities that we discover in custom are impersonal. One person’s limits are pretty much another person’s limits. So custom, though at root personal, is also impersonal, in the “it’s not personal, it’s just business” sense of the movie mobster. So we discover regularities when we test custom—much as we discover regularities when we test physical reality.
Yes, but we’ve already determined that we don’t disagree—unless you think we still do? I was arguing against observing objective (i.e. externally existing) morality. I suspect that you disagree more with Eugine_Nier.
Right, and ‘facts’ about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of… intuitions.
Which is true, and explains why it is a harder problem than physics, and less progress has been made.
Moral facts don’t lead me to anticipate observable consequences, but they do affect the actions I choose to take.
Preferences also do that.
Yes, well opinions also anticipate observations. But in a sense by talking about “observable consequences” you’re taking advantage of the fact that the meta-theory of science is currently much more developed than the meta-theory of ethics.
But some preferences can be moral, just as some opinions can be true. There is no automatic entailment from “it is a preference” to “it has nothing to do with ethics”.
The question was—how do you determine what the moral facts are?
Currently, intuition. Along with the existing moral theories, such as they are.
Similar to the way people determined facts about physics, especially facts beyond the direct observation of their senses, before the scientific method was developed.
Right, and ‘facts’ about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of… intuitions.
You can’t really argue that objective morality not being well-defined means that it is more likely to be a coherent notion.
My point is that you can’t conclude the notion of morality is incoherent simply because we don’t yet have a sufficiently concrete definition.
Technically, yes. But I’m pretty much obliged, based on the current evidence, to conclude that it’s likely to be incoherent.
More to the point: why do you think it’s likely to be coherent?
Mostly by outside view analogy with the history of the development of science. I’ve read a number of ancient Greek and Roman philosophers (along with a few post-modernists) arguing against the possibility of a coherent theory of physics using arguments very similar to the ones people are using against morality.
I’ve also read a (much larger) number of philosophers trying to shoehorn what we today call science into using the only meta-theory then available in a semi-coherent state: the meta-theory of mathematics. Thus we see philosophers, Descartes being the most famous, trying and failing to study science by starting with a set of intuitively obvious axioms and attempting to derive physical statements from them.
I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.
As for “likely”: I’m not sure how likely this is; I just think it’s more likely than a lot of people on this thread assume.
To be clear—you are talking about morality as something externally existing, some ‘facts’ that exist in the world and dictate what you should do, as opposed to a human system of don’t be a jerk. Is that an accurate portrayal?
If that is the case, there are two big questions that immediately come to mind (beyond “what are these facts” and “where did they come from”) - first, it seems that Moral Facts would have to interact with the world in some way in order for the study of big-M Morality to be useful at all (otherwise we could never learn what they are), or they would have to be somehow deducible from first principles. Are you supposing that they somehow directly induce intuitions in people (though, not all people? so, people with certain biological characteristics?)? (By (possibly humorous, though not mocking!) analogy, suppose the Moral Facts were being broadcast by radio towers on the moon, in which case they would be inaccessible until the invention of radio. The first radio is turned on and all signals are drowned out by “DON’T BE A JERK. THIS MESSAGE WILL REPEAT. DON’T BE A JERK. THIS MESSAGE WILL...”.)
The other question is, once we have ascertained that there are Moral Facts, what property makes them what we should do? For instance, suppose that all protons were inscribed in tiny calligraphy in, say, French, “La dernière personne qui est vivant, gagne.” (“The last person who is alive, wins”—apologies for Google Translate) Beyond being really freaky, what would give that commandment force to convince you to follow it? What could it even mean for something to be inherently what you should do?
It seems, ultimately, you have to ask “why” you should do “what you should do”. Common answers include that you should do “what God commands” because “that’s inherently What You Should Do, it is By Definition Good and Right”. Or, “don’t be a jerk” because “I’ll stop hanging out with you”. Or, “what makes you happy and fulfilled, including the part of you that desires to be kind and generous” because “the subjective experience of sentient beings are the only things we’ve actually observed to be Good or Bad so far”.
So, where do we stand now?
Now we’re getting somewhere. What do you mean by the word “jerk” and why is it any more meaningful than words like “moral”/”right”/”wrong”?
The distinction I am trying to make is between Moral Facts Engraved Into The Foundation Of The Universe and A Bunch Of Words And Behaviors And Attitudes That People Have (as a result of evolution & thinking about stuff etc.). I’m not sure if I’m being clear, is this description easier to interpret?
Near as I can tell, what you mean by “don’t be a jerk” is one possible example of what I mean by morality.
Hope that helps.
Great! Then I think we agree on that.
If that is true, what virtue do moral facts have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
If I knew the answer we wouldn’t be having this discussion.
Define your terms, then you get a fair hearing. If you are just saying the terms could maybe someday be defined, this really isn’t the kind of thing that needs a response.
To put it in perspective, you are speculating that someday you will be able to define what the field you are talking about even is. And your best defense is that some people have made questionable arguments against this non-theory? Why should anyone care?
After thinking about it a little I think I can phrase it this way.
I want to answer the question: “What should I do?”
It’s kind of a pressing question since I need to do something (doing nothing counts as a choice and usually not a very good one).
If the people arguing that morality is just preference answer: “Do what you prefer”, my next question is “What should I prefer?”
Three definitions of “should”:
As for obligation—I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don’t really see how an ordinary person could be all that puzzled about what his obligations are.
As for propriety—over and above your obligation to avoid uncontroversially nasty behavior, I doubt you have much trouble discovering what’s socially acceptable (stuff like, not farting in an elevator), and anyway, it’s not the end of the world if you offend somebody. Again, I don’t really see how an ordinary person is going to have a problem.
As for expediency—I doubt you intended the question that way.
If this doesn’t answer your question in full you probably need to explain the question. The utilitarians have this strange notion that morality is about maximizing global utility, so of course, morality in the way that they conceive it is a kind of life-encompassing total program of action, since every choice you make could either increase or decrease total utility. Maybe that’s what you want answered, i.e., what’s the best possible thing you could be doing.
But the “should” of obligation is not like this. We have certain obligations but these are fairly limited, and don’t provide us with a life-encompassing program of action. And the “should” of propriety is not like this either. People just don’t pay you any attention as long as you don’t get in their face too much, so again, the direction you get from this quarter is limited.
You have collapsed several meanings of obligation together there. You may have explicit legal obligations to the state, and IOU-style obligations to individuals who have done you a favour, and so on. But moral obligations go beyond all those. If you are living in a brutal dictatorship, there are conceivable circumstances where you morally should not obey the law. Etc, etc.
In order to accomplish what?
Should you prefer chocolate ice cream or vanilla? As far as ice cream flavors go, “What should I prefer” seems meaningless...unless you are looking for an answer like, “It’s better to cultivate a preference for vanilla because it is slightly healthier” (you will thereby achieve better health than if you let yourself keep on preferring chocolate).
This gets into the time structure of experience. In other words, I would be interpreting your, “What should I prefer?” as, “What things should I learn to like (in order to get more enjoyment out of life)?” To bring it to a more traditionally moral issue, “Should I learn to like a vegetarian diet (in order to feel less guilt about killing animals)?”
Is that more or less the kind of question you want to answer?
Including the word ‘just’ misses the point. Being about preference in no way makes it less important.
This might have clarified for me what this dispute is about. At least I have a hypothesis, tell me if I’m on the wrong track.
Antirealists aren’t arguing that you should go on a hedonic rampage—we are allowed to keep on consulting our consciences to determine the answer to “what should I prefer”. In a community of decent and mentally healthy people we should flourish. But the main upshot of the antirealist position is that you cannot convince people with radically different backgrounds that their preferences are immoral and should be changed, even in principle.
At least, antirealism gives some support to this cynical point of view, and it’s this point of view that you are most interested in attacking. Am I right?
That’s a large part of it.
The other problem is that anti-realists don’t actually answer the question “what should I do?”, they merely pass the buck to the part of my brain responsible for my preferences but don’t give it any guidance on how to answer that question.
Talk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a “what color is the sky?” type of conversation.
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
Something more objective/universal.
Edit: a more serious issue is that, just as equating facts with opinions tells you nothing about what opinions you should hold, equating morality and preference tells you nothing about what you should prefer.
So we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences—obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I’d rather select a different social milieu, myself.
I recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape.
I now have a strong preference for losing weight, and reinforced my preference for exercise, because the gains from both activities went up significantly. This also resulted in having a much lower preference for certain types of food, as they’re contrary to these new preferences.
I’d think that’s a pretty concrete example of changing my preferences, unless we’re using different definitions of “preference.”
I suppose we are using different definitions of “preference”. I’m using it as a friendly term for a person’s utility function, if they seem to be optimizing for something; if their behavior can’t be understood that way, we say they have no preference. For example, what you’re calling food preferences are what I’d call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so that it still supported the SCUBA diving.
Ahh, I re-read the thread with this understanding, and was struck by this:
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)
Does that make sense as a “motivation for wanting to change your preferences”?
I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.
My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone’s preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I’m screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don’t go implementing it. Please.
In general, if the FAI is going to give “your preference” to you, your preference had better be something stable about you that you’ll still want when you get it.
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it. I certainly would want an akrasia fix if it were available. Maybe that’s the important preference.
Very much agreed.
At the end of the day, you’re going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way) so you can figure out the immediate outcome with the best preferred expected long-term utility and predict the person is going to take an action that gets them there.
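For what it’s worth, here is a toy sketch of that combining step, assuming a simple weighted sum as the conflict-resolution function; the utility functions, actions, and weights below are all invented for illustration, not anyone’s actual model.

# Toy model: several competing "utility functions" over outcomes, combined
# by a conflict-resolution step (here, a weighted sum) to predict which
# action the person ends up taking. Everything here is made up.

def health_utility(outcome):
    return 10.0 if outcome == "go to gym" else 0.0

def comfort_utility(outcome):
    return 8.0 if outcome == "stay on couch" else 1.0

def predicted_action(actions, utilities, weights):
    # Combine the competing utility functions and pick the action
    # with the highest combined score.
    def combined(a):
        return sum(w * u(a) for u, w in zip(utilities, weights))
    return max(actions, key=combined)

actions = ["go to gym", "stay on couch"]
print(predicted_action(actions, [health_utility, comfort_utility], [1.0, 0.5]))
# -> "go to gym" with these made-up weights; change the weights and the
#    predicted compromise point changes.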
I don’t think very many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I’d agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model it as conflicting desires and a conflict resolution function layered on top, but I think we both agree on the actual result, which is that people aren’t optimizing for a single clear goal like “happiness” or “lifetime income”.
Prediction seems to run in to the issue that utility evaluations change over time. I used to place a high utility value on sweets, now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. So on.
It strikes me as being rather like weather: You can predict general patterns, and even manage a decent 5-day forecast, but you’re going to have a lot of trouble making specific long-term predictions.
There isn’t an instrumental motive for changing one’s preferences. That doesn’t add up to “never change your preferences” unless you assume that instrumentality—“does it help me achieve anything?”—is the ultimate way of evaluating things. But it isn’t: morality is. It is morally wrong to design better gas chambers.
The interesting question is still the one you didn’t answer yet:
I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.
The unlikely answer is “I wouldn’t do anything different”. Then I’d reply “So, morality makes no practical difference to your behavior?”, and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior, if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”. I am in the same situation in that when I’m willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.
If I guessed your answer wrong, please correct me. Otherwise I’d want to fix the vocabulary problem somehow. I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts, perhaps an “amoral preference” which would mean what you were calling “preference” before, and “moral preference” would include what you were calling “morality” before, but perhaps we’d choose different words if you objected to those. The next question would be:
...and I have no clue what your answer would be, so I can’t continue the conversation past that point without straightforward answers from you.
Follow morality.
One way to illustrate this distinction is using Eliezer’s “murder pill”. If you were offered a pill that would reverse and/or eliminate a preference, would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered, the answer is probably no.
One of the reasons this distinction is important is that, because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
Now if the thoughts that change people’s moral preferences were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we’re talking about, which hasn’t happened.
That’s not a definition of morality but an explanation of one reason why the “murder pill” distinction is important.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
Are you saying there has never been any valid moral discourse or persuasion?
There’s a difference between changing your mind because a discussion led you to bound your rationality differently, and changing your mind because of suggestibility and other forms of sloppy thinking. Logic and mathematics are the former, if done right. I haven’t seen much non-sloppy thinking on the subject of changing preferences.
I suppose there could be such a thing—Joe designed an elegant high-throughput gas chamber, he wants to show the design to his friends, someone tells Joe that this could be used for mass murder, Joe hadn’t thought that the design might actually be used, so he hides his design somewhere so it won’t be used. But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
But there are other stories where the preference itself changes. “If you approve of women’s rights, you should approve of gay rights.”
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights. Even if your argument above worked, I can’t envision a plausible reasoning system in which the argument is valid. Can you offer one? Otherwise, it only worked because the listener was confused, and we’re back to morality being a special case of psychology again.
Because I don’t know how to do moral arguments better. So far as I can tell, they always seem to wind up either being wrong, or not being moral arguments.
They are not going to arrive without overcoming opposition somehow.
Does that mean your “because gays/women want them” isn’t valid? Why offer it then?
Because you reject them?
But preference itself is influenced by reasoning and experience. The Preference theory focuses on proximate causes, but there are more distal ones too.
I am not and never was using “preference” to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by talking only about preferences. That is not an argument for nihilism or relativism. By analogy, you could have an epistemology where everything is talked about as belief, and the difference between true belief and false belief is ignored.
If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can’t do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn’t “made” important by some greater good.
There’s a choice you’re making here, differently from me, and I’d like to get clear on what that choice is and understand why we’re making it differently.
I have a bunch of things I prefer. I’d rather eat strawberry ice cream than vanilla, and I’d rather not design higher-throughput gas chambers. For me those two preferences are similar in kind—they’re stuff I prefer and that’s all there is to be said about it.
You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.
I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it’s even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That’s not what I’m talking about. What I’m talking about is the question of why the distinction is important to you. For example, I could define the preferences of mine that can be easily described without using the letter “s” to be “blort” preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you’d be left wondering “Why does he care?”
And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless. A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it.
The decision to use the concept of “morality” is like any other decision to define and use a concept. We should use it if the cost of talking about it is paid for by the added clarity it brings. If we don’t use the concept, that doesn’t change whether anyone wants to build high-throughput gas chambers—it just means that we don’t have the tools to talk about the difference in kind between ice cream flavor preferences and gas chamber building preferences. If there’s no use for such talk, then we should discard the concept, and if there is a use for such talk, we should keep the concept and try to assign a useful and clear meaning to it.
So what use is the concept of morality? How do people benefit from regarding ice cream flavor preferences as a different sort of thing from gas chamber building preferences?
I hope we’re agreed that there are two different kinds of things here—the strongly held preference to not design high-throughput gas chambers is a different kind of thing from the decision to label that preference as a moral one. The former influences the options available to a well-organized mass murderer, and the latter determines the structure of conversations like this one. The former is a value, the latter is a choice about how words label things. I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
You’ve written quite a lot of words, but you’re still stuck on the idea that all importance is instrumental importance, importance for something that doesn’t need to be important in itself. You should care about morality because it is a value, and values are definitionally what is important and what should be cared about. If you suddenly started liking vanilla, nothing important would change. You wouldn’t stop being you, and your new self wouldn’t be someone your old self would hate. That wouldn’t be the case if you suddenly started liking murder or gas chambers. You don’t now like people who like those things, and you wouldn’t now want to become one.
If we understand what is going on, we should make the choice correctly—that is, according to rational norms. If morality means something other than the merely pragmatic, we should not label the pragmatic as the moral. And it must mean something different, because it is an open, investigatable question whether some instrumentally useful thing is also ethically good, whereas questions like “is the pragmatic useful?” are trivial and tautologous.
You’re not getting the distinction between morality-the-concept-worth-having and morality-the-value-worth-enacting.
I’m looking for a useful definition of morality here, and if I frame what you say as a definition, you seem to be defining a preference to be a moral preference if it’s strongly held, which doesn’t seem very interesting. If we’re going to have the distinction, I prefer Eugene’s proposal that a moral preference is one that’s worth talking about, but we need to make the distinction in such a way that something doesn’t get promoted to being a moral preference just because people are easily deceived about it. There should be true things to say about it.
But what I actually gave as a definition is that the concept of morality is the concept of ultimate value and importance. A concept which even the nihilists need so that they can express their disbelief in it. A concept which even social and cognitive scientists need so they can describe the behaviour surrounding it.
You are apparently claiming there is some important difference between a strongly held preference and something of ultimate value and importance. Seems like splitting hairs to me. Can you describe how those two things are different?
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
One can do experiments to determine whether beliefs are true, for the beliefs that matter. What can one do with a preference to figure out if it should be strongly held?
If that question has no answer, the claim that the two are similar seems indefensible.
What makes them matter?
Reason about it?
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
Can you give an example? I tried to make one at http://lesswrong.com/lw/5eh/what_is_metaethics/43fh, but it twisted around into revising a belief instead of revising a preference.
So it doesn’t matter if it only affects what you will do?
If I’m thinking for the purpose of figuring out my future actions, that’s a plan, not a belief, since planning is relevant when I haven’t yet decided what to do.
I suppose beliefs about other people’s actions are empirical.
I’ve lost the relevance of this thread. Please state a purpose if you wish to continue, and if I like it, I’ll reply.
Okay, that seems clear enough that I’d rather pursue that than try to get an answer to any of my previous questions, even if all we may have accomplished here is to trade Eugene’s evasiveness for Peter’s.
If you know that morality is the ultimate way of evaluating things, and you’re able to use that to evaluate a specific thing, I hope you are aware of how you performed that evaluation process. How did you get to the conclusion that it is morally wrong to design better gas chambers?
Execution techniques have improved over the ages. A guillotine is more compassionate than an axe, for example, since with an axe the executioner might need a few strokes, and the experience for the victim is pretty bad between the first stroke and the last. Now we use injections that are meant to be painless, and perhaps they actually are. In an environment where executions are going to happen anyway, it seems compassionate to make them happen better. Are you saying gas chambers, specifically, are different somehow, or are you saying that designing the guillotine was morally wrong too and it would have been morally preferable to use an axe during the time guillotines were used?
I’m pretty sure he means to refer to high-throughput gas chambers optimized for purposes of genocide, rather than individual gas chambers designed for occasional use.
He may or may not oppose the latter, but improving the former is likely to increase the number of murders committed.
Agreed, so I deleted my post to avoid wasting Peter’s time responding.
Let’s try a different approach.
I have spent some time thinking about how to apply the ideas of Eliezer’s metaethics sequence to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here, the distinction between preferences and biases is very arbitrary.
So the question becomes how do you separate which of your intuitions are preferences and which are biases?
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else.
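To make “the proper way” a bit more concrete, here is a minimal sketch under the standard expected-utility assumption, where utility attaches to future world-states and uncertainty enters as probabilities over those states; all outcomes, probabilities, and numbers below are invented for illustration.

# Toy expected-utility calculation: the "valid" preference between actions
# falls out of the expectation over uncertain future world-states.
# Everything here is made up for the sake of the example.

def expected_utility(action, outcome_probs, utility):
    # outcome_probs[action]: dict mapping world-states to probabilities
    return sum(p * utility[state] for state, p in outcome_probs[action].items())

utility = {"in shape for SCUBA": 10.0, "out of shape": 0.0}

outcome_probs = {
    "exercise":      {"in shape for SCUBA": 0.8, "out of shape": 0.2},
    "skip exercise": {"in shape for SCUBA": 0.1, "out of shape": 0.9},
}

best = max(outcome_probs, key=lambda a: expected_utility(a, outcome_probs, utility))
print(best)  # -> "exercise"; systematic deviations from this calculation
             # (e.g. scope insensitivity) are what I'd file under "biases"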
I don’t see how that question is relevant. I don’t see any good reason for you to dodge my question about what you’d do if your preferences contradicted your morality. It’s not like it’s an unusual situation—consider the internal conflicts of a homosexual Evangelist preacher, for example.
What makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or replace short-term preferences with long-term ones, that would seem to be the sort of thing that could fairly be described as reasoning.
I don’t judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn’t. It’s true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you’re in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)
This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way and the fact that I went the wrong way isn’t evidence that I don’t want to go to the grocery store. That’s a confusing issue and I’m hoping we can assume for the purposes of discussion about morality that the people we’re talking about have true beliefs.
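To make the “compresses observations better” test concrete, here is a toy sketch in which candidate utility functions are scored simply by how many observed choices they predict; the candidates and observations are all made up for illustration, and a real version would also have to model the uncertain beliefs I mentioned above.

# Each observation is (options_available, option_actually_chosen).
observations = [
    (["vanilla", "strawberry"], "strawberry"),
    (["stay home", "go diving"], "go diving"),
    (["chips", "salad"], "salad"),
]

candidate_utilities = {
    "cares about SCUBA/health": {"strawberry": 1, "vanilla": 0, "go diving": 3,
                                 "stay home": 0, "salad": 2, "chips": 0},
    "pure couch potato":        {"strawberry": 1, "vanilla": 0, "go diving": 0,
                                 "stay home": 3, "salad": 0, "chips": 2},
}

def score(utility):
    # How many observed choices does this candidate utility function predict?
    return sum(1 for options, chosen in observations
               if max(options, key=lambda o: utility[o]) == chosen)

best = max(candidate_utilities, key=lambda name: score(candidate_utilities[name]))
print(best)  # -> "cares about SCUBA/health" describes the behavior better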
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
I’m not saying it’s a complete description of me. To describe how I think you’d also need a description of my possibly-false beliefs, and you’d also need to reason about uncertain knowledge of my preferences and possibly-false beliefs.
In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn’t a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. “I want to go north” might really be “I believe the grocery store is north of here and I want to go to the grocery store”. “I want to go to the grocery store” might be a further conflation of preference and belief, such as “I want to get some food” and “I believe I will be able to get food at the grocery store”. Eventually you can unpack all the beliefs and get the true preference, which might be “I want to eat something interesting today”.
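A toy sketch of the unpacking I have in mind, where a stated want is represented either as a terminal preference or as a (belief, deeper want) pair, and we strip beliefs away until only the preference is left; the entries below are just the grocery-store example restated.

# A stated want is either terminal, or derived from a belief plus a deeper want.
stated_wants = {
    "go north": ("the grocery store is north of here", "go to the grocery store"),
    "go to the grocery store": ("I can get food at the grocery store",
                                "eat something interesting"),
    "eat something interesting": None,  # terminal: the actual preference
}

def unpack(want):
    # Strip away beliefs until only the underlying preference remains.
    while stated_wants.get(want) is not None:
        belief, deeper_want = stated_wants[want]
        want = deeper_want
    return want

print(unpack("go north"))  # -> "eat something interesting"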
That still doesn’t explain what the difference between your preferences and your biases is.
That’s rather startling. Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
It’s a term we’re defining because it’s useful, and we can define it in a way that it holds from birth forever afterward. Tim had a short-term preference, dated around age 3 months, to suck mommy’s breast, and Tim apparently has a preference, dated around age 44 years, to get clarity about what these guys mean when they talk about morality. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person’s preferences, and preferences that don’t change over time tend to be simpler, but if that’s contradicted by observation you settle for different preferences at different times.
I suppose I should have said “If a preference changes as a consequence of reasoning or reflection, it wasn’t a preference”. If the context of the statement is lost, that distinction matters.
So you are defining “preference” in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
I agree! Consider, for instance, taste in particular foods. I’d say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you’re hemi-directly altering your preferences).
Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it’s pleasurable—but I think the proper level of unpacking is “experience drinking coffee”, not “experience pleasurable sensations”, because the experience being pleasurable is what makes it a preference in this case. That’s how it seems to me, at least. Am I missing something?
“The proper way” being built in as part of the utility function, and not (necessarily) being a simple sum of world-state utilities weighted by their probabilities.
Um, no. Unless you are some kind of mutant who doesn’t suffer from scope insensitivity or any of the related biases, your uncertainty about the future doesn’t interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary.
Here is the example PhilGoetz gives in the article I linked above:
I believe I answered your other question elsewhere in the thread.
Rationality is the equivalent of normative morality: it is a set of guidelines for arriving at the opinions you should have, namely true ones. Epistemology is the equivalent of metaethics. It strives to answer the question “what is truth?”
People clearly have opinions they act on. What makes you think we need this so-called “rationality” to tell us which opinions to have?
Intuitions are internalizations of custom, an aspect of which is morality. Our intuitions result from our long practice of observing custom. By “observing custom” I mean of course adhering to custom, abiding by custom. In particular, we observe morality—we adhere to it, we abide by it—and it is from observing morality that we gain our moral intuitions. This is a curious verbal coincidence, that the very same word “observe” applies in both cases even though it means quite different things. That is:
Our physical intuitions are a result of observing physics (in the sense of watching attentively).
Our moral intuitions are a result of observing morality (in the sense of abiding by).
However, discovering physics is not nearly as passive as is suggested by the word “observe”. We conduct experiments. We try things and see what happens. We test the physical world. We kick the rock—and discover that it kicks back. Reality kicks back hard, so it’s a good thing that children are so resilient. An adult that kicked reality as hard as kids kick it would break their bones.
And discovering morality is similarly not quite as I said. It’s not really by observing (abiding by) morality that we discover morality, but by failing to observe (violating) morality that we discover morality. We discover what the limits are by testing the limits. We are continually testing the limits, though we do it subtly. But if you never push back, before long people will walk all over you, because in their interactions with you they are repeatedly testing the limits, ever so subtly. We push on the limits of what’s allowable, what’s customary, what’s moral, and when we get push-back we retreat—slightly. Customs have to survive this continual testing of their limits. Any custom that fails the constant testing will be quickly violated and then forgotten. So the customs that have survived the constant testing that we put them through are tough little critters that don’t roll over easily. We kick customs to see whether they kick back. Children kick hard, violating custom wildly, so it’s a good thing that adults coddle them. An adult that kicked custom as hard as kids kick it would wind up in jail or dead.
Custom is “really” nothing other than other humans kicking back when we kick them. When we kick custom, we’re kicking other humans, and they kick back. Custom is an equilibrium, a kind of general truce, a set of limits on behavior that everyone observes and everyone enforces. Morality is an aspect of this equilibrium. It is, I think, the more serious, important bits of custom, the customary limits on behavior where we kick back really hard, or stab, or shoot, if those limits are violated.
Anyway, even though custom is “really” made out of people, the regularities that we discover in custom are impersonal. One person’s limits are pretty much another person’s limits. So custom, though at root personal, is also impersonal, in the “it’s not personal, it’s just business” sense of the movie mobster. So we discover regularities when we test custom—much as we discover regularities when we test physical reality.
Yes, but we’ve already determined that we don’t disagree—unless you think we still do? I was arguing against observing objective (i.e. externally existing) morality. I suspect that you disagree more with Eugine_Nier.
Which is true, and explains why it is a harder problem than physics, and why less progress has been made.
I’m not sure I accept either of those claims, explanation or no.