Talk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a “what color is the sky?” type of conversation.
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
Something more objective/universal.
Edit: a more serious issue is this: just as equating facts with opinions tells you nothing about which opinions you should hold, equating morality with preference tells you nothing about what you should prefer.
So we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences—obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I’d rather select a different social milieu, myself.
I recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape.
I now have a strong preference for losing weight, and reinforced my preference for exercise, because the gains from both activities went up significantly. This also resulted in having a much lower preference for certain types of food, as they’re contrary to these new preferences.
I’d think that’s a pretty concrete example of changing my preferences, unless we’re using different definitions of “preference.”
I suppose we are using different definitions of “preference”. I’m using it as a friendly term for a person’s utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can’t be understood that way. For example, what you’re calling food preferences are what I’d call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.
Ahh, I re-read the thread with this understanding, and was struck by this:
I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)
Does that make sense as a “motivation for wanting to change your preferences”?
I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.
My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone’s preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I’m screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don’t go implementing it. Please.
In general, if the FAI is going to give “your preference” to you, your preference had better be something stable about you that you’ll still want when you get it.
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it. I certainly would want an akrasia fix if it were available. Maybe that’s the important preference.
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it.
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
At the end of the day, you’re going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way) so you can figure out the immediate outcome with the best preferred expected long-term utility and predict the person is going to take an action that gets them there.
I don’t think very many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I’d agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model it as conflicting desires and a conflict resolution function layered on top, but I think we both agree on the actual result, which is that people aren’t optimizing for a single clear goal like “happiness” or “lifetime income”.
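The "conflicting desires plus a conflict-resolution function layered on top" model described above can be sketched concretely. This is a toy illustration only; the desire names, actions, and weights are all invented for the example, not drawn from anyone's actual proposal:

```python
# Sketch: an agent with several conflicting utility functions and a
# resolution layer that combines them into a single action choice.
# All names and numbers here are hypothetical.

def u_health(action):
    # One "desire": values exercise, dislikes dessert.
    return {"exercise": 3.0, "eat_dessert": -2.0, "procrastinate": 0.0}[action]

def u_pleasure(action):
    # A conflicting "desire": dislikes exercise, likes dessert.
    return {"exercise": -1.0, "eat_dessert": 2.5, "procrastinate": 1.0}[action]

# The conflict-resolution layer: here, just fixed weights on each desire.
WEIGHTS = {"health": 0.6, "pleasure": 0.4}

def combined_utility(action):
    return (WEIGHTS["health"] * u_health(action)
            + WEIGHTS["pleasure"] * u_pleasure(action))

def choose(actions):
    # The observable compromise point: the action maximizing combined utility.
    return max(actions, key=combined_utility)

print(choose(["exercise", "eat_dessert", "procrastinate"]))  # exercise
```

From the outside, only the compromise point is observable, which is why one can equally describe this agent as optimizing a single complex function (`combined_utility`) or as several desires plus a resolution layer.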
predict the person
Prediction seems to run in to the issue that utility evaluations change over time. I used to place a high utility value on sweets, now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. So on.
It strikes me as being rather like weather: You can predict general patterns, and even manage a decent 5-day forecast, but you’re going to have a lot of trouble making specific long-term predictions.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences
There isn’t an instrumental motive for changing one’s preferences. That doesn’t add up to “never change your preferences” unless you assume that instrumentality (“does it help me achieve anything?”) is the ultimate way of evaluating things. But it isn’t: morality is.
It is morally wrong to design better gas chambers.
The interesting question is still the one you didn’t answer yet:
If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.
The unlikely answer is “I wouldn’t do anything different”. Then I’d reply “So, morality makes no practical difference to your behavior?”, and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior, if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”. I am in the same situation in that when I’m willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.
If I guessed your answer wrong, please correct me. Otherwise I’d want to fix the vocabulary problem somehow. I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts, perhaps an “amoral preference” which would mean what you were calling “preference” before, and “moral preference” would include what you were calling “morality” before, but perhaps we’d choose different words if you objected to those. The next question would be:
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
...and I have no clue what your answer would be, so I can’t continue the conversation past that point without straightforward answers from you.
If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
Follow morality.
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
One way to illustrate this distinction is using Eliezer’s “murder pill”. If you were offered a pill that would reverse and/or eliminate a preference would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered the answer is probably no.
One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we’re talking about, which hasn’t happened.
One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
That’s not a definition of morality but an explanation of one reason why the “murder pill” distinction is important.
...the way human brains are designed, thinking about your preferences can cause them to change.
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down
Are you saying there has never been any valid moral discourse or persuasion?
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
There’s a difference between changing your mind because a discussion led you to bound your rationality differently, and changing your mind because of suggestibility and other forms of sloppy thinking. Logic and mathematics are the former, if done right. I haven’t seen much non-sloppy thinking on the subject of changing preferences.
I suppose there could be such a thing—Joe designed an elegant high-throughput gas chamber, he wants to show the design to his friends, someone tells Joe that this could be used for mass murder, Joe hadn’t thought that the design might actually be used, so he hides his design somewhere so it won’t be used. But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
Are you saying there has never been any valid moral discourse or persuasion?
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
But there are other stories where the preference itself changes. “If you approve of women’s rights, you should approve of gay rights.”
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
“If you approve of women’s rights, you should approve of gay rights.”
IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights. Even if your argument above worked, I can’t envision a plausible reasoning system in which the argument is valid. Can you offer one? Otherwise, it only worked because the listener was confused, and we’re back to morality being a special case of psychology again.
Everything is a mixture of the invalid and the valid. Why throw somethin out instead of doing it better?
Because I don’t know how to do moral arguments better. So far as I can tell, they always seem to wind up either being wrong, or not being moral arguments.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior,
But preference itself is influenced by reasoning and experience. The preference theory focuses on proximate causes, but there are more distal ones too.
if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”
I am not and never was using “preference” to mean something disjoint from morality.
If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences. That is not an argument for nihilism or relativism. You could have an epistemology where everything is talked about as belief, and the difference between true belief and false belief is ignored.
Okay, you’re making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can’t do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn’t “made” important by some greater good.
I am not and never was using “preference” to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences.
There’s a choice you’re making here, differently from me, and I’d like to get clear on what that choice is and understand why we’re making it differently.
I have a bunch of things I prefer. I’d rather eat strawberry ice cream than vanilla, and I’d rather not design higher-throughput gas chambers. For me those two preferences are similar in kind—they’re stuff I prefer and that’s all there is to be said about it.
You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.
I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it’s even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That’s not what I’m talking about. What I’m talking about is the question of why the distinction is important to you. For example, I could define the preferences of mine that can be easily described without using the letter “s” to be “blort” preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you’d be left wondering “Why does he care?”
And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless. A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it.
The decision to use the concept of “morality” is like any other decision to define and use a concept. We should use it if the cost of talking about it is paid for by the added clarity it brings. If we don’t use the concept, that doesn’t change whether anyone wants to build high-throughput gas chambers—it just means that we don’t have the tools to talk about the difference in kind between ice cream flavor preferences and gas chamber building preferences. If there’s no use for such talk, then we should discard the concept, and if there is a use for such talk, we should keep the concept and try to assign a useful and clear meaning to it.
So what use is the concept of morality? How do people benefit from regarding ice cream flavor preferences as a different sort of thing from gas chamber building preferences?
Morality is already, in itself, the most important value.
I hope we’re agreed that there are two different kinds of things here—the strongly held preference to not design high-throughput gas chambers is a different kind of thing from the decision to label that preference as a moral one. The former influences the options available to a well-organized mass murderer, and the latter determines the structure of conversations like this one. The former is a value, the latter is a choice about how words label things. I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
You’ve written quite a lot of words but you’re still stuck on the idea that all importance is instrumental importance, importance for something that doesn’t need to be important in itself. You should care about morality because it is a value, and values are definitionally what is important and what should be cared about. If you suddenly started liking vanilla, nothing important would change. You wouldn’t stop being you, and your new self wouldn’t be someone your old self would hate. That wouldn’t be the case if you suddenly started liking murder or gas chambers. You don’t now like people who like those things, and you wouldn’t now want to become one.
I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
If we understand what is going on, we should make the choice correctly—that is, according to rational norms. If morality means something other than the merely pragmatic, we should not label the pragmatic as the moral. And it must mean something different, because it is an open, investigable question whether some instrumentally useful thing is also ethically good, whereas questions like “is the pragmatic useful?” are trivial and tautologous.
You should care about morality because it is a value and values are definitionally what is important and what should be cared about.
You’re not getting the distinction between morality-the-concept-worth-having and morality-the-value-worth-enacting.
I’m looking for a useful definition of morality here, and if I frame what you say as a definition, you seem to be defining a preference to be a moral preference if it’s strongly held, which doesn’t seem very interesting. If we’re going to have the distinction, I prefer Eugene’s proposal (that a moral preference is one that’s worth talking about), but we need to make the distinction in such a way that something doesn’t get promoted to being a moral preference just because people are easily deceived about it. There should be true things to say about it.
and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it’s strongly held,
But what I actually gave as a definition is that the concept of morality is the concept of ultimate value and importance. A concept which even the nihilists need so that they can express their disbelief in it. A concept which even social and cognitive scientists need so they can describe the behaviour surrounding it.
You are apparently claiming there is some important difference between a strongly held preference and something of ultimate value and importance. Seems like splitting hairs to me. Can you describe how those two things are different?
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
One can do experiments to determine whether beliefs are true, for the beliefs that matter. What can one do with a preference to figure out if it should be strongly held?
If that question has no answer, the claim that the two are similar seems indefensible.
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
So it doesn’t matter if it only affects what you will do?
If I’m thinking for the purpose of figuring out my future actions, that’s a plan, not a belief, since planning is relevant when I haven’t yet decided what to do.
I suppose beliefs about other people’s actions are empirical.
I’ve lost the relevance of this thread. Please state a purpose if you wish to continue, and if I like it, I’ll reply.
[Morality is] the ultimate way of evaluating things… It is morally wrong to design better gas chambers.
Okay, that seems clear enough that I’d rather pursue that than try to get an answer to any of my previous questions, even if all we may have accomplished here is to trade Eugene’s evasiveness for Peter’s.
If you know that morality is the ultimate way of evaluating things, and you’re able to use that to evaluate a specific thing, I hope you are aware of how you performed that evaluation process. How did you get to the conclusion that it is morally wrong to design better gas chambers?
Execution techniques have improved over the ages. A guillotine is more compassionate than an axe, for example, since with an axe the executioner might need a few strokes, and the experience for the victim is pretty bad between the first stroke and the last. Now we use injections that are meant to be painless, and perhaps they actually are. In an environment where executions are going to happen anyway, it seems compassionate to make them happen better. Are you saying gas chambers, specifically, are different somehow, or are you saying that designing the guillotine was morally wrong too and it would have been morally preferable to use an axe during the time guillotines were used?
I’m pretty sure he means to refer to high-throughput gas chambers optimized for purposes of genocide, rather than individual gas chambers designed for occasional use.
He may or may not oppose the latter, but improving the former is likely to increase the number of murders committed.
I have spent some time thinking about how to apply the ideas of Eliezer’s metaethics sequence to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here, the distinction between preferences and biases is very arbitrary.
So the question becomes how do you separate which of your intuitions are preferences and which are biases?
[H]ow do you separate which of your intuitions are preferences and which are biases?
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else.
I don’t see how that question is relevant. I don’t see any good reason for you to dodge my question about what you’d do if your preferences contradicted your morality. It’s not like it’s an unusual situation—consider the internal conflicts of a homosexual evangelical preacher, for example.
What makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or replace short-term preferences with long-term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
I don’t judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn’t. It’s true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.
If it works to iron out inconsistencies, or replace short-term preferences with long-term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you’re in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)
This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way and the fact that I went the wrong way isn’t evidence that I don’t want to go to the grocery store. That’s a confusing issue and I’m hoping we can assume for the purposes of discussion about morality that the people we’re talking about have true beliefs.
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
I’m not saying it’s a complete description of me. To describe how I think you’d also need a description of my possibly-false beliefs, and you’d also need to reason about uncertain knowledge of my preferences and possibly-false beliefs.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn’t a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. “I want to go north” might really be “I believe the grocery store is north of here and I want to go to the grocery store”. “I want to go to the grocery store” might be a further conflation of preference and belief, such as “I want to get some food” and “I believe I will be able to get food at the grocery store”. Eventually you can unpack all the beliefs and get the true preference, which might be “I want to eat something interesting today”.
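The unpacking described above, peeling beliefs off a stated desire until only the underlying preference remains, can be illustrated with a toy structure. Everything here (the class names, the example chain) is hypothetical, made up just to mirror the grocery-store example:

```python
# Toy model of "I want to go north" unpacking into belief + preference.
# A stated desire is either a terminal Preference or a Conflation:
# a belief about how to serve some deeper desire.

class Preference:
    def __init__(self, text):
        self.text = text

class Conflation:
    """A stated desire that is really a belief wrapped around a deeper desire."""
    def __init__(self, belief, deeper):
        self.belief = belief
        self.deeper = deeper

def unpack(desire):
    """Strip belief layers until the terminal preference is reached."""
    while isinstance(desire, Conflation):
        desire = desire.deeper
    return desire.text

# "I want to go north" unpacked per the example in the comment above:
go_north = Conflation(
    "the grocery store is north of here",
    Conflation(
        "I can get food at the grocery store",
        Preference("eat something interesting today"),
    ),
)
print(unpack(go_north))  # eat something interesting today
```

On this model, a false belief changes which Conflation layers get built (and hence the action taken), while the terminal Preference stays fixed, which is the distinction the comment is drawing.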
Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
It’s a term we’re defining because it’s useful, and we can define it in a way that it holds from birth forever afterward. Tim had the short-term preference dated around age 3 months to suck mommy’s breast, and Tim apparently has a preference to get clarity about what these guys mean when they talk about morality dated around age 44 years. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person’s preferences, and preferences that don’t change over time tend to be simpler, but if that’s contradicted by observation you settle for different preferences at different times.
I suppose I should have said “If a preference changes as a consequence of reasoning or reflection, it wasn’t a preference”. If the context of the statement is lost, that distinction matters.
So you are defining “preference” in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
I agree! Consider, for instance, taste in particular foods. I’d say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you’re hemi-directly altering your preferences).
Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it’s pleasurable—but I think the proper level of unpacking is “experience drinking coffee”, not “experience pleasurable sensations”, because the experience being pleasurable is what makes it a preference in this case. That’s how it seems to me, at least. Am I missing something?
and uncertainty about the future should interact with the utility function in the proper way.
“The proper way” being built in as a part of the utility function and not (necessarily) being a simple sum of the multiplication of world-state values by their probability.
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way.
Um, no. Unless you are some kind of mutant who doesn’t suffer from scope insensitivity or any of the related biases your uncertainty about the future doesn’t interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary.
Here is the example PhilGoetz gives in the article I linked above:
In Crime and punishment, I argued that people want to punish criminals, even if there is a painless, less-costly way to prevent crime. This means that people value punishing criminals. This value may have evolved to accomplish the social goal of reducing crime. Most readers agreed that, since we can deduce this underlying reason, and accomplish it more effectively through reasoning, preferring to punish criminals is an error in judgement.
Most people want to have sex. This value evolved to accomplish the goal of reproducing. Since we can deduce this underlying reason, and accomplish it more efficiently than by going out to bars every evening for ten years, is this desire for sex an error in judgement that we should erase?
Talk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a “what color is the sky?” type of conversation.
If you do agree with them, what would you want from a meta-ethical theory that you don’t already have?
Something more objective/universal.
Edit: a more serious issue is that just as equating facts with opinions tells you nothing about what opinions you should hold. Equating morality and preference tells you nothing about what you should prefer.
So we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold.
I can see a motive for changing one’s beliefs, since false beliefs will often fail to support the activity of enacting one’s preferences. I can’t see a motive for changing one’s preferences—obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I’d rather select a different social milieu, myself.
I recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape.
I now have a strong preference for losing weight, and reinforced my preference for exercise, because the gains from both activities went up significantly. This also resulted in having a much lower preference for certain types of food, as they’re contrary to these new preferences.
I’d think that’s a pretty concrete example of changing my preferences, unless we’re using different definitions of “preference.”
I suppose we are using different definitions of “preference”. I’m using it as a friendly term for a person’s utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can’t be understood that way. For example, what you’re calling food preferences are what I’d call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.
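The distinction between a stable preference and a derived plan can be made concrete with a toy sketch (the outcomes, actions, and numbers below are invented for illustration, not a claim about anyone's actual values): hold the utility function fixed, derive the plan from the current model of consequences, and note that when the consequences change, the plan changes but the preference does not.

```python
# Toy model: a fixed preference (utility over outcomes) plus a changeable
# world model (what each action leads to). The "plan" is derived, not stored.

def best_action(utility, consequences):
    """Pick the action whose predicted outcome the agent most prefers."""
    return max(consequences, key=lambda action: utility[consequences[action]])

# The stable preference: being fit enough to enjoy SCUBA diving.
utility = {"fit": 1.0, "unfit": 0.0}

# Today's beliefs about what each food does.
consequences = {"salad": "fit", "cake": "unfit"}
print(best_action(utility, consequences))  # salad

# If the consequences "magically changed", the plan flips,
# even though the utility function is untouched.
consequences = {"salad": "unfit", "cake": "fit"}
print(best_action(utility, consequences))  # cake
```

On this model, what looked like a changed food preference is a changed plan serving an unchanged preference.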
Ahh, I re-read the thread with this understanding, and was struck by this:
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)
Does that make sense as a “motivation for wanting to change your preferences”?
I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.
My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone’s preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I’m screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don’t go implementing it. Please.
In general, if the FAI is going to give “your preference” to you, your preference had better be something stable about you that you’ll still want when you get it.
If there’s no fix for akrasia, then it’s hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I’m spewing BS about stuff that sounds nice to do, but I really don’t want to do it. I certainly would want an akrasia fix if it were available. Maybe that’s the important preference.
Very much agreed.
At the end of the day, you’re going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way) so you can figure out the immediate outcome with the best preferred expected long-term utility and predict the person is going to take an action that gets them there.
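The "added up (or combined some other way)" step can be sketched with a weighted sum, which is one possible conflict-resolution function among many (the actions, sub-utilities, and weights below are made up for illustration):

```python
# Toy sketch: an agent with several conflicting "sub-utilities", resolved by
# a weighted sum. Externally, the agent looks as if it optimizes the single
# combined function, which is why both models predict the same action.

sub_utilities = {
    "health":  {"gym": 0.9, "tv": 0.1},
    "comfort": {"gym": 0.2, "tv": 0.8},
}
weights = {"health": 0.7, "comfort": 0.3}

def combined(action):
    """Collapse the conflicting sub-utilities into one number for an action."""
    return sum(w * sub_utilities[name][action] for name, w in weights.items())

actions = ["gym", "tv"]
choice = max(actions, key=combined)
print(choice)  # gym
```

Whether you call the weighted sum "the utility function" or "a conflict resolution function layered on top" is a modeling choice; the predicted behavior is the same either way.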
I don’t think very many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I’d agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model it as conflicting desires and a conflict resolution function layered on top, but I think we both agree on the actual result, which is that people aren’t optimizing for a single clear goal like “happiness” or “lifetime income”.
Prediction seems to run in to the issue that utility evaluations change over time. I used to place a high utility value on sweets, now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. So on.
It strikes me as being rather like weather: You can predict general patterns, and even manage a decent 5-day forecast, but you’re going to have a lot of trouble making specific long-term predictions.
There isn’t an instrumental motive for changing one’s preferences. That doesn’t add up to “never change your preferences” unless you assume that instrumentality—“does it help me achieve anything”—is the ultimate way of evaluating things. But it isn’t: morality is. It is morally wrong to design better gas chambers.
The interesting question is still the one you didn’t answer yet:
I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.
The unlikely answer is “I wouldn’t do anything different”. Then I’d reply “So, morality makes no practical difference to your behavior?”, and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.
The likely answer is “If I’m willpower-depleted, I’d do the immoral thing I prefer, but on a good day I’d have enough willpower and I’d do the moral thing. I prefer to have enough willpower to do the moral thing in general.” In that case, I would have to admit that I’m in the same situation, except with a vocabulary change. I define “preference” to include everything that drives a person’s behavior, if we assume that they aren’t suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I’m calling “preference” is the same as what you’re calling “preference and morality”. I am in the same situation in that when I’m willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.
If I guessed your answer wrong, please correct me. Otherwise I’d want to fix the vocabulary problem somehow. I like using the word “preference” to include all the things that drive a person, so I’d prefer to say that your preference has two parts, perhaps an “amoral preference” which would mean what you were calling “preference” before, and “moral preference” would include what you were calling “morality” before, but perhaps we’d choose different words if you objected to those. The next question would be:
...and I have no clue what your answer would be, so I can’t continue the conversation past that point without straightforward answers from you.
Follow morality.
One way to illustrate this distinction is using Eliezer’s “murder pill”. If you were offered a pill that would reverse and/or eliminate a preference would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered the answer is probably no.
One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
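One way to make the murder-pill test concrete is a goal-preservation sketch: evaluate a proposed self-modification with your current utility function, applied to the world your modified self would bring about. The outcome labels and numbers below are invented for illustration.

```python
# Toy version of the "murder pill" test: the agent judges a self-modification
# by its CURRENT utility function, not by the modified self's future one.

def accepts_pill(current_utility, outcome_after_pill, payment=0.0):
    """Take the pill iff the current self prefers the resulting world (plus payment)."""
    return current_utility[outcome_after_pill] + payment > current_utility["status quo"]

u = {
    "status quo": 0.0,
    "I prefer chocolate ice cream to vanilla": -0.01,  # barely matters by current lights
    "I want people to be murdered": -1000.0,           # catastrophic by current lights
}

print(accepts_pill(u, "I prefer chocolate ice cream to vanilla", payment=1.0))  # True
print(accepts_pill(u, "I want people to be murdered", payment=1.0))             # False
```

The pill that flips an ice cream preference is cheap to accept; no feasible payment compensates for the murder pill, which is one way to cash out the difference in kind between the two preferences.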
If that’s a definition of morality, then morality is a subset of psychology, which probably isn’t what you wanted.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can’t be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we’re talking about, which hasn’t happened.
That’s not a definition of morality but an explanation of one reason why the “murder pill” distinction is important.
If that’s a valid argument, then logic, mathematics, etc are branches of psychology.
Are you saying there has never been any valid moral discourse or persuasion?
There’s a difference between changing your mind because a discussion led you to bound your rationality differently, and changing your mind because of suggestibility and other forms of sloppy thinking. Logic and mathematics are the former, if done right. I haven’t seen much non-sloppy thinking on the subject of changing preferences.
I suppose there could be such a thing—Joe designed an elegant high-throughput gas chamber, he wants to show the design to his friends, someone tells Joe that this could be used for mass murder, Joe hadn’t thought that the design might actually be used, so he hides his design somewhere so it won’t be used. But that’s changing Joe’s belief about whether sharing his design is likely to cause mass murder, not changing Joe’s preference about whether he wants mass murder to happen.
No, I’m saying that morality is a useless concept and that what you’re calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
But there are other stories where the preference itself changes. “If you approve of women’s rights, you should approve of gay rights”.
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights. Even if your argument above worked, I can’t envision a plausible reasoning system in which the argument is valid. Can you offer one? Otherwise, it only worked because the listener was confused, and we’re back to morality being a special case of psychology again.
Because I don’t know how to do moral arguments better. So far as I can tell, they always seems to wind up either being wrong, or not being moral arguments.
They are not going to arrive without overcoming opposition somehow.
Does that mean your “because gays/women want them” isn’t valid? Why offer it then?
Because you reject them?
But preference itself is influenced by reasoning and experience. The Preference theory focuses on proximate causes, but there are more distal ones too.
I am not and never was using “preference” to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences. That is not an argument for nihilism or relativism. You could equally have an epistemology where everything is talked about as belief, and the difference between true belief and false belief is ignored.
If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can’t do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn’t “made” important by some greater good.
There’s a choice you’re making here, differently from me, and I’d like to get clear on what that choice is and understand why we’re making it differently.
I have a bunch of things I prefer. I’d rather eat strawberry ice cream than vanilla, and I’d rather not design higher-throughput gas chambers. For me those two preferences are similar in kind—they’re stuff I prefer and that’s all there is to be said about it.
You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.
I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it’s even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That’s not what I’m talking about. What I’m talking about is the question of why the distinction is important to you. For example, I could define the preferences of mine that can be easily described without using the letter “s” to be “blort” preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you’d be left wondering “Why does he care?”
And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless. A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it.
The decision to use the concept of “morality” is like any other decision to define and use a concept. We should use it if the cost of talking about it is paid for by the added clarity it brings. If we don’t use the concept, that doesn’t change whether anyone wants to build high-throughput gas chambers—it just means that we don’t have the tools to talk about the difference in kind between ice cream flavor preferences and gas chamber building preferences. If there’s no use for such talk, then we should discard the concept, and if there is a use for such talk, we should keep the concept and try to assign a useful and clear meaning to it.
So what use is the concept of morality? How do people benefit from regarding ice cream flavor preferences as a different sort of thing from gas chamber building preferences?
I hope we’re agreed that there are two different kinds of things here—the strongly held preference to not design high-throughput gas chambers is a different kind of thing from the decision to label that preference as a moral one. The former influences the options available to a well-organized mass murderer, and the latter determines the structure of conversations like this one. The former is a value, the latter is a choice about how words label things. I claim that if we understand what is going on, we’ll all prefer to make the latter choice pragmatically.
You’ve written quite a lot of words but you’re still stuck on the idea that all importance is instrumental importance, importance for something that doesn’t need to be important in itself. You should care about morality because it is a value and values are definitionally what is important and what should be cared about. If you suddenly started liking vanilla, nothing important would change. You wouldn’t stop being you, and your new self wouldn’t be someone your old self would hate. That wouldn’t be the case if you suddenly started liking murder or gas chambers. You don’t now like people who like those things, and you wouldn’t now want to become one.
If we understand what is going on, we should make the choice correctly—that is, according to rational norms. If morality means something other than the merely pragmatic, we should not label the pragmatic as the moral. And it must mean something different because it is an open, investigatable question whether some instrumentally useful thing is also ethically good, whereas questions like “is the pragmatic useful” are trivial and tautologous.
You’re not getting the distinction between morality-the-concept-worth-having and morality-the-value-worth-enacting.
I’m looking for a useful definition of morality here, and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it’s strongly held, which doesn’t seem very interesting. If we’re going to have the distinction, I like Eugene’s proposal that a moral preference is one that’s worth talking about better, but we need to make the distinction in such a way that something doesn’t get promoted to being a moral preference just because people are easily deceived about it. There should be true things to say about it.
But what I actually gave as a definition is the concept of morality is the concept of ultimate value and importance. A concept which even the nihilists need so that they can express their disbelief in it. A concept which even social and cognitive scientists need so they can describe the behaviour surrounding it.
You are apparently claiming there is some important difference between a strongly held preference and something of ultimate value and importance. Seems like splitting hairs to me. Can you describe how those two things are different?
Just because you do have a strongly held preference, it doesn’t mean you should. The difference between true beliefs and fervently held ones is similar.
One can do experiments to determine whether beliefs are true, for the beliefs that matter. What can one do with a preference to figure out if it should be strongly held?
If that question has no answer, the claim that the two are similar seems indefensible.
What makes them matter?
Reason about it?
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
Can you give an example? I tried to make one at http://lesswrong.com/lw/5eh/what_is_metaethics/43fh, but it twisted around into revising a belief instead of revising a preference.
So it doesn’t matter if it only affects what you will do?
If I’m thinking for the purpose of figuring out my future actions, that’s a plan, not a belief, since planning is relevant when I haven’t yet decided what to do.
I suppose beliefs about other people’s actions are empirical.
I’ve lost the relevance of this thread. Please state a purpose if you wish to continue, and if I like it, I’ll reply.
Okay, that seems clear enough that I’d rather pursue that than try to get an answer to any of my previous questions, even if all we may have accomplished here is to trade Eugene’s evasiveness for Peter’s.
If you know that morality is the ultimate way of evaluating things, and you’re able to use that to evaluate a specific thing, I hope you are aware of how you performed that evaluation process. How did you get to the conclusion that it is morally wrong to design better gas chambers?
Execution techniques have improved over the ages. A guillotine is more compassionate than an axe, for example, since with an axe the executioner might need a few strokes, and the experience for the victim is pretty bad between the first stroke and the last. Now we use injections that are meant to be painless, and perhaps they actually are. In an environment where executions are going to happen anyway, it seems compassionate to make them happen better. Are you saying gas chambers, specifically, are different somehow, or are you saying that designing the guillotine was morally wrong too and it would have been morally preferable to use an axe during the time guillotines were used?
I’m pretty sure he means to refer to high-throughput gas chambers optimized for purposes of genocide, rather than individual gas chambers designed for occasional use.
He may or may not oppose the latter, but improving the former is likely to increase the number of murders committed.
Agreed, so I deleted my post to avoid wasting Peter’s time responding.
Let’s try a different approach.
I have spent some time thinking about how to apply the ideas of Eliezer’s metaethics sequence to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here, the distinction between preferences and biases is very arbitrary.
So the question becomes how do you separate which of your intuitions are preferences and which are biases?
Well, valid preferences look like they’re derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else.
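If “the proper way” means expected utility, the contrast between a proper interaction with uncertainty and a bias can be sketched in a few lines (the probabilities, utilities, and the rough logarithmic model of scope insensitivity below are invented for illustration):

```python
import math

# Expected utility: weight each possible world-state's utility by its
# probability. On this view, a bias is any systematic departure from this,
# e.g. scope insensitivity (valuing 1,000 saved birds about as much as 100,000).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Action A: save 100,000 birds with probability 0.1.
# Action B: save 1,000 birds for sure.
ev_A = expected_utility([(0.1, 100_000), (0.9, 0)])
ev_B = expected_utility([(1.0, 1_000)])
print(ev_A, ev_B)  # 10000.0 1000.0 -- expected utility favors A

# A scope-insensitive "warm glow" valuation (roughly logarithmic in the
# number saved) can reverse the ranking:
glow_A = expected_utility([(0.1, math.log10(100_000)), (0.9, 0)])
glow_B = expected_utility([(1.0, math.log10(1_000))])
print(glow_A > glow_B)  # False (roughly 0.5 vs 3.0)
```

The sketch only locates where the bias enters; it does not settle the harder question raised below of when a departure from expected utility is a bias rather than a valid preference.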
I don’t see how that question is relevant. I don’t see any good reason for you to dodge my question about what you’d do if your preferences contradicted your morality. It’s not like it’s an unusual situation—consider the internal conflicts of a homosexual Evangelist preacher, for example.
What makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or replace short-term preferences with long-term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
I don’t judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn’t. It’s true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you’re in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)
This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way and the fact that I went the wrong way isn’t evidence that I don’t want to go to the grocery store. That’s a confusing issue and I’m hoping we can assume for the purposes of discussion about morality that the people we’re talking about have true beliefs.
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
I’m not saying it’s a complete description of me. To describe how I think you’d also need a description of my possibly-false beliefs, and you’d also need to reason about uncertain knowledge of my preferences and possibly-false beliefs.
In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn’t a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. “I want to go north” might really be “I believe the grocery store is north of here and I want to go to the grocery store”. “I want to go to the grocery store” might be a further conflation of preference and belief, such as “I want to get some food” and “I believe I will be able to get food at the grocery store”. Eventually you can unpack all the beliefs and get the true preference, which might be “I want to eat something interesting today”.
That still doesn’t explain what the difference between your preferences and your biases is.
That’s rather startling. Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
It’s a term we’re defining because it’s useful, and we can define it in a way that it holds from birth forever afterward. Tim had the short-term preference dated around age 3 months to suck mommy’s breast, and Tim apparently has a preference to get clarity about what these guys mean when they talk about morality dated around age 44 years. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person’s preferences, and preferences that don’t change over time tend to be simpler, but if that’s contradicted by observation you settle for different preferences at different times.
I suppose I should have said “If a preference changes as a consequence of reasoning or reflection, it wasn’t a preference”. If the context of the statement is lost, that distinction matters.
So you are defining “preference” in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
I agree! Consider, for instance, taste in particular foods. I’d say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you’re semi-directly altering your preferences).
Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it’s pleasurable—but I think the proper level of unpacking is “experience drinking coffee”, not “experience pleasurable sensations”, because the experience being pleasurable is what makes it a preference in this case. That’s how it seems to me, at least. Am I missing something?
“The proper way” being built in as a part of the utility function and not (necessarily) being a simple sum of the multiplication of world-state values by their probability.
Um, no. Unless you are some kind of mutant who doesn’t suffer from scope insensitivity or any of the related biases, your uncertainty about the future doesn’t interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary.
Here is the example PhilGoetz gives in the article I linked above:
I believe I answered your other question elsewhere in the thread.
Rationality is the equivalent of normative morality: it is a set of guidelines for arriving at the opinions you should have, namely true ones. Epistemology is the equivalent of metaethics. It strives to answer the question “what is truth?”
People clearly have opinions they act on. What makes you think we need this so-called “rationality” to tell us which opinions to have?