The concept of truth is for utility, not utility for truth. To get them backwards is merely to be confused by the words themselves. It’s impossible to show you’ve dispensed with any concept, except to show that it isn’t useful for what you’re doing. That is what I’ve done. I’m non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.
Amanojack
How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?
I don’t know what exactly “justify” is supposed to mean, but I’ll interpret it as “show to be useful for helping me win.” In that case, it’s simply that certain types of sense-experience seem to have been a reliable guide for my actions in the past, for helping me win. That’s all.
To think of it in terms of assumptions and conclusions is to stay in the world of true/false or justified/unjustified, where we can only go in circles because we are putting the cart before the horse. The verbal concepts of “true” and “justified” probably originated as a way to help people win, not as ends to be pursued for their own sake. But since they were almost always correlated with winning, they became ends pursued for their own sake—essential ones! In the end, if you dissolve “truth” it just ends up meaning something like “seemingly reliable guidepost for my actions.”
Is this basically saying that you can tell someone else’s utility function by demonstrated preference? It sounds a lot like that.
Why not just phrase it in terms of utility? “Justification” can mean too many different things.
Seeing a black swan diminishes (and for certain applications, destroys) the usefulness of the belief that all swans are white. This seems a lot simpler.
Putting it in terms of beliefs paying rent in anticipated experiences, the belief “all swans are white” told me to anticipate that if I knew there was a black animal perched on my shoulder it could not be a swan. Now that belief isn’t as reliable a guidepost. If black swans are really rare I could probably get by with it for most applications and still use it to win at life most of the time, but in some cases it will steer me wrong—that is, cause me to lose.
So can’t this all be better phrased in more established LW terms?
I agree with this, if that makes any difference.
I missed this:
If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to “no, that’s not true”.
I’ll just decide not to follow the advice, or I’ll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don’t need to use the word “true” or any equivalent to do that. I can just say it didn’t work.
A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I’m just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don’t/wouldn’t care about “truth” in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you’re saying in what sense it’s supposed to be objective.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively. And who’s going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that’s a debate for another day.
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences.
I agree, if you mean things like, “If I now believe that she is really a he, I don’t want to take ‘her’ home anymore.”
I think moral values are ultimate because I can’t think of a valid argument of the form “I should do [X] because [Y]”.
Neither can I. I just don’t draw the same conclusion. There’s a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I’m not sure why you would think it is veiled disagreement, seeing as lukeprog’s whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of “incoherent to me” or someone else, so it’s not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English’s fault, and I don’t really care which it is, but it would be preferable for something to actually make it across the inferential gap.)
EDIT: Oops, posted too soon.
“Values can be defined as broad preferences concerning appropriate courses of action or outcomes”
So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn’t carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn’t matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.
At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun/deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.
But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn’t make my want for deliciousness “uninfluenced by personal feelings.”
In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn’t lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus “objective values.” But not for 1, because personal feelings are, well, personal.
If so, then it seems I am back to my initial response to lukeprog and ensuing brief discussion. In short, if it is only beliefs about objective facts that can be wrong, then I wouldn’t want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values in sense 1, only in sense 2.
I never said they had to be “immediately useful” (hardly anything ever is). Untrue beliefs might be pleasing, but when people argue about truth and falsehood it is not in order to prove that their beliefs are untrue so that they can enjoy believing them, so that’s not an objection either.
Then why not just call it “universal morality”?
Then you basically seem to be saying I should signal a certain morality if I want to get on well in society. Well, I do agree.
You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don’t-agree-with-it.
Zorg said the same thing about his pan-galactic ethics.
I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.
Did you even read the post we’re commenting on?
That doesn’t affect anything. You still have no proof for the revised version.
Wait, you want proof that getting what I want is what I care about most?
Other people out there in the non-existent Objective World?
Read what I wrote again.
I don’t think moral anti-realists are generally immoral people.
Read.
“To speak of there being something/nothing out there is meaningless to me unless I can see why to care.”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
Note the bold.
Whose language? What language?
English, and all the rest that I know of.
If you think all language is a problem, what do you intend to replace it with?
Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.
It refers to the stuff that doesn’t go away when you stop believing in it.
If so, I suggest “permanent” as a clearer word choice.
“Should” for what purpose?
Believing in truth is what rational people do.
Winning is what rational people do. We can go back and forth like this.
Which is good because...?
It benefits me, because I enjoy helping people. See, I can say, “So what?” in response to “You’re wrong.” Then you say, “You’re still wrong.” And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I can see how it could ever, possibly, potentially affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for “being wrong.”
I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn’t care about truth at all, the process probably isn’t going to work.
Sure, people usually argue whether something is “true or false” because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is understandably unusual for someone to say they don’t care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed—very unusually—is claimed to not have any effect on such things, “true” and “false” become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can’t, I will happily discard them.
So being wrong and not caring you are in the wrong is not the same as being right.
Yet I still don’t care, and by your own admission I suffer not in the slightest from my lack of caring.
I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.
Zorg says that getting pangalacticism right matters more than anything. He cannot tell us why it matters, but boy it really does matter.
Morality matters more than anything else by definition.
Which would be? If you refer me to the dictionary again, I think we’re done here.
Why do I think that is a useful phrasing? That would be a long post, but EY captured the essential idea in Making Beliefs Pay Rent.
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
That’s what I was responding to.
What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn’t: Morality is, by definition.
Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.
And would it be true that it is non-useful? Since to assert P is to assert “P is true”, truth is a rather hard thing to eliminate.
You just eliminated it: If to assert P is to assert “P is true,” then to assert “P is true” is to assert P. We could go back and forth like this for hours.
But you still haven’t defined objective value.
Dictionary says, “Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.”
How can a value be objective? EDIT: Especially since a value is a personal feeling. If you are defining “value” differently, how?
I would indeed prefer it if other people had certain moral sentiments. I don’t think I ever suggested otherwise.
In sum, you seem to be saying that morality involves arithmetic, and being wrong about arithmetic can hurt me, so being wrong about morality can hurt me.
You’re right, I think I’m confused about what you were talking about, or I inferred too much. I’m not really following at this point either.
One thing, though, is that you’re using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God’s will, as a way to get along with others, etc. That’ll tend to cause some confusion. A good heuristic is, “Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it).”