Eliezer didn’t really miss anything. What you’re asking boils down to, “If I value happiness more than truth, should I delude myself into holding a false belief that has no significant consequence except making me happy?”
The obvious answer to that question is, “Yes, unless you can change yourself so that your happiness does not require belief in something that doesn’t exist”.
The second option is something that Eliezer addressed in Joy in the Merely Real. He didn’t address the first option, self-deception, because this website is about truth-seeking, and anyway, most people who want to deceive themselves don’t need help to do it.
I was embarrassed for a while (about 25 minutes after reading your comment and Ciphergoth’s) that my ideas would be reduced to the clichés you are apparently responding to. But then I realized I don’t need to take it personally; I just need to qualify what I mean.
First, there’s nothing from my question to Eliezer to indicate that I value happiness more than truth, or that I value happiness at all. There are things I value more than truth; or rather, I only find it possible to value truth above all else within a system that is coherent and consistent and thus allows a meaningful concept of truth.
If “feels bereft of meaning” doesn’t mean that it makes you unhappy, the only other interpretation that even begins to make sense to me is that an important part of your terminal values is entirely dependent on the truth of theism.
To experience what that must feel like, I try to imagine how I would feel if I discovered that solipsism is true and that I have no way of ever really affecting anything that happens to me. It would make me unhappy, sure, but more significantly it would also make my existence meaningless in the very real sense that while the desires that are encoded in my brain (or whatever it is that produces my mind) would not magically cease to exist, I would have to acknowledge that there is no possible way for my desires to ever become reality.
Is this closer to what you’re talking about? If it isn’t, I’m going to have to conclude that either I’m a lot stupider than I thought, or you’re talking about a square circle, something impossible.
Orthonormal writes that in the absence of a Framework of Objective Value, he found he still cared about things (the welfare of friends and family, the fate of the world, the truth of his beliefs, etc).
In contrast, I find my caring begins fading away. Some values go quickly and go first—the fate of the world, the truth of my own beliefs—but other values linger, long enough for me to question the validity of a worldview that would leave me indifferent to my family.
Orthonormal also writes that in response to my hypothetical question about purpose,
If asked, they might answer along the lines of “so that more people can exist and be happy”; “so that ever more interesting and fun and beautiful patterns can come into being”; “so that we can continue to learn and understand more and more of the strange and wonderful patterns of reality”, etc. None of these are magical answers;
And none of these are terminal values for me. Existence, happiness, fun and beauty are pretty much completely meaningless to me in and of themselves. In fact, the thing that causes me to hesitate when I might feel indifference to my family is a feeling of responsibility.
It occurs to me that satisfying my moral responsibility might be a terminal value for me. If I have none; if it really is the case that I have no moral responsibility to exist and love, I’d happily not exist and not love.
Orthonormal, yourself, Eliezer, all seem to argue that value nihilism just doesn’t happen. Others concede that nihilism does happen, but that this doesn’t bother them or that they’d rather sit with an uncomfortable truth than be deluded. So perhaps it’s the case that people are intrinsically motivated in different ways, or that people have different thresholds for how much lack of meaning they can tolerate. Or other ‘solutions’ come to mind.
It seems to me that you conflate the lack of an outside moral authority with a lack of meaning to morality. Consider “fairness”. Suppose 3 people with equal intrinsic needs (e.g. equal caloric reserves and need for food) put in an equal amount of work on trapping a deer, with no history of past interaction between any of them. Fairness would call for each of them to receive an equal share of the deer. A 90/9/1 split is unfair. It is unfair even if none of them realize it is unfair; if you had a whole society where women got 10% the wages of men, it wouldn’t suddenly become massively unfair at the first instant someone pointed it out. It is just that an equal split is the state of affairs we describe by the word “fair” and to describe 90/9/1 you’d need some other word like “foograh”.
In the same sense, something can be just as fair, or unfair, without there being any God, nor yet somehow “the laws of physics”, to state with controlling and final authority that it is fair.
Actually, even God’s authority can’t make a 90/9/1 split “fair”. A God could enforce the split, but not make it fair.
So who needs an authority to tell us what we should do, either? God couldn’t make murder right—so who needs God to make it wrong?
Thank you for your effort to understand. However, I don’t believe this is in the right direction. I’m afraid I misunderstood or misrepresented my feelings about moral responsibility.
For thoroughness, I’ll try to explain it better here, but I don’t think it’s such a useful clue after all. I hear physical materialists explaining that they still feel value outside an objective value framework, naturally and spontaneously. I was reporting that I didn’t -- for some set of values, the values just seemed to fade away in the absence of an objective value framework. However, I admit that some values remained. The first value to obviously remain was a sense of moral responsibility, and it was that value that kept me faithful to the others. So perhaps it is a so-called ‘terminal value’; in any case, it was the limit where some part of myself said “if this is Truth, then I don’t value Truth”.
The reason I feel value outside of an objective value framework is that I taught myself over weeks and months to do so. If a theist had the rug pulled out from under them, morally speaking, then they might well be completely bewildered about how to act and how to think. I am sure this would cause great confusion and pain. The process of moving from a theistic worldview to a materialistic worldview is not some flipped switch; a person has to teach themselves new emotional and procedural reactions to common everyday problems. The way to do this is to start from the truth as best you can approximate it and train yourself to have emotional reactions that are in accordance with the truth. There is no easy way to do this, but I personally find it much easier to have a happy life now that I have trained myself to feel emotions in relation to facts rather than fictions.
some part of myself said “if this is Truth, then I don’t value Truth”.
I’m not sure there’s much more to discuss with you on the topic of theism, then; the object-level arguments are irrelevant to whether you believe. (There are plenty of other exciting topics around here, of course.) All I can do is attempt to convince you that atheism really isn’t what it feels like from your perspective.
EDIT: There was another paragraph here before I thought better of it.
All I can do is attempt to convince you that atheism really isn’t what it feels like from your perspective.
Perhaps we could say “needn’t be what it feels like from your perspective”. It clearly is that feeling for some. I wonder to what extent their difficulty is, in fact, an external-tribal-belief shaped hole in their neurological makeup.
All I can do is attempt to convince [byrnema] that atheism really isn’t what it feels like from your perspective.
I’m not sure that’s possible. As someone who’s been an atheist for at least 30 years, I’d say atheism does feel like that, unless there’s some other external source of morality to lean on.
From the back and forth on this thread, I’m now wondering if there’s a major divide between those who mostly care deeply without needing a reason to care, and those who mostly don’t.
I’m not surprised to encounter people here who find nihilism comfortable, or at least tolerable, for that reason. People who find it disabling—who can’t care without believing that there’s an external reason to care—not so much.
Orthonormal, yourself, Eliezer, all seem to argue that value nihilism just doesn’t happen.
That’s a rather poor interpretation. I pointed out from my own experience that nihilism is not a necessary consequence of leaving religion. I swear to you that when I was religious I agonized over my fear of nihilism, that I loved Dostoyevsky and dreaded Nietzsche, that I poured out my soul in chapels and confessionals time and time again. I had a fierce conscience then, and I still have one now. I feel the same emotional and moral passions as before; I just recognize them as a part of me rather than a message from the heart of the cosmos— I don’t need permission from the universe to care about others!
I don’t deny that others have adopted positions of moral nihilism when leaving a faith; I know several of them from my philosophy classes. But this is not necessary, and not rational; therefore it is not a good instrumental excuse to maintain theism.
Now, I cannot tell you what you actually feel; but consider two possibilities in addition to your own:
What you experience may be an expectation of your values vanishing rather than an actual attenuation of them. This expectation can be mistaken!
A temporary change in mood can also affect the strength of values, and I did go through a few months of mild depression when I apostasized. But it passed, and I have since felt even better than I had before it.
This might turn out to be vacuous, but it seems useful to me. Here goes nothing:
Do you have a favorite color? Or a favorite number, or word, or shirt, or other arbitrary thing? (Not something that’s a favorite because it reminds you of something else, or something that you like because it’s useful; something that you like just because you like it.)
Assuming you do, what objective value does it have over other similar things? None, right? Saying that purple is a better color than orange, or three is a better number than five (to use my own favorites) simply doesn’t make sense.
But, assuming you answered ‘yes’ to the first question, you still like the thing, otherwise it wouldn’t be a favorite. It makes sense to describe such things as fun or beautiful, and to use the word ‘happiness’ to describe the emotion they evoke. And you can have favorites among any type of things, including moral systems. Rationality doesn’t mean giving those up—they’re not irrational, they’re arational. (It does mean being careful to make sure they don’t conflict with each other or with reality, though—thinking that purple is somehow ‘really’ better than orange would be irrational.)
And none of these are terminal values for me. Existence, happiness, fun and beauty are pretty much completely meaningless to me in and of themselves.
You are not really entitled to your own stated values. You can’t just assert that beauty is meaningless to you and through this act make it so. If beauty is important to you, being absolutely convinced that it’s not won’t make it unimportant. You are simply wrong and confused about your values, at which point getting a better conscious understanding of what is “morality” becomes even more important than if you were naive and relied on natural intuition alone.
I’m not sure to what extent terminal values can be chosen or not, but it seems to me that (the following is slightly different from what you were describing) if you become absolutely convinced that your values aren’t important, then it would be difficult to continue thinking your values are important. Maybe the fact that I can’t be convinced of the unimportance of my values explains why I can’t really be convinced there’s no Framework of Objective Value, since my brain keeps outputting that this would make my values unimportant. But maybe, by the end of this thread, my brain will stop outputting that. I’m willing to do the necessary mental work.
By the way, Furcas seemed to understand the negation of value I’m experiencing via an analogy of solipsism.
If we had been Babyeaters, we would think that eating babies is the right-B thing to do. This doesn’t in any way imply we should be enthusiastic or even blasé about baby-eating, because we value the right thing, not the right-B thing that expresses the Babyeaters’ morality!
I understand that you can’t imagine a value being important without it being completely objective and universal. But you can start by admitting that the concept of important-to-you value is at least distinct from the concept of an objective or universal value!
Imagine first that there is an objective value that you just don’t care about. Easy, right? Next, imagine that there is something you care about, deeply, that just isn’t an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.
Imagine first that there is an objective value that you just don’t care about. Easy, right? Next, imagine that there is something you care about, deeply, that just isn’t an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.
This is the best (very) short guide to naturalistic metaethics I’ve read so far.
This is very helpful. The only thing I would clarify is that the lesson I need to learn is that importance ≠ objectivity. (I’m not at all concerned about universality.)
I understand that you can’t imagine a value being important without it being completely objective [...]. But you can start by admitting that the concept of important-to-you value is at least distinct from the concept of an objective or universal value!
I’m not sure. With a squirrel in the universe, I would have thought the universe was better with more nuts than with fewer. I can understand there being no objective value, but I can’t understand objective value being causally or meaningfully distinct from subjective value.
Next, imagine that there is something you care about, deeply, that just isn’t an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.
Hm. I have no problem with ‘permission’. I just find that I don’t care about caring about it. If it’s not actually horrible, then let the universe fill up with it! My impression is that intellectually (not viscerally, of course) I fail to weight my subjective view of things. If some mathematical proof really convinced me that something I thought subjectively horrible was objectively good, I think I would start liking it.
(The only issue, that I mentioned before, is that a sense of moral responsibility would prevent me from being convinced by a mathematical proof to suddenly acquire beliefs that would cause me to do something I’ve already learned is immoral. I would have to consider the probability that I’m insane or hallucinating the proof, etc.)
I can barely imagine value nihilism, but not a value nihilism from which God or physics could possibly rescue you. If you think that your value nihilism has something to do with God, then I’m going to rate it as much more likely that you suffer from basic confusion, than that the absence of God is actually responsible for your values collapse whereas a real God could have saved it and let you live happily ever after just by ordering you to have fun.
I think the basic problem is that evolution re-used some of the same machinery to implement both beliefs and values. Our beliefs reflect features of the external world, so people expect to find similar external features corresponding to their values.
Actually searching for these features will fail to produce any results, which would be very dismaying as long as the beliefs-values confusion remains.
The God meme acts as a curiosity stopper; it says that these external features really do exist, but you’re too stupid to understand all the details, so don’t bother thinking about it.
Exactly! I think this is exactly the sort of ‘solution’ that I hoped physical materialism could propose.
I’d have to think about whether the source of the problem is what Peter has guessed (whether it’s this particular confusion), but from the inside it feels exactly like a hard-wiring problem (given by evolution) that I can’t reconcile.
As I wrote above in this thread, I agree that there’s not any clear way that the existence of God could solve this problem.
[Note: I took out several big chunks about how religions address this problem, but I understand people here don’t want to hear about religion discussed in a positive light. But the relevant bit:]
Peter de Blanc wrote:
The God meme acts as a curiosity stopper; it says that these external features really do exist, but you’re too stupid to understand all the details, so don’t bother thinking about it.
And this seems exactly right. Without the God meme telling me that it all works out somehow—for example, somehow the subjective/objective value problem works out—I’m left in a confused state.
What if the existence of a Framework of Objective Value wasn’t the only thing you were wrong about? What if you are also wrong in your belief that you need this Framework in order to care about the things that used to be meaningful to you? What if this was simply one of the many things that your old religious beliefs had fooled you about?
It is possible to be mistaken about one’s self, just as we can be mistaken about the rest of reality. I know it feels like you need a Framework, but this feeling is merely evidence, not mathematical proof. And considering the number of ex-believers who used to feel as you do and who now live a meaningful life, you have to admit that your feeling isn’t very strong evidence. Ask yourself how you know what you think you know.
I would be quite happy to be wrong. I can’t think of a single reason not to wish to be wrong. (Not even the sting of a drop in status; in my mind, it would improve my status to have presented a problem that actually has a solution instead of one that just leads in circles.)
Ask yourself how you know what you think you know.
Through the experiment of assimilating the ideas of Less Wrong over the course of a year, I found my worldview changing and becoming more and more bereft of meaning as it seemed more and more logical that value is subjective. This means that no state of the universe is objectively any “better” than any other state, there’s no coherent notion of progress, etc. And I can actually feel that pretty well; right on the edge of my consciousness, an awareness that nothing matters, that I’m just a program running in some physical reality. I feel no loyalty or identity with this program; it just exists. And I find it hard to believe I ought to go there; some intuition tells me this isn’t what I’m supposed to be learning. I’ve lost my way somehow.
This reminds me of the labyrinth metaphor. Where the hell am I? Why am I the only one to find this particular dead end? Should I really listen to my friends on the walkie-talkie saying, ‘keep going, it’s not really a deep bottomless chasm!’, or shouldn’t I try and describe it better to make certain you know where I’m at?
When I first gave up the idea of objective morality I also plummeted into a weird sort of ambivalence. It lasted for a few years. Finally, I confronted the question of why I even bothered continuing to exist. I decided I wanted to live. I then decided I needed an arbitrary guiding principle in life to help me maintain that desire. I decided I wanted to live as interesting a life as possible. That was my only goal in life, and it was only there to keep me wanting to live.
I pursued that goal for a few years, rather half-heartedly. It was enough to keep me going, but not much more. Then, one day, rather suddenly, I fell completely in love. Really, blubberingly, stupidly in love. I was completely consumed and couldn’t have cared less if it was objectively meaningless. A week later, I found out the girl was also in love with me, and I promptly stopped loving her.
Meditating on the whole thing afterwards, I realized I hadn’t been in love, but had experienced some other, probably quite disgusting emotion. I had been pulled up from the abyss of subjectivity by the worst kind of garbage! It felt like the punchline of a zen koan. I realized that wallowing in ambivalence was just as worthless as embracing the stupidest purpose, and became ambivalent to the lack of objectivity itself.
After that I began rediscovering and embracing my natural desires. A few years of that and I finally settled down into what I consider a healthy person. But, to this day, I still occasionally feel the fuzzy awareness at the edge of my consciousness that everything is meaningless. And, when I do, I just don’t care. So what if everything I love is objectively worthless? Meaninglessness itself is meaningless, so screw it!
I realize this whole story is probably bereft of any sort of rational take away, but I thought I’d share anyway, in the hopes of at least giving you some hope. Failing that, it was at least nice to take a break from rationality to write about something totally irrational.
Why am I the only one to find this particular dead end?
You are not.
I cannot remember a time I genuinely believed in God, though I was raised Baptist by a fundamentalist believer. I don’t know why I didn’t succumb. When I was a teen, I didn’t really bother doing anything I didn’t want to do, except to avoid immediate punishment. All of my goals were basically just fantasies. Sometime during the 90s I applied Pascal’s Wager to objective morality and began behaving as though it existed. It seemed clear that a more intelligent goal-seeking being than I might well discover some objective morality whose argument I couldn’t understand, and that working toward an objective morality (which is the same thing as a universal top goal, since “morality” consists of statements about goals) requires that I attempt to maximize my ability to do so, for when it’s explained what it is. This is basically the same role you’re using God for, if I understand correctly.
Unfortunately, as my hope for a positive singularity dwindles, so does my level of caring about, basically, everything not immediately satisfying to me. I remind myself that the Wager still holds even with a very small chance, but very small chance persistently feels like zero chance.
Anyway, I don’t have a solution, but I wanted to point out that this problem is felt by at least some other people as well, and doesn’t necessarily have anything to do with God, per se. I suppose some might suggest that I’ve merely substituted a sufficiently intelligent goal-seeker for “God”...
If you’re still concerned about that after all the discussion about it, it might be a good idea to get some more one-on-one help. Off the top of my head I’d suggest finding a reputable Buddhist monk/master/whatever to work with: I know that meditation sometimes evokes the kind of problem you’re afraid of encountering, so they should have some way of dealing with that.
This means that no state of the universe is objectively any “better” than any other state, there’s no coherent notion of progress, etc.
This is wrong. Some states are really objectively better than other states. The trick is, “better” originates from your own preference, not God-given decree. You care about getting the world to be objectively better, while a pebble-sorter cares about getting the world to be objectively more prime.
Rather, it is using a different definition of ‘better’ (or, you could argue, ‘objectively’) than you are. Byrnema’s usage may not be sophisticated or the most useful way to carve reality but it is a popular usage and his intended meaning is clear.
Some states are really objectively better than other states. The trick is, “better” originates from your own preference, not God-given decree.
That is the framework I use. I agree that byrnema could benefit from an improved understanding of this kind of philosophy. Nevertheless, byrnema’s statement is a straightforward use of language that is easy to understand, trivially true and entirely unhelpful.
I’m pretty sure I can’t be confused about the real-world content of this discussion, but we are having trouble communicating. As a way out, you could suggest reasonable interpretations of “better” and “objectively” that make byrnema’s “no state of the universe is objectively any “better” than any other state” into a correct statement.
I’m pretty sure I can’t be confused about the real-world content of this discussion
You appear to have a solid understanding of the deep philosophy. Your basic claims in the two ancestors are wrong, and trivially so, at about the level of language parsing and logic.
It doesn’t work for almost any reasonable definition, because you’d need “better” to mean “absolute indifference”
Far from being required, “absolute indifference” doesn’t even work as a meaning in the context: “No state of the universe is objectively any “absolute indifference” than any other state”. If you fixed the grammar to make the meaning fit, it would make the statement wrong.
As a way out, you could suggest reasonable interpretations of “better” and “objectively” that make byrnema’s “no state of the universe is objectively any “better” than any other state” into a correct statement.
I’m not comfortable making any precise descriptions for a popular philosophy that I think is stupid (my way of thinking about the underlying concepts more or less matches yours). But it would be something along the lines of defining “objectively better” to mean “scores high in a description or implementation of betterness outside of the universe, not dependent on me, etc”. Then, if there is in fact no such ‘objectively better’ thingumy (God, silly half-baked philosophy of universal morality, etc), people would say stuff like byrnema did and it wouldn’t be wrong, just useless.
“No state of the universe is objectively any “absolute indifference” than any other state”.
“According to a position of absolute indifference, no state of the universe is preferable to any other.”
I’m not comfortable making any precise descriptions for a popular philosophy that I think is stupid
That “stupid” for me got identified as “incorrect”, not as a way to correctly interpret byrnema’s phrase to make it right (but a reasonable guess about the way the phrase came to be).
“According to a position of absolute indifference, no state of the universe is preferable to any other.”
And this I think is why people find moral non-cognitivism so easy to misunderstand—people always try to parse it to understand which variety of moral realism you subscribe to.
“There is no final true moral standard.”
“Ah, so you’re saying that all acts are equally good according to the final true moral standard?”
“No, I’m saying that there is no final true moral standard.”
“Oh, so all moral standards are equally good according to the final true moral standard?”
“No, I’m saying that there is no final true moral standard.”
“Oh, so all moral judgements are equally good according to the final true moral standard?”
I like to use the word “transcendent”, as in “no transcendent morality”, where the word “transcendent” is chosen to sound very impressive and important but not actually mean anything.
However, you can still be a moral cognitivist and believe that moral statements have truth-values, they just won’t be transcendent truth-values. What is a “transcendent truth-value”? Shrugs.
It’s not like “transcendental morality” is a way the universe could have been but wasn’t.
Yes, I think that transcendent is a great adjective for this concept of morality I’m attached to. I like it because it makes it clear why I would label the attachment ‘theistic’ even though I have no attachment that I’m aware of to other necessarily ‘religious’ beliefs.
Since I do ‘believe in’ physical materialism, I expect science to eventually explain that morality can transcend the subjective/objective chasm in some way or that if morality does not, to identify whether this fact about the universe is consistent or inconsistent with my particular programming. (This latter component specifically is the part I was thinking you haven’t covered; I can only say this much now because the discussion had helped develop my thoughts quite a bit already.)
“According to a position of absolute indifference, no state of the universe is preferable to any other.”
That is a description that you can get to using your definition of ‘better’ (approximately, depending on how you prefer to represent differences between human preferences). It still completely does away with the meaning Byrnema conveyed.
That “stupid” for me got identified as “incorrect”, not as a way to correctly interpret byrnema’s phrase to make it right (but a reasonable guess about the way the phrase came to be).
That was clear. But no matter how superior our philosophy we are still considering straw men if we parse common language with our own idiosyncratic variant. We must choose between translating from their language, forcing them to use ours, ignoring them or, well, being wrong a lot.
This thread between you and Vladimir_Nesov is fascinating, because you’re talking about exactly what I don’t understand. Allusions to my worldview being unsophisticated, not useful, stupid and incorrect fill me with the excitement of anticipation that there is a high probability of there being something to learn here.
Some comments:
(1) It appears that the whole issue of what I meant when I wrote, “no state of the universe is objectively any “better” than any other state,” has been resolved. We agree that it is trivially true, useless, and on some level insane to be concerned with.
(2) Vladimir_Nesov wrote, “You care about getting the world to be objectively better [in the way you define better], while a pebble-sorter cares about getting the world to be objectively more prime [the way he defines better].”
This is a good point to launch from. Suppose it is true that there is no objective ‘better’, so that the universe is no more improved by me changing it in ways that I think are better or by the pebble-sorter making things more prime, than either of us doing nothing or not existing. Then I find I don’t place any value on whether we are subjectively improving the universe in our different ways, doing nothing or not existing. All of these things would be equivalently useless.
For what it’s worth, I understand that this value I’m lacking—to persist in caring about my subjective values even if they’re not objectively substantiated—is a subjective value. While I seem to lack it, you guys could very reasonably have this value in great measure.
So. Is this a value I can work on developing? Or is there some logical fallacy I’m making that would make this whole dilemma moot once I understood it?
This is connected to the Rebelling Within Nature post: have you considered that your criterion “you shouldn’t care about a value if it isn’t objective”, is another value that is particular to you as a human? A simple Paperclip Maximizer wouldn’t have the criterion “stop caring about paperclips if it turns out the goodness of paperclips isn’t written into the fabric of the universe”. (Nor would it have the criterion of respecting other agents’ moralities, another thing which you value.)
This is a good point to launch from. Suppose it is true that there is no objective ‘better’, so that the universe is no more improved by me changing it in ways that I think are better or by the pebble-sorter making things more prime, than either of us doing nothing or not existing. Then I find I don’t place any value on whether we are subjectively improving the universe in our different ways, doing nothing or not existing. All of these things would be equivalently useless.
Have a look at Eliezer’s posts on morality and perhaps ‘subjectively objective’. (But also consider Adelene’s suggestion on looking into whether your dissociation is the result of a neurological or psychological state that you could benefit from fixing.)
For what it’s worth, I understand that this value I’m lacking—to persist in caring about my subjective values even if they’re not objectively substantiated—is a subjective value.
Meanwhile, I think you do, in fact, have this subjective measure: not because you must for any philosophical reason, but because your behaviour and descriptions indicate that you do subjectively care about your subjective values, even though you don’t think you do. To put it another way, your subjective values are objective facts about the state of the universe and your part thereof, and I believe you are wrong about them.
Some states are really objectively better than other states. The trick is, “better” originates from your own preference, not God-given decree.
Is there a sense in which you did not just say “The trick is to pretend that your subjective preference is really a statement about objective values”? If by “objectively better” you don’t mean “better according to a metric that doesn’t depend on subjective preferences”, then I think you may be talking past the problem.
By “objectively better” I mean that given an ordering called “better”, it is an objective fact that one state is “better” than another state. The ordering “better” is constructed from your own decision-making algorithm, you could say from subjective preference. This ordering however is not a matter of personal choice: you can’t decide what it is, you only decide given what it already happens to be. It is only “subjective” in the sense that different agents have different preference.
I can’t quite follow that description. “More prime” really is an objective description of a yardstick against which you can measure the world. So is “preferred by me”. But to use “objectively better” as a synonym for “preferred by byrnema” seems to me to invite confusion.
But to use “objectively better” as a synonym for “preferred by byrnema” seems to me to invite confusion.
Yes it does, and I took your position recently when this terminological question came up, with Eliezer insisting on the same usage that I applied above and most of everyone else objecting to that as confusing (link to the thread—H/T to Wei Dai).
The reason to take up this terminology is to answer the specific confusion byrnema is having: that no state of the world is objectively better than another, with the implied conclusion that there is nothing to care about.
“Preferred by byrnema” is bad terminology because of another confusion, where she seems to assume that she knows what she really prefers. So I could say “objectively more preferred by byrnema”, but that can be misinterpreted as “objectively more the way byrnema thinks it should be”, which is circular as the foundation for byrnema’s own decision-making, just as with a calculator Y that, when asked “2+2=?”, thinks of an answer in the form “What will calculator Y answer?” and then prints out “42”, which thus turns out to be a correct answer to “What will calculator Y answer?”. Through the intermediary of the concept of “better”, it’s easier to distinguish what byrnema really prefers (but can’t know in detail) from what she thinks she prefers, or knows of what she really prefers (or what is “better”).
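The calculator analogy can be made concrete with a short illustrative sketch (the function names here are hypothetical, purely for exposition): a calculator whose question is “what will I answer?” makes any output trivially correct, so its output carries no information, whereas an honest calculator’s output is pinned down by arithmetic.

```python
def honest_calculator(question):
    # The answer is constrained by arithmetic: exactly one output is correct.
    if question == "2+2=?":
        return 4
    raise ValueError("unknown question")

def calculator_y():
    # Calculator Y's question is "What will calculator Y answer?".
    # Whatever it returns is, by construction, a correct answer to that
    # question -- so the output tells you nothing at all.
    answer = 42  # any value would have been equally "correct"
    return answer

assert honest_calculator("2+2=?") == 4   # right for exactly one reason
assert calculator_y() == 42              # "right" only because it said so
```

The point of the sketch: “objectively more preferred by byrnema”, read as “whatever byrnema decides”, is circular in the same way calculator Y is; the concept of “better” plays the role that arithmetic plays for the honest calculator.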
This comment probably does a better job at explaining the distinction, but it took a bigger set-up (and I’m not saying anything not already contained in Eliezer’s metaethics sequence).
Yes it does, and I took your position recently when this terminological question came up, with Eliezer insisting on the same usage that I applied above and most of everyone else objecting to that as confusing (I can’t think of a search term, so no link).
It was in the post for asking Eliezer Questions for his video interview.
The reason to take up this terminology is to answer the specific confusion byrnema is having: that no state of the world is objectively better than another, with the implied conclusion that there is nothing to care about.
It is one thing to use an idiosyncratic terminology yourself but quite another to interpret other people’s more standard usages according to your definitions and respond to them as such. The latter is attacking a Straw Man and the fallaciousness of the argument is compounded with the pretentiousness.
It was in the post for asking Eliezer Questions for his video interview.
Nope, can’t find my comments on this topic there.
It is one thing to use an idiosyncratic terminology yourself but quite another to interpret other people’s more standard usages according to your definitions and respond to them as such. The latter is attacking a Straw Man and the fallaciousness of the argument is compounded with the pretentiousness.
I assure you that I’m speaking in good faith. If you see a way in which I’m talking past byrnema, help me to understand.
I don’t doubt that. I probably should consider my words more carefully so I don’t cause offence except when I mean to. Both because it would be better and because it is practical.
Assume I didn’t use the word ‘pretentious’ and instead stated that “when people go about saying people are wrong I expect them to have a higher standard of correctness while doing so than I otherwise would.” If you substituted “your thinking is insane” for “this is wrong” I probably would have upvoted.
But to use “objectively better” as a synonym for “preferred by byrnema” seems to me to invite confusion.
I suspect it may be even more confusing if you pressed Vladimir into territory where his preferences did not match those of byrnema. I would then expect him to make the claim “You care about getting the world to be objectively , I care about getting the world to be objectively better, while a pebble sorter cares about getting the world to be objectively more prime”. But that line between ‘sharing’ better around and inventing words like booglewhatsit is often applied inconsistently, so I cannot be sure of Vladimir’s take.
A free-floating belief system doesn’t have to be double-think. In fact, the whole point of it would be to fill gaps because you would like a coherent, consistent world view even when one isn’t given to you. I think that continuing to care about subjective value knowing that there is no objective value requires a disconcerting level of double-think.
Eliezer didn’t really miss anything. What you’re asking boils down to, “If I value happiness more than truth, should I delude myself into holding a false belief that has no significant consequence except making me happy?”
The obvious answer to that question is, “Yes, unless you can change yourself so that your happiness does not require belief in something that doesn’t exist”.
The second option is something that Eliezer addressed in Joy in the Merely Real. He didn’t address the first option, self-deception, because this website is about truth-seeking, and anyway, most people who want to deceive themselves don’t need help to do it.
I was embarrassed for a while (about 25 minutes after reading your comment and Ciphergoth’s) that my ideas would be reduced to the clichés you are apparently responding to. But then I realized I don’t need to take it personally; I just need to qualify what I mean.
First, there’s nothing from my question to Eliezer to indicate that I value happiness more than truth, or that I value happiness at all. There are things I value more than truth; or rather, I only find it possible to value truth above all else within a system that is coherent and consistent and thus allows a meaningful concept of truth.
If “feels bereft of meaning” doesn’t mean that it makes you unhappy, the only other interpretation that even begins to make sense to me is that an important part of your terminal values is entirely dependent on the truth of theism.
To experience what that must feel like, I try to imagine how I would feel if I discovered that solipsism is true and that I have no way of ever really affecting anything that happens to me. It would make me unhappy, sure, but more significantly it would also make my existence meaningless in the very real sense that while the desires that are encoded in my brain (or whatever it is that produces my mind) would not magically cease to exist, I would have to acknowledge that there is no possible way for my desires to ever become reality.
Is this closer to what you’re talking about? If it isn’t, I’m going to have to conclude that either I’m a lot stupider than I thought, or you’re talking about a square circle, something impossible.
It is much closer to what I’m talking about.
Orthonormal writes that in the absence of a Framework of Objective Value, he found he still cared about things (the welfare of friends and family, the fate of the world, the truth of his beliefs, etc.).
In contrast, I find my caring begins fading away. Some values go quickly and go first—the fate of the world, the truth of my own beliefs—but other values linger, long enough for me to question the validity of a worldview that would leave me indifferent to my family.
Orthonormal also writes, in response to my hypothetical question about purpose, that things like existence, happiness, fun and beauty are reason enough.
And none of these are terminal values for me. Existence, happiness, fun and beauty are pretty much completely meaningless to me in and of themselves. In fact, the thing that causes me to hesitate when I might otherwise feel indifference to my family is a feeling of responsibility.
It occurs to me that satisfying my moral responsibility might be a terminal value for me. If I have none; if it really is the case that I have no moral responsibility to exist and love, I’d happily not exist and not love.
Orthonormal, yourself, Eliezer, all seem to argue that value nihilism just doesn’t happen. Others concede that nihilism does happen, but that this doesn’t bother them or that they’d rather sit with an uncomfortable truth than be deluded. So perhaps it’s the case that people are intrinsically motivated in different ways, or that people have different thresholds for how much lack of meaning they can tolerate. Or other ‘solutions’ come to mind.
It seems to me that you conflate the lack of an outside moral authority with a lack of meaning to morality. Consider “fairness”. Suppose 3 people with equal intrinsic needs (e.g. equal caloric reserves and need for food) put in an equal amount of work on trapping a deer, with no history of past interaction between any of them. Fairness would call for each of them to receive an equal share of the deer. A 90/9/1 split is unfair. It is unfair even if none of them realize it is unfair; if you had a whole society where women got 10% the wages of men, it wouldn’t suddenly become massively unfair at the first instant someone pointed it out. It is just that an equal split is the state of affairs we describe by the word “fair” and to describe 90/9/1 you’d need some other word like “foograh”.
In the same sense, something can be just as fair, or unfair, without there being any God, nor yet somehow “the laws of physics”, to state with controlling and final authority that it is fair.
Actually, even God’s authority can’t make a 90/9/1 split “fair”. A God could enforce the split, but not make it fair.
So who needs an authority to tell us what we should do, either? God couldn’t make murder right—so who needs God to make it wrong?
Thank you for your effort to understand. However, I don’t believe this is in the right direction. I’m afraid I misunderstood or misrepresented my feelings about moral responsibility.
For thoroughness, I’ll try to explain it better here, but I don’t think it’s such a useful clue after all. I hear physical materialists explaining that they still feel value, naturally and spontaneously, outside an objective value framework. I was reporting that I didn’t: for some set of values, the values just seemed to fade away in the absence of an objective value framework. However, I admit that some values remained. The first value to obviously remain was a sense of moral responsibility, and it was that value that kept me faithful to the others. So perhaps it is a so-called ‘terminal value’; in any case, it was the limit where some part of myself said “if this is Truth, then I don’t value Truth”.
The reason I feel value outside of an objective value framework is that I taught myself, over weeks and months, to do so. If a theist had the rug pulled out from under them, morally speaking, then they might well be completely bewildered about how to act and how to think. I am sure this would cause great confusion and pain. The process of moving from a theist world view to a materialistic world view is not some flipped switch; a person has to teach themselves new emotional and procedural reactions to common everyday problems. The way to do this is to start from the truth as best you can approximate it and train yourself to have emotional reactions that are in accordance with the truth. There is no easy way to do this, but I personally find it much easier to have a happy life once I trained myself to feel emotions in relation to facts rather than fictions.
Upvoted for honesty and clarity.
I’m not sure there’s much more to discuss with you on the topic of theism, then; the object-level arguments are irrelevant to whether you believe. (There are plenty of other exciting topics around here, of course.) All I can do is attempt to convince you that atheism really isn’t what it feels like from your perspective.
EDIT: There was another paragraph here before I thought better of it.
Perhaps we could say “needn’t be what it feels like from your perspective”. It clearly is that feeling for some. I wonder to what extent their difficulty is, in fact, an external-tribal-belief shaped hole in their neurological makeup.
Agreed. I should remember I’m not neurotypical, in several ways.
I’m not sure that’s possible. As someone who’s been an atheist for at least 30 years, I’d say atheism does feel like that, unless there’s some other external source of morality to lean on.
From the back and forth on this thread, I’m now wondering if there’s a major divide between those who mostly care deeply without needing a reason to care, and those who mostly don’t.
I’d thought of that myself a few days ago. It seems like something that we’d experience selection bias against encountering here.
I would expect to see nihilist atheists overrepresented here—one of the principles of rationality is believing even when your emotions oppose it.
I’m not surprised to encounter people here who find nihilism comfortable, or at least tolerable, for that reason. People who find it disabling—who can’t care without believing that there’s an external reason to care—not so much.
I don’t feel that way at all, personally—I’m very happy to value what I value without any kind of cosmic backing.
That’s a rather poor interpretation. I pointed out from my own experience that nihilism is not a necessary consequence of leaving religion. I swear to you that when I was religious I agonized over my fear of nihilism, that I loved Dostoyevsky and dreaded Nietzsche, that I poured out my soul in chapels and confessionals time and time again. I had a fierce conscience then, and I still have one now. I feel the same emotional and moral passions as before; I just recognize them as a part of me rather than a message from the heart of the cosmos— I don’t need permission from the universe to care about others!
I don’t deny that others have adopted positions of moral nihilism when leaving a faith; I know several of them from my philosophy classes. But this is not necessary, and not rational; therefore it is not a good instrumental excuse to maintain theism.
Now, I cannot tell you what you actually feel; but consider two possibilities in addition to your own:
What you experience may be an expectation of your values vanishing rather than an actual attenuation of them. This expectation can be mistaken!
A temporary change in mood can also affect the strength of values, and I did go through a few months of mild depression when I apostasized. But it passed, and I have since felt even better than I had before it.
This might turn out to be vacuous, but it seems useful to me. Here goes nothing:
Do you have a favorite color? Or a favorite number, or word, or shirt, or other arbitrary thing? (Not something that’s a favorite because it reminds you of something else, or something that you like because it’s useful; something that you like just because you like it.)
Assuming you do, what objective value does it have over other similar things? None, right? Saying that purple is a better color than orange, or three is a better number than five (to use my own favorites) simply doesn’t make sense.
But, assuming you answered ‘yes’ to the first question, you still like the thing, otherwise it wouldn’t be a favorite. It makes sense to describe such things as fun or beautiful, and to use the word ‘happiness’ to describe the emotion they evoke. And you can have favorites among any type of things, including moral systems. Rationality doesn’t mean giving those up—they’re not irrational, they’re arational. (It does mean being careful to make sure they don’t conflict with each other or with reality, though—thinking that purple is somehow ‘really’ better than orange would be irrational.)
Reminds me of Wittgenstein’s “Ethics and aesthetics are one and the same”. Not literally true I don’t think, but I found it enlightening all the same.
You are not really entitled to your own stated values. You can’t just assert that beauty is meaningless to you and through this act make it so. If beauty is important to you, being absolutely convinced that it’s not won’t make it unimportant. You are simply wrong and confused about your values, at which point getting a better conscious understanding of what is “morality” becomes even more important than if you were naive and relied on natural intuition alone.
I’m not sure to what extent terminal values can be chosen or not, but it seems to me that (the following slightly different than what you were describing) if you become absolutely convinced that your values aren’t important, then it would be difficult to continue thinking your values are important. Maybe the fact that I can’t be convinced of the unimportance of my values explains why I can’t really be convinced there’s no Framework of Objective Value, since my brain keeps outputting that this would make my values unimportant. But maybe, by the end of this thread, my brain will stop outputting that. I’m willing to do the necessary mental work.
By the way, Furcas seemed to understand the negation of value I’m experiencing via an analogy of solipsism.
One last time, importance ≠ universality.
If we had been Babyeaters, we would think that eating babies is the right-B thing to do. This doesn’t in any way imply we should be enthusiastic or even blasé about baby-eating, because we value the right thing, not the right-B thing that expresses the Babyeaters’ morality!
I understand that you can’t imagine a value being important without it being completely objective and universal. But you can start by admitting that the concept of important-to-you value is at least distinct from the concept of an objective or universal value!
Imagine first that there is an objective value that you just don’t care about. Easy, right? Next, imagine that there is something you care about, deeply, that just isn’t an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.
This is the best (very) short guide to naturalistic metaethics I’ve read so far.
This is very helpful. The only thing I would clarify is that the lesson I need to learn is that importance ≠ objectivity. (I’m not at all concerned about universality.)
I’m not sure. With a squirrel in the universe, I would have thought the universe was better with more nuts than with less. I can understand there being no objective value, but I can’t understand objective value being causally or meaningfully distinct from the subjective value.
Hm. I have no problem with ‘permission’. I just find that I don’t care about caring about it. If it’s not actually horrible, then let the universe fill up with it! My impression is that intellectually (not viscerally, of course) I fail to weight my subjective view of things. If some mathematical proof really convinced me that something I thought subjectively horrible was objectively good, I think I would start liking it.
(The only issue, that I mentioned before, is that a sense of moral responsibility would prevent me from being convinced by a mathematical proof to suddenly acquire beliefs that would cause me to do something I’ve already learned is immoral. I would have to consider the probability that I’m insane or hallucinating the proof, etc.)
I can barely imagine value nihilism, but not a value nihilism from which God or physics could possibly rescue you. If you think that your value nihilism has something to do with God, then I’m going to rate it as much more likely that you suffer from basic confusion, than that the absence of God is actually responsible for your values collapse whereas a real God could have saved it and let you live happily ever after just by ordering you to have fun.
I think the basic problem is that evolution re-used some of the same machinery to implement both beliefs and values. Our beliefs reflect features of the external world, so people expect to find similar external features corresponding to their values.
Actually searching for these features will fail to produce any results, which would be very dismaying as long as the beliefs-values confusion remains.
The God meme acts as a curiosity stopper; it says that these external features really do exist, but you’re too stupid to understand all the details, so don’t bother thinking about it.
Exactly! I think this is exactly the sort of ‘solution’ that I hoped physical materialism could propose.
I’d have to think about whether the source of the problem is what Peter has guessed (whether it is this particular confusion), but from the inside it feels exactly like a hard-wiring problem (given by evolution) that I can’t reconcile.
As I wrote above in this thread, I agree that there’s not any clear way that the existence of God could solve this problem.
[Note: I took out several big chunks about how religions address this problem, but I understand people here don’t want to hear about religion discussed in a positive light. But the relevant bit:]
Peter de Blanc wrote, “The God meme acts as a curiosity stopper; it says that these external features really do exist, but you’re too stupid to understand all the details, so don’t bother thinking about it.”
And this seems exactly right. Without the God meme telling me that it all works out somehow—for example, somehow the subjective/objective value problem works out—I’m left in a confused state.
What if the existence of a Framework of Objective Value wasn’t the only thing you were wrong about? What if you are also wrong in your belief that you need this Framework in order to care about the things that used to be meaningful to you? What if this was simply one of the many things that your old religious beliefs had fooled you about?
It is possible to be mistaken about one’s self, just as we can be mistaken about the rest of reality. I know it feels like you need a Framework, but this feeling is merely evidence, not mathematical proof. And considering the number of ex-believers who used to feel as you do and who now live a meaningful life, you have to admit that your feeling isn’t very strong evidence. Ask yourself how you know what you think you know.
I would be quite happy to be wrong. I can’t think of a single reason not to wish to be wrong. (Not even the sting of a drop in status; in my mind, it would improve my status to have presented a problem that actually has a solution instead of one that just leads in circles.)
Through the experiment of assimilating the ideas of Less Wrong over the course of a year, I found my worldview changing and becoming more and more bereft of meaning as it seemed more and more logical that value is subjective. This means that no state of the universe is objectively any “better” than any other state, there’s no coherent notion of progress, etc. And I can actually feel that pretty well; right on the edge of my consciousness, an awareness that nothing matters, I’m just a program that’s running in some physical reality. I feel no loyalty or identity with this program, it just exists. And I find it hard to believe I ought to go there; some intuition tells me this isn’t what I’m supposed to be learning. I’ve lost my way somehow.
This reminds me of the labyrinth metaphor. Where the hell am I? Why am I the only one to find this particular dead end? Should I really listen to my friends on the walkie-talkie saying, ‘keep going, it’s not really a deep bottomless chasm!’, or shouldn’t I try and describe it better to make certain you know where I’m at?
When I first gave up the idea of objective morality I also plummeted into a weird sort of ambivalence. It lasted for a few years. Finally, I confronted the question of why I even bothered continuing to exist. I decided I wanted to live. I then decided I needed an arbitrary guiding principle in life to help me maintain that desire. I decided I wanted to live as interesting a life as possible. That was my only goal in life, and it was only there to keep me wanting to live.
I pursued that goal for a few years, rather half-heartedly. It was enough to keep me going, but not much more. Then, one day, rather suddenly, I fell completely in love. Really, blubberingly, stupidly in love. I was completely consumed and couldn’t have cared less if it was objectively meaningless. A week later, I found out the girl was also in love with me, and I promptly stopped loving her.
Meditating on the whole thing afterwards, I realized I hadn’t been in love, but had experienced some other, probably quite disgusting emotion. I had been pulled up from the abyss of subjectivity by the worst kind of garbage! It felt like the punchline of a zen koan. I realized that wallowing in ambivalence was just as worthless as embracing the stupidest purpose, and became ambivalent to the lack of objectivity itself.
After that I began rediscovering and embracing my natural desires. A few years of that and I finally settled down into what I consider a healthy person. But, to this day, I still occasionally feel the fuzzy awareness at the edge of my consciousness that everything is meaningless. And, when I do, I just don’t care. So what if everything I love is objectively worthless? Meaninglessness itself is meaningless, so screw it!
I realize this whole story is probably bereft of any sort of rational take away, but I thought I’d share anyway, in the hopes of at least giving you some hope. Failing that, it was at least nice to take a break from rationality to write about something totally irrational.
You are not.
I cannot remember a time I genuinely believed in God, though I was raised Baptist by a fundamentalist believer. I don’t know why I didn’t succumb. When I was a teen, I didn’t really bother doing anything I didn’t want to do, except to avoid immediate punishment. All of my goals were basically just fantasies. Sometime during the 90s I applied Pascal’s Wager to objective morality and began behaving as though it existed, since it seemed clear that a more intelligent goal-seeking being than I might well discover some objective morality which I couldn’t understand the argument for, and that working toward an objective morality (which is the same thing as a universal top goal, since “morality” consists of statements about goals) requires that I attempt to maximize my ability to do so when it’s explained what it is. This is basically the same role you’re using God for, if I understand correctly.
Unfortunately, as my hope for a positive singularity dwindles, so does my level of caring about, basically, everything not immediately satisfying to me. I remind myself that the Wager still holds even with a very small chance, but very small chance persistently feels like zero chance.
Anyway, I don’t have a solution, but I wanted to point out that this problem is felt by at least some other people as well, and doesn’t necessarily have anything to do with God, per se. I suppose some might suggest that I’ve merely substituted a sufficiently intelligent goal-seeker for “God”...
If you’re still concerned about that after all the discussion about it, it might be a good idea to get some more one-on-one help. Off the top of my head I’d suggest finding a reputable Buddhist monk/master/whatever to work with: I know that meditation sometimes evokes the kind of problem you’re afraid of encountering, so they should have some way of dealing with that.
This is wrong. Some states are really objectively better than other states. The trick is, “better” originates from your own preference, not God-given decree. You care about getting the world to be objectively better, while a pebble-sorter cares about getting the world to be objectively more prime.
Rather, it is using a different definition of ‘better’ (or, you could argue, ‘objectively’) than you are. Byrnema’s usage may not be sophisticated or the most useful way to carve reality, but it is a popular usage and her intended meaning is clear.
That is the framework I use. I agree that byrnema could benefit from an improved understanding of this kind of philosophy. Nevertheless, byrnema’s statement is a straightforward use of language that is easy to understand, trivially true and entirely unhelpful.
It doesn’t work for most any reasonable definition, because you’d need “better” to mean “absolute indifference”, which doesn’t rhyme.
No it wouldn’t. You are confused.
I’m pretty sure I can’t be confused about the real-world content of this discussion, but we are having trouble communicating. As a way out, you could suggest reasonable interpretations of “better” and “objectively” that make byrnema’s “no state of the universe is objectively any “better” than any other state” into a correct statement.
You appear to have a solid understanding of the deep philosophy. Your basic claims in the two ancestors are wrong and trivially so at about the level of language parsing and logic.
Far from being required, “absolute indifference” doesn’t even work as a meaning in the context: “No state of the universe is objectively any “absolute indifference” than any other state”. If you fixed the grammar to make the meaning fit, it would make the statement wrong.
I’m not comfortable making any precise descriptions for a popular philosophy that I think is stupid (my way of thinking about the underlying concepts more or less matches yours). But it would be something along the lines of defining “objectively better” to mean “scores high in a description or implementation of betterness outside of the universe, not dependent on me, etc”. Then, if there is in fact no such ‘objectively better’ thingumy (God, silly half baked philosophy of universal morality, etc) people would say stuff like byrnema did and it wouldn’t be wrong, just useless.
“According to a position of absolute indifference, no state of the universe is preferable to any other.”
That “stupid” for me got identified as “incorrect”, not as a way to correctly interpret byrnema’s phrase to make it right (though it is a reasonable guess about the way the phrase came to be).
And this I think is why people find moral non-cognitivism so easy to misunderstand—people always try to parse it to understand which variety of moral realism you subscribe to.
“There is no final true moral standard.”
“Ah, so you’re saying that all acts are equally good according to the final true moral standard?”
“No, I’m saying that there is no final true moral standard.”
“Oh, so all moral standards are equally good according to the final true moral standard?”
“No, I’m saying that there is no final true moral standard.”
“Oh, so all moral judgements are equally good according to the final true moral standard?”
*whimper*
I like to use the word “transcendent”, as in “no transcendent morality”, where the word “transcendent” is chosen to sound very impressive and important but not actually mean anything.
However, you can still be a moral cognitivist and believe that moral statements have truth-values, they just won’t be transcendent truth-values. What is a “transcendent truth-value”? Shrugs.
It’s not like “transcendental morality” is a way the universe could have been but wasn’t.
Yes, I think that transcendent is a great adjective for this concept of morality I’m attached to. I like it because it makes it clear why I would label the attachment ‘theistic’ even though I have no attachment that I’m aware of to other necessarily ‘religious’ beliefs.
Since I do ‘believe in’ physical materialism, I expect science eventually either to explain how morality can transcend the subjective/objective chasm, or, if it does not, to identify whether this fact about the universe is consistent or inconsistent with my particular programming. (This latter component specifically is the part I think you haven’t covered; I can only say this much now because the discussion has helped develop my thoughts quite a bit already.)
Er, did you actually read the Metaethics sequence?
That is a description you can get to using your definition of ‘better’ (approximately, depending on how you prefer to represent differences between human preferences). It still completely does away with the meaning byrnema conveyed.
That was clear. But no matter how superior our philosophy, we are still attacking straw men if we parse common language with our own idiosyncratic variant. We must choose between translating from their language, forcing them to use ours, ignoring them, or, well, being wrong a lot.
This thread between you and Vladimir_Nesov is fascinating, because you’re talking about exactly what I don’t understand. Allusions to my worldview being unsophisticated, not useful, stupid and incorrect fill me with the excitement of anticipation that there is a high probability of there being something to learn here.
Some comments:
(1) It appears that the whole issue of what I meant when I wrote, “no state of the universe is objectively any “better” than any other state,” has been resolved. We agree that it is trivially true, useless and on some level insane to be concerned with it.
(2) Vladimir_Nesov wrote, “You care about getting the world to be objectively better [in the way you define better], while a pebble-sorter cares about getting the world to be objectively more prime [the way he defines better].”
This is a good point to launch from. Suppose it is true that there is no objective ‘better’, so that the universe is no more improved by me changing it in ways that I think are better or by the pebble-sorter making things more prime, than either of us doing nothing or not existing. Then I find I don’t place any value on whether we are subjectively improving the universe in our different ways, doing nothing or not existing. All of these things would be equivalently useless.
For what it’s worth, I understand that this value I’m lacking—to persist in caring about my subjective values even if they’re not objectively substantiated—is a subjective value. While I seem to lack it, you guys could very reasonably have this value in great measure.
So. Is this a value I can work on developing? Or is there some logical fallacy I’m making that would make this whole dilemma moot once I understood it?
This is connected to the Rebelling Within Nature post: have you considered that your criterion “you shouldn’t care about a value if it isn’t objective”, is another value that is particular to you as a human? A simple Paperclip Maximizer wouldn’t have the criterion “stop caring about paperclips if it turns out the goodness of paperclips isn’t written into the fabric of the universe”. (Nor would it have the criterion of respecting other agents’ moralities, another thing which you value.)
Have a look at Eliezer’s posts on morality and perhaps ‘subjectively objective’. (But also consider Adelene’s suggestion on looking into whether your dissociation is the result of a neurological or psychological state that you could benefit from fixing.)
Meanwhile I think you do, in fact, have this subjective measure. Not because you must for any philosophical reason, but because your behaviour and descriptions indicate that you do subjectively care about your subjective values, even though you don’t think you do. To put it another way, your subjective values are objective facts about the state of the universe and your part thereof, and I believe you are wrong about them.
Is there a sense in which you did not just say “The trick is to pretend that your subjective preference is really a statement about objective values”? If by “objectively better” you don’t mean “better according to a metric that doesn’t depend on subjective preferences”, then I think you may be talking past the problem.
By “objectively better” I mean that given an ordering called “better”, it is an objective fact that one state is “better” than another state. The ordering “better” is constructed from your own decision-making algorithm, you could say from subjective preference. This ordering however is not a matter of personal choice: you can’t decide what it is, you can only decide given what it already happens to be. It is only “subjective” in the sense that different agents have different preferences.
I can’t quite follow that description. “More prime” really is an objective description of a yardstick against which you can measure the world. So is “preferred by me”. But to use “objectively better” as a synonym for “preferred by byrnema” seems to me to invite confusion.
Yes it does, and I took your position recently when this terminological question came up, with Eliezer insisting on the same usage that I applied above and most everyone else objecting to that as confusing (link to the thread, H/T to Wei Dai).
The reason to take up this terminology is to answer the specific confusion byrnema is having: that no state of the world is objectively better than any other, with the implied conclusion that there is nothing to care about.
“Preferred by byrnema” is bad terminology because of another confusion: she seems to assume that she knows what she really prefers. So, I could say “objectively more preferred by byrnema”, but that can be misinterpreted as “objectively more the way byrnema thinks it should be”, which is circular as the foundation for byrnema’s own decision-making. It is just as with a calculator Y that, when asked “2+2=?”, thinks of an answer in the form “What will calculator Y answer?”, and then prints out “42”, which thus turns out to be a correct answer to “What will calculator Y answer?”. Through the intermediary of the concept of “better”, it’s easier to distinguish what byrnema really prefers (but can’t know in detail) from what she thinks she prefers, or knows of what she really prefers (or of what is “better”).
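The circularity in the calculator analogy can be sketched in code. This is a hypothetical illustration; the function names and the value 42 are mine, not part of the original comment:

```python
# A normal calculator grounds its answer in the question it was asked.
# The circular "calculator Y" instead answers the question "what will I
# answer?", which ANY output satisfies, so its answer carries no
# information about 2+2.

def normal_calculator(expression: str) -> int:
    # Evaluates the actual question, e.g. "2+2" -> 4.
    return eval(expression)

def circular_calculator(expression: str) -> int:
    # Ignores the question and answers "what will this calculator answer?".
    # Whatever it outputs is, trivially, a correct answer to THAT question.
    answer = 42  # any value at all would be "self-consistent" here
    return answer

print(normal_calculator("2+2"))    # correct about the world
print(circular_calculator("2+2"))  # "correct" only about itself
```

The point of the analogy is that a self-referential standard validates any answer, which is why byrnema's preferences need a foundation other than her own current beliefs about them.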
This comment probably does a better job at explaining the distinction, but it took a bigger set-up (and I’m not saying anything not already contained in Eliezer’s metaethics sequence).
See also:
Math is Subjunctively Objective
Where Recursive Justification Hits Bottom
No License To Be Human (some discussion of right vs. human-right terminology)
Metaethics sequence
It was in the post for asking Eliezer Questions for his video interview.
It is one thing to use an idiosyncratic terminology yourself but quite another to interpret other people’s more standard usages according to your definitions and respond to them as such. The latter is attacking a Straw Man and the fallaciousness of the argument is compounded with the pretentiousness.
Nope, can’t find my comments on this topic there.
I assure you that I’m speaking in good faith. If you see a way in which I’m talking past byrnema, help me to understand.
Is this the thread you’re referring to?
It is, thank you.
Ahh. I was thinking of the less wrong singularity article.
I don’t doubt that. I probably should consider my words more carefully so I don’t cause offence except when I mean to, both because it would be better and because it is practical.
Assume I didn’t use the word ‘pretentious’ and instead stated that “when people go about saying people are wrong I expect them to have a higher standard of correctness while doing so than I otherwise would.” If you substituted “your thinking is insane” for “this is wrong” I probably would have upvoted.
I suspect it may be even more confusing if you pressed Vladimir into territory where his preferences did not match those of byrnema. I would then expect him to make the claim “You care about getting the world to be objectively booglewhatsit, I care about getting the world to be objectively better, while a pebble-sorter cares about getting the world to be objectively more prime”. But that line between ‘sharing’ better around and inventing words like booglewhatsit is apt to be applied inconsistently, so I cannot be sure of Vladimir’s take.
See also Doublethink
A free-floating belief system doesn’t have to be doublethink. In fact, the whole point of it would be to fill in gaps when you would like a coherent, consistent worldview and one isn’t given to you. I think that continuing to care about subjective value while knowing that there is no objective value requires a disconcerting level of doublethink.