What if the existence of a Framework of Objective Value wasn’t the only thing you were wrong about? What if you are also wrong in your belief that you need this Framework in order to care about the things that used to be meaningful to you? What if this was simply one of the many things that your old religious beliefs had fooled you about?
It is possible to be mistaken about one’s self, just as we can be mistaken about the rest of reality. I know it feels like you need a Framework, but this feeling is merely evidence, not mathematical proof. And considering the number of ex-believers who used to feel as you do and who now live a meaningful life, you have to admit that your feeling isn’t very strong evidence. Ask yourself how you know what you think you know.
I would be quite happy to be wrong. I can’t think of a single reason not to wish to be wrong. (Not even the sting of a drop in status; in my mind, it would improve my status to have presented a problem that actually has a solution instead of one that just leads in circles.)
Ask yourself how you know what you think you know.
Through the experiment of assimilating the ideas of Less Wrong over the course of a year, I found my worldview changing and becoming more and more bereft of meaning as it seemed more and more logical that value is subjective. This means that no state of the universe is objectively any “better” than any other state, there’s no coherent notion of progress, etc. And I can actually feel that pretty well; right on the edge of my consciousness, an awareness that nothing matters, that I’m just a program running in some physical reality. I feel no loyalty to or identity with this program; it just exists. And I find it hard to believe I ought to go there; some intuition tells me this isn’t what I’m supposed to be learning. I’ve lost my way somehow.
This reminds me of the labyrinth metaphor. Where the hell am I? Why am I the only one to find this particular dead end? Should I really listen to my friends on the walkie-talkie saying, ‘keep going, it’s not really a deep bottomless chasm!’, or shouldn’t I try and describe it better to make certain you know where I’m at?
When I first gave up the idea of objective morality I also plummeted into a weird sort of ambivalence. It lasted for a few years. Finally, I confronted the question of why I even bothered continuing to exist. I decided I wanted to live. I then decided I needed an arbitrary guiding principle in life to help me maintain that desire. I decided I wanted to live as interesting a life as possible. That was my only goal in life, and it was only there to keep me wanting to live.
I pursued that goal for a few years, rather half-heartedly. It was enough to keep me going, but not much more. Then, one day, rather suddenly, I fell completely in love. Really, blubberingly, stupidly in love. I was completely consumed and couldn’t have cared less if it was objectively meaningless. A week later, I found out the girl was also in love with me, and I promptly stopped loving her.
Meditating on the whole thing afterwards, I realized I hadn’t been in love, but had experienced some other, probably quite disgusting emotion. I had been pulled up from the abyss of subjectivity by the worst kind of garbage! It felt like the punchline of a zen koan. I realized that wallowing in ambivalence was just as worthless as embracing the stupidest purpose, and became ambivalent to the lack of objectivity itself.
After that I began rediscovering and embracing my natural desires. A few years of that and I finally settled down into what I consider a healthy person. But, to this day, I still occasionally feel the fuzzy awareness at the edge of my consciousness that everything is meaningless. And, when I do, I just don’t care. So what if everything I love is objectively worthless? Meaninglessness itself is meaningless, so screw it!
I realize this whole story is probably bereft of any sort of rational takeaway, but I thought I’d share anyway, in the hopes of at least giving you some hope. Failing that, it was at least nice to take a break from rationality to write about something totally irrational.
Why am I the only one to find this particular dead end?
You are not.
I cannot remember a time I genuinely believed in God, though I was raised Baptist by a fundamentalist believer. I don’t know why I didn’t succumb. When I was a teen, I didn’t really bother doing anything I didn’t want to do, except to avoid immediate punishment. All of my goals were basically just fantasies. Sometime during the 90s I applied Pascal’s Wager to objective morality and began behaving as though it existed: it seemed clear that a goal-seeking being more intelligent than I am might well discover some objective morality whose argument I couldn’t follow, and working toward an objective morality (which is the same thing as a universal top goal, since “morality” consists of statements about goals) requires that I maximize my ability to act on it once it is explained to me. This is basically the same role you’re using God for, if I understand correctly.
Unfortunately, as my hope for a positive singularity dwindles, so does my level of caring about, basically, everything not immediately satisfying to me. I remind myself that the Wager still holds even with a very small chance, but a very small chance persistently feels like zero chance.
Anyway, I don’t have a solution, but I wanted to point out that this problem is felt by at least some other people as well, and doesn’t necessarily have anything to do with God, per se. I suppose some might suggest that I’ve merely substituted a sufficiently intelligent goal-seeker for “God”...
If you’re still concerned about that after all the discussion about it, it might be a good idea to get some more one-on-one help. Off the top of my head I’d suggest finding a reputable Buddhist monk/master/whatever to work with: I know that meditation sometimes evokes the kind of problem you’re afraid of encountering, so they should have some way of dealing with that.
This means that no state of the universe is objectively any “better” than any other state, there’s no coherent notion of progress, etc.
This is wrong. Some states are really objectively better than other states. The trick is, “better” originates from your own preference, not God-given decree. You care about getting the world to be objectively better, while a pebble-sorter cares about getting the world to be objectively more prime.
Rather, it is using a different definition of ‘better’ (or, you could argue, ‘objectively’) than you are. Byrnema’s usage may not be sophisticated or the most useful way to carve reality but it is a popular usage and her intended meaning is clear.
Some states are really objectively better than other states. The trick is, “better” originates from your own preference, not God-given decree.
That is the framework I use. I agree that byrnema could benefit from an improved understanding of this kind of philosophy. Nevertheless, byrnema’s statement is a straightforward use of language that is easy to understand, trivially true and entirely unhelpful.
It doesn’t work for most of any reasonable definition, because you’d need “better” to mean “absolute indifference”, which doesn’t rhyme.
No it wouldn’t. You are confused.
I’m pretty sure I can’t be confused about the real-world content of this discussion, but we are having trouble communicating. As a way out, you could suggest reasonable interpretations of “better” and “objectively” that make byrnema’s “no state of the universe is objectively any “better” than any other state” into a correct statement.
I’m pretty sure I can’t be confused about the real-world content of this discussion
You appear to have a solid understanding of the deep philosophy. Your basic claims in the two ancestors are wrong and trivially so at about the level of language parsing and logic.
It doesn’t work for most of any reasonable definition, because you’d need “better” to mean “absolute indifference”
Far from being required, “absolute indifference” doesn’t even work as a meaning in the context: “No state of the universe is objectively any “absolute indifference” than any other state”. If you fixed the grammar to make the meaning fit, it would make the statement wrong.
As a way out, you could suggest reasonable interpretations of “better” and “objectively” that make byrnema’s “no state of the universe is objectively any “better” than any other state” into a correct statement.
I’m not comfortable making any precise descriptions for a popular philosophy that I think is stupid (my way of thinking about the underlying concepts more or less matches yours). But it would be something along the lines of defining “objectively better” to mean “scores high in a description or implementation of betterness outside of the universe, not dependent on me, etc”. Then, if there is in fact no such ‘objectively better’ thingumy (God, silly half-baked philosophy of universal morality, etc) people would say stuff like byrnema did and it wouldn’t be wrong, just useless.
“No state of the universe is objectively any “absolute indifference” than any other state”.
“According to a position of absolute indifference, no state of the universe is preferable to any other.”
I’m not comfortable making any precise descriptions for a popular philosophy that I think is stupid
That “stupid” for me got identified with “incorrect”; it is not a way to correctly interpret byrnema’s phrase so as to make it right (though it is a reasonable guess about the way the phrase came to be).
“According to a position of absolute indifference, no state of the universe is preferable to any other.”
And this I think is why people find moral non-cognitivism so easy to misunderstand—people always try to parse it to understand which variety of moral realism you subscribe to.
“There is no final true moral standard.”
“Ah, so you’re saying that all acts are equally good according to the final true moral standard?”
“No, I’m saying that there is no final true moral standard.”
“Oh, so all moral standards are equally good according to the final true moral standard?”
“No, I’m saying that there is no final true moral standard.”
“Oh, so all moral judgements are equally good according to the final true moral standard?”
*whimper*
I like to use the word “transcendent”, as in “no transcendent morality”, where the word “transcendent” is chosen to sound very impressive and important but not actually mean anything.
However, you can still be a moral cognitivist and believe that moral statements have truth-values, they just won’t be transcendent truth-values. What is a “transcendent truth-value”? Shrugs.
It’s not like “transcendental morality” is a way the universe could have been but wasn’t.
Yes, I think that transcendent is a great adjective for this concept of morality I’m attached to. I like it because it makes it clear why I would label the attachment ‘theistic’ even though I have no attachment that I’m aware of to other necessarily ‘religious’ beliefs.
Since I do ‘believe in’ physical materialism, I expect science to eventually explain how morality can transcend the subjective/objective chasm in some way, or, if it does not, to identify whether this fact about the universe is consistent or inconsistent with my particular programming. (This latter component specifically is the part I was thinking you haven’t covered; I can only say this much now because the discussion has helped develop my thoughts quite a bit already.)
Er, did you actually read the Metaethics sequence?
“According to a position of absolute indifference, no state of the universe is preferable to any other.”
That is a description that you can get to using your definition of ‘better’ (approximately, depending on how you prefer to represent differences between human preferences). It still completely does away with the meaning Byrnema conveyed.
That “stupid” for me got identified with “incorrect”; it is not a way to correctly interpret byrnema’s phrase so as to make it right (though it is a reasonable guess about the way the phrase came to be).
That was clear. But no matter how superior our philosophy, we are still attacking straw men if we parse common language with our own idiosyncratic variant. We must choose between translating from their language, forcing them to use ours, ignoring them or, well, being wrong a lot.
This thread between you and Vladimir_Nesov is fascinating, because you’re talking about exactly what I don’t understand. Allusions to my worldview being unsophisticated, not useful, stupid and incorrect fill me with the excitement of anticipation that there is a high probability of there being something to learn here.
Some comments:
(1) It appears that the whole issue of what I meant when I wrote, “no state of the universe is objectively any “better” than any other state,” has been resolved. We agree that it is trivially true, useless and on some level insane to be concerned with it.
(2) Vladimir_Nesov wrote, “You care about getting the world to be objectively better [in the way you define better], while a pebble-sorter cares about getting the world to be objectively more prime [the way he defines better].”
This is a good point to launch from. Suppose it is true that there is no objective ‘better’, so that the universe is no more improved by me changing it in ways that I think are better or by the pebble-sorter making things more prime, than either of us doing nothing or not existing. Then I find I don’t place any value on whether we are subjectively improving the universe in our different ways, doing nothing or not existing. All of these things would be equivalently useless.
For what it’s worth, I understand that this value I’m lacking—to persist in caring about my subjective values even if they’re not objectively substantiated—is a subjective value. While I seem to lack it, you guys could very reasonably have this value in great measure.
So. Is this a value I can work on developing? Or is there some logical fallacy I’m making that would make this whole dilemma moot once I understood it?
This is connected to the Rebelling Within Nature post: have you considered that your criterion “you shouldn’t care about a value if it isn’t objective”, is another value that is particular to you as a human? A simple Paperclip Maximizer wouldn’t have the criterion “stop caring about paperclips if it turns out the goodness of paperclips isn’t written into the fabric of the universe”. (Nor would it have the criterion of respecting other agents’ moralities, another thing which you value.)
This is a good point to launch from. Suppose it is true that there is no objective ‘better’, so that the universe is no more improved by me changing it in ways that I think are better or by the pebble-sorter making things more prime, than either of us doing nothing or not existing. Then I find I don’t place any value on whether we are subjectively improving the universe in our different ways, doing nothing or not existing. All of these things would be equivalently useless.
Have a look at Eliezer’s posts on morality and perhaps ‘subjectively objective’. (But also consider Adelene’s suggestion on looking into whether your dissociation is the result of a neurological or psychological state that you could benefit from fixing.)
For what it’s worth, I understand that this value I’m lacking—to persist in caring about my subjective values even if they’re not objectively substantiated—is a subjective value.
Meanwhile I think you do, in fact, have this subjective measure. Not because you must for any philosophical reason but because your behaviour and descriptions indicate that you do subjectively care about your subjective values. Even though you don’t think you do. To put it another way, your subjective values are objective facts about the state of the universe and your part thereof, and I believe you are wrong about them.
Some states are really objectively better than other states. The trick is, “better” originates from your own preference
Is there a sense in which you did not just say “The trick is to pretend that your subjective preference is really a statement about objective values”? If by “objectively better” you don’t mean “better according to a metric that doesn’t depend on subjective preferences”, then I think you may be talking past the problem.
By “objectively better” I mean that given an ordering called “better”, it is an objective fact that one state is “better” than another state. The ordering “better” is constructed from your own decision-making algorithm, you could say from subjective preference. This ordering however is not a matter of personal choice: you can’t decide what it is, you only decide given what it already happens to be. It is only “subjective” in the sense that different agents have different preferences.
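A minimal sketch of the point, assuming two hypothetical agents with made-up scoring rules (none of this is from the thread): given each agent’s fixed ordering over world-states, whether one state ranks above another is an objective fact, even though the orderings themselves differ between agents.

```python
# Toy illustration (hypothetical agents and scoring rules, purely for example):
# each agent's "better" is a fixed ordering over world-states. Given that
# ordering, whether one state outranks another is an objective fact; the agent
# does not choose the ordering, it only decides given what the ordering is.

def human_better(state):
    # stand-in for a human-like preference: more flourishing ranks higher
    return state["flourishing"]

def pebblesorter_better(state):
    # the pebble-sorter's yardstick: a prime-sized heap ranks higher
    n = state["heap_size"]
    return 1 if n > 1 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)) else 0

state_a = {"flourishing": 7, "heap_size": 8}
state_b = {"flourishing": 3, "heap_size": 13}

# Both comparisons are objective facts, even though the orderings differ:
print(human_better(state_a) > human_better(state_b))                # True
print(pebblesorter_better(state_a) > pebblesorter_better(state_b))  # False
```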
I can’t quite follow that description. “More prime” really is an objective description of a yardstick against which you can measure the world. So is “preferred by me”. But to use “objectively better” as a synonym for “preferred by byrnema” seems to me to invite confusion.
But to use “objectively better” as a synonym for “preferred by byrnema” seems to me to invite confusion.
Yes it does, and I took your position recently when this terminological question came up, with Eliezer insisting on the same usage that I applied above and most of everyone else objecting to that as confusing (link to the thread—H/T to Wei Dai).
The reason to take up this terminology is to answer the specific confusion byrnema is having: that no state of the world is objectively better than any other, and the implied conclusion that there is therefore nothing to care about.
“Preferred by byrnema” is bad terminology because of another confusion, where she seems to assume that she knows what she really prefers. So, I could say “objectively more preferred by byrnema”, but that can be misinterpreted as “objectively more the way byrnema thinks it should be”, which is circular as the foundation for byrnema’s own decision-making, just as with a calculator Y that, when asked “2+2=?”, thinks of an answer in the form “What will calculator Y answer?”, and then prints out “42”, which thus turns out to be a correct answer to “What will calculator Y answer?”. Through the intermediary of the concept of “better”, it’s easier to distinguish between what byrnema really prefers (but can’t know in detail) and what she thinks she prefers, that is, what she knows of what she really prefers (what is “better”).
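To make the calculator analogy concrete, here is a toy sketch (hypothetical code, nothing from the thread): a calculator that reinterprets every question as “What will I answer?” can output anything and be trivially “correct” about itself, while one that actually evaluates the question is constrained by it.

```python
# Toy version of the calculator analogy (hypothetical, for illustration only).

def calculator_x(question):
    # answers the question actually asked; the output is constrained by arithmetic
    if question == "2+2=?":
        return 2 + 2

def calculator_y(question):
    # reinterprets every question as "What will calculator Y answer?"; whatever it
    # prints is then trivially a "correct" answer to that reinterpreted question,
    # which is the circularity being pointed at
    return 42

print(calculator_x("2+2=?"))  # 4: fixed by the question asked
print(calculator_y("2+2=?"))  # 42: "correct" only about itself, says nothing about 2+2
```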
This comment probably does a better job at explaining the distinction, but it took a bigger set-up (and I’m not saying anything not already contained in Eliezer’s metaethics sequence).
See also:
Math is Subjunctively Objective
Where Recursive Justification Hits Bottom
No License To Be Human (some discussion of right vs. human-right terminology)
Metaethics sequence
Yes it does, and I took your position recently when this terminological question came up, with Eliezer insisting on the same usage that I applied above and most of everyone else objecting to that as confusing (I can’t think of a search term, so no link).
It was in the post for asking Eliezer Questions for his video interview.
The reason to take up this terminology is to answer the specific confusion byrnema is having: that no state of the world is objectively better than any other, and the implied conclusion that there is therefore nothing to care about.
It is one thing to use an idiosyncratic terminology yourself but quite another to interpret other people’s more standard usages according to your definitions and respond to them as such. The latter is attacking a Straw Man and the fallaciousness of the argument is compounded with the pretentiousness.
It was in the post for asking Eliezer Questions for his video interview.
Nope, can’t find my comments on this topic there.
Is this the thread you’re referring to?
It is, thank you.
Ahh. I was thinking of the Less Wrong singularity article.
It is one thing to use an idiosyncratic terminology yourself but quite another to interpret other people’s more standard usages according to your definitions and respond to them as such. The latter is attacking a Straw Man and the fallaciousness of the argument is compounded with the pretentiousness.
I assure you that I’m speaking in good faith. If you see a way in which I’m talking past byrnema, help me to understand.
I don’t doubt that. I probably should consider my words more carefully so I don’t cause offence except when I mean to. Both because it would be better and because it is practical.
Assume I didn’t use the word ‘pretentious’ and instead stated that “when people go about saying people are wrong I expect them to have a higher standard of correctness while doing so than I otherwise would.” If you substituted “your thinking is insane” for “this is wrong” I probably would have upvoted.
But to use “objectively better” as a synonym for “preferred by byrnema” seems to me to invite confusion.
I suspect it may be even more confusing if you pressed Vladimir into territory where his preferences did not match those of byrnema. I would then expect him to make the claim “You care about getting the world to be objectively booglewhatsit, I care about getting the world objectively better, while a pebble-sorter cares about getting the world to be objectively more prime”. But that line between ‘sharing’ better around and inventing words like booglewhatsit is often applied inconsistently, so I cannot be sure of Vladimir’s take.