Because in spite of everything, you still want it.
Maybe I wouldn’t. There have been times in my life when I’ve had to struggle to feel attached to reality, because it didn’t feel objectively real. Now if value isn’t objectively real, I might find myself again feeling indifferent, like one part of myself is carrying on eating and driving to work, perhaps socially moral, perhaps not, while another part of myself is aware that nothing actually matters. I definitely wouldn’t feel integrated.
I don’t want to burden anyone with what might be idiosyncratic sanity issues, but I do mention them because I don’t think they’re actually all that idiosyncratic.
Can you pick apart what you mean by “objectively”? It seems to be a very load-bearing word here.
I thought this was a good question, so I took some time to think about it. I am better at recognizing good definitions than generating them, but here goes:
‘Objective’ and ‘subjective’ are about the relevance of something across contexts.
Suppose that there is some closed system X. The objective value of X is its value outside X. The subjective value of X is its value inside X.
For example, if I go to a party and we play a game with play money, then the play money has no objective value. I might care about the game, and have fun playing it with my friends, but it would be a choice whether or not to attach any subjective value to the money; I think that I wouldn’t, and would be rather equanimous about how much money I had at any moment. If I went home and looked carefully at the money to discover that it was actually a foreign currency, then it turns out that the money had objective value after all.
Regarding my value dilemma, the system X is myself. I attach value to many things in X. Some of this attachment feels like a choice, but I hazard that some of this attachment is not really voluntary. (For example, I have mirror neurons.) I would call these attachments ‘intellectual’ and ‘visceral’ respectively.
Generally, I do not place much value on subjective experience. If something only has value in ‘X’, then I have a tendency to negate that as a motivation. I’m not altruistic; I just don’t feel that subjective experience is very important. Upon reflection, I realize that, with respect to social norms, I actually act rather selfishly when I think I’m pursuing something with objective value.
If there’s no objective value, then at the very least I need to do a lot of goal reorganization: losing my intellectual attachments unless they can be recovered as visceral attachments. At the worst, I might feel increasingly like I’m a meaningless closed system of self-generated values. At this point, though, I doubt I’m capable of assimilating an absence of objective value on all levels—my brain might be too old—and for now I’m just academically interested in how self-validation of value works without feeling like it’s an illusion.
I know this wasn’t your main point, but money doesn’t have objective value, either, by that definition. It only has value in situations where you can trade it for other things. It’s extremely common to encounter such situations, so the limitation is pretty ignorable, but I suspect you’re at least as likely to encounter situations where money isn’t tradeable for goods as you are to encounter situations where your own preferences and values aren’t part of the context.
I used the money analogy because it has a convenient idea of value.
While debating whether to use that analogy, I had already considered it ironic that the US dollar hasn’t had “objective” value since it was disconnected from the value of gold in 1933. Not that gold has objective value either, unless you use it to make a conductor. But at that level, I start losing track of what I mean by ‘value’. Anyway, it is interesting that the value of the US dollar is exactly an example of humans creating value, echoing Alicorn’s comment.
Real money does have objective value relative to the party, since you can buy things on your way home, but no objective value outside contexts where the money can be exchanged for goods.
If you are a closed system X, and something within system X only has objective value inasmuch as something outside X values it, then does the fact that other people care about you and your ability to achieve your goals help? They are outside X, and while their first-order interests probably never match yours perfectly, there is a general human tendency to care about others’ goals qua others’ goals.
then does the fact that other people care about you and your ability to achieve your goals help?
If you mean that I might value myself and my ability to achieve my goals more because I value other people valuing that, then it does not help. My valuation of their caring is just as subjective as any other value I would have.
On the other hand, perhaps you were suggesting that this mutual caring could be a mechanism for creating objective value, which is kind of in line with what I think. For that matter, I think that my own valuation of something, even without the valuation of others, does create objective value—but that’s a FOOM. I’m trying to imagine reality without that.
If you mean that I might value myself and my ability to achieve my goals more because I value other people valuing that, then it does not help. My valuation of their caring is just as subjective as any other value I would have.
That’s not what I mean. I don’t mean that their caring about you/your goals makes things matter because you care if they care. I mean that if you’re a closed system, and you’re looking for a way outside of yourself to find value in your interests, other people are outside you and may value your interests (directly or indirectly). They would carry on doing this, and this would carry on conferring external value to you and your interests, even if you didn’t give a crap or didn’t know anybody else besides you existed—how objective can you get?
On the other hand, perhaps you were suggesting that this mutual caring could be a mechanism for creating objective value
I don’t think it’s necessary—I think even if you were the only person in the universe, you’d matter, assuming you cared about yourself—and I certainly don’t think it has to be really mutual. Some people can be “free riders” or even altruistic, self-abnegating victims of the scheme without the system ceasing to function. So this is a FOOV? So now it looks like we don’t disagree at all—what was I trying to convince you of, again?
So this is a FOOV? So now it looks like we don’t disagree at all—what was I trying to convince you of, again?
I guess I’m really not sure. I’ll have to think about it a while. What will probably happen is that next time I find myself debating with someone asserting there is no Framework of Objective Value, I will ask them about this case: whether minds can create objective value by their valuing. I will also ask them to clarify what they mean by objective value. Truthfully, I’ve kind of forgotten what the issue I raised is about, and it will probably stay that way for a few days or a week.
I’m either not sure what you’re trying to do or why you’re trying to do it. What do you mean by FOOM here? Why do you want to imagine reality without it? How does people caring about each other fall into that category?
Maybe I wouldn’t. There have been times in my life when I’ve had to struggle to feel attached to reality, because it didn’t feel objectively real. Now if value isn’t objectively real, I might find myself again feeling indifferent, like one part of myself is carrying on eating and driving to work, perhaps socially moral, perhaps not, while another part of myself is aware that nothing actually matters. I definitely wouldn’t feel integrated.
Yeah, I think I can relate to that. This edges very close to an affective death spiral, however, so watch the feedback loops.
The way I argued myself out of mine was somewhat arbitrary and I don’t have it written up yet. The basic idea was taking the concepts that I exist and that at least one other thing exists and, generally speaking, existence is preferred over non-existence. So, given that two things exist and can interact and both would rather be here than not be here, it is Good to learn the interactions between the two so they can both continue to exist. This let me back into accepting general sensory data as useful and it has been a slow road out of the deep.
I have no idea if this is relevant to your questions, but since my original response was a little off maybe this is closer?
The way I argued myself out of mine was somewhat arbitrary and I don’t have it written up yet.
This paragraph (showing how you argued yourself out of some kind of nihilism) is completely relevant, thanks. This is exactly what I’m looking for.
The basic idea was taking the concepts that I exist and that at least one other thing exists and, generally speaking, existence is preferred over non-existence.
What do you mean by, “existence is preferred over non-existence”? Does this mean that in the vacuum of nihilism, you found something that you preferred, or that it’s better in some objective sense?
My situation is that if I try to assimilate the hypothesis that there is no objective value (or, rather, I anticipate trying to do so), then immediately I see that all of my preferences are illusions. It’s not actually any better if I exist or don’t exist, or if the child is saved from the tracks or left to die. It’s also not better if I choose to care subjectively about these things (and be human) or just embrace nihilism, if that choice is real. I understand that caring about certain sorts of these things is the product of evolution, but without any objective value, I also have no loyalty to evolution and its goals—what do I care about the values and preferences it instilled in me?
The question is: how has evolution actually designed my brain? In the state ‘nihilism’, does my brain (a) abort intellectual thinking (there’s no objective value to truth anyway) and enter a default mode of material hedonism that acts on preferences and impulses just because they exist and that’s what I’m programmed to do, or (b) cling to its ability to think beyond that level of programming, and develop a separate identity as a thing that knows that nothing matters?
Perhaps I’m wrong, but your decision to care about the preference of existence over non-existence and moving on from there appears to be an example of (a). Or perhaps a component (b) did develop and maintain awareness of nihilism, but obviously that component couldn’t be bothered posting on LW, so I heard a reply from the part of you that is attached to your subjective preferences (and simply exists).
Perhaps I’m wrong, but your decision to care about the preference of existence over non-existence and moving on from there appears to be an example of (a). Or perhaps a component (b) did develop and maintain awareness of nihilism, but obviously that component couldn’t be bothered posting on LW, so I heard a reply from the part of you that is attached to your subjective preferences (and simply exists).
Well, my bit about existence and non-existence stemmed from a struggle with believing that things did or did not exist. I have never considered nihilism to be a relevant proposal: It doesn’t tell me how to act or what to do. It also doesn’t care if I act as if there is an objective value attached to something. So… what is the point in nihilism?
To me, nihilism seems like a trap for other philosophical arguments. If those arguments and moral frameworks lead someone to the logical conclusion of nihilism, then they cannot escape. They are still clinging to whatever led them there, but say they are nihilists. This is the death spiral: believing that nothing matters but acting as if something does.
If I were to actually stop and throw away all objective morality, value, etc., then I would expect a realization that any belief in nihilism would have to go away too. At that point my presuppositions about the world reset and… what? It is this behavior that is similar to my struggles with existence.
The easiest summation of my belief that existence is preferred over non-existence is that existence can be undone and non-existence is permanent. If you want more I can type it up. I don’t know how helpful it will be against nihilism, however.
This edges very close to an affective death spiral,
Agreed. I find that often it isn’t so much that I find the thought process intrinsically pleasurable (affective), but that in thinking about it too much, I over-stimulate the trace of the argument so that after a while I can’t recall the subtleties and can’t locate the support. After about 7 comments back and forth, I feel like a champion for a cause (no objective values RESULTS IN NIHILISM!!) that I can’t relate to anymore. Then I need to step back and not care about it for a while, and maybe the cause will spontaneously generate again, or perhaps I’ll have learned enough weighting in another direction that the cause never takes off again.
Feel free to tell me to mind my own business, but I’m curious. That other part: If you gave it access to resources (time, money, permission), what do you expect that it would do? Is there anything about your life that it would change?
Jack also wrote, “The next question is obviously “are you depressed?” But that also isn’t any of my business so don’t feel obligated to answer.”
I appreciate this sensitivity, and see where it comes from and why it’s justified, but I also find it interesting that interrogating personal states is perceived as invasive, even as this is the topic at hand.
However, I don’t feel like it’s so personal, and I will explain why. My goal here is to understand how the value validation system works outside FOOM. I come from the point of view that I can’t do this very naturally, and most people I know also could not. I try to identify where thought gets stuck, and to find general descriptions of it that aren’t so personal. I think feeling like I have inconsistent pieces (i.e., like I’m going insane) would be a common response to the anticipation of a non-FOOM world.
That other part: If you gave it access to resources (time, money, permission), what do you expect that it would do? Is there anything about your life that it would change?
To answer your question: a while ago, I thought my answer would be a definitive “no, this awareness wouldn’t feel any motivation to change anything.” I had written in my journal that even if there were a child lying on the tracks, this part of myself would just look on analytically. However, I felt guilty about this after a while, and I’ve since repressed the experience of this hypothetical awareness, so that it’s more difficult to recall.
But recalling, it felt like this: it would be “horrible” for the child to die on the tracks. However, what is “horrible” about horrible? There’s nothing actually horrible about it. Without some terminal value behind the value (for example, I don’t think I ever thought a child dying on the tracks was objectively horrible, but that it might be objectively horrible for me not to feel that horrible was horrible at some level of recursion), it seems that the value buck doesn’t get passed, it doesn’t stop, it just disappears.
I appreciate this sensitivity, and see where it comes from and why it’s justified, but I also find it interesting that interrogating personal states is perceived as invasive, even as this is the topic at hand.
Actually, I practically never see it as invasive; I’m just aware that other people sometimes do, and try to act accordingly. I think this is a common mindset, actually: It’s easier to put up a disclaimer that will be ignored 90-99% of the time than it is to deal with someone who’s offended 1-10% of the time, and generally not worth the effort of trying to guess whether any given person will be offended by any given question.
I think feeling like I have inconsistent pieces (i.e., like I’m going insane) would be a common response to the anticipation of a non-FOOM world.
I’m not sure how you came to that conclusion—the other sentences in that paragraph didn’t make much sense to me. (For one thing, one of us doesn’t understand what ‘FOOM’ means. I’m not certain it’s you, though.) I think I know what you’re describing, though, and it doesn’t appear to be a common response to becoming an atheist or embracing rationality (I’d appreciate if others could chime in on this). It also doesn’t necessarily mean you’re going insane—my normal brain-function tends in that direction, and I’ve never seen any disadvantage to it. (This old log of mine might be useful, on the topic of insanity in general. Context available on request; I’m not at the machine that has that day’s logs in it at the moment. Also, disregard the username, it’s ooooold.)
But recalling, it felt like this: it would be “horrible” for the child to die on the tracks. However, what is “horrible” about horrible? There’s nothing actually horrible about it. Without some terminal value behind the value (for example, I don’t think I ever thought a child dying on the tracks was objectively horrible, but that it might be objectively horrible for me not to feel that horrible was horrible at some level of recursion), it seems that the value buck doesn’t get passed, it doesn’t stop, it just disappears.
My Buddhist friends would agree with that. Actually, I pretty much agree with it myself (and I’m not depressed, and I don’t think it’s horrible that I don’t see death as horrible, at any level of recursion). What most people seem to forget, though, is that the absence of a reason to do something isn’t the same as the presence of a reason not to do that thing. People who’ve accepted that there’s no objective value in things still experience emotions, and impulses to do various things including acting compassionately, and generally have no reason not to act on such things. We also experience the same positive feedback from most actions that theists do—note how often ‘fuzzies’ are explicitly talked about here, for example. It does all add back up to normality, basically.
Thank you. So maybe I can look toward Buddhist philosophy to resolve some of my questions. In any case, it’s really reassuring that others can form these beliefs about reality and retain things that I think are important (like sanity and moral responsibility).
Okay, I went back and re-read that bit with the proper concept in place. I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
As to how non-FOOV value systems work, there seems to be a fair amount of variance. As you may’ve inferred, I tend to take a more nihilistic route than most, assigning value to relatively few things, and I depend on impulses to an unusual degree. I’m satisfied with the results of this system: I have a lifestyle that suits my real preferences (resources on hand to satisfy most impulses that arise often enough to be predictable, plus enough freedom and resources to pursue most unpredictable impulses), projects to work on (mostly based on the few things that I do see as intrinsically valuable), and very few problems. It appears that I can pull this off mostly because I’m relatively resistant to existential angst, though. Most value systems that I’ve seen discussed here are more complex, and often very other-oriented. Eliezer is an example of this, with his concept of coherent extrapolated volition. I’ve also seen at least one case of a person latching on to one particular selfish goal and pursuing that goal exclusively.
I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
I’m pretty sure I’ve over-thought this whole thing, and my answer may not have been as natural as it would have been a week ago, but I don’t predict improvement in another week and I would like to do my best to answer.
I would define “mental problems” as either insanity (an inability or unwillingness to give priority to objective experience over subjective experience) or as a failure mode of the brain in which adaptive behavior (with respect to the goals of evolution) does not result from sane thoughts.
I am qualifying these definitions because I imagine two ways in which assimilating a non-FOOV value system might result in mental problems—one of each type.
First, extreme apathy could result. True awareness that no state of the universe is any better than any other state might extinguish all motivation to have any effect upon empirical reality. Even non-theists might imagine that by virtue of ‘caring about goodness’, they are participating in some kind of cosmic fight between good and evil. However, in a non-FOOV value system, there’s absolutely no reason to ‘improve’ things by ‘changing’ them. While apathy might be perfectly sane according to my definition above, it would be very maladaptive from a human-being-in-the-normal-world point of view, and I would find it troubling if sanity is at odds with being a fully functioning human person.
Second, I anticipate that if a person really assimilated that there was no objective value, and really understood that objective reality doesn’t matter outside their subjective experience, they would have much less reason to value objective truth over subjective truth: first, because there can be no value to objective reality outside subjective reality anyway, and second, because they might more easily dismiss their moral obligation to assimilate objective reality into their subjective reality. So that instead of actually saving people who are drowning, they could just pretend the people were not drowning, and find this morally equivalent.
I realize now in writing this that for the second case sanity could be preserved – and FOOV morality recovered – as long as you add to your moral obligations that you must value objective truth. This moral rule was missing from my FOOV system (that is, I wasn’t explicitly aware of it) because objective truth was seen as valued in itself, and moral obligation was seen as being created by objective reality.
Also, a point I forgot to add in my above post: Some (probably the vast majority of) atheists do see death as horrible; they just have definitions of ‘horrible’ that don’t depend on objective value.
Maybe I wouldn’t. There have been times in my life when I’ve had to struggle to feel attached to reality, because it didn’t feel objectively real. Now if value isn’t objectively real, I might find myself again feeling indifferent, like one part of myself is carrying on eating and driving to work, perhaps socially moral, perhaps not, while another part of myself is aware that nothing actually matters. I definitely wouldn’t feel integrated.
I don’t want to burden anyone with what might be idiosyncratic sanity issues, but I do mention them because I don’t think they’re actually all that idiosyncratic.
Can you pick apart what you mean by “objectively”? It seems to be a very load-bearing word here.
I thought this was a good question, so I took some time to think about it. I am better at recognizing good definitions than generating them, but here goes:
‘Objective’ and ‘subjective’ are about the relevance of something across contexts.
Suppose that there is some closed system X. The objective value of X is its value outside X. The subjective value of X is its value inside X.
For example, if I go to a party and we play a game with play money, then the play money has no objective value. I might care about the game, and have fun playing it with my friends, but it would be a choice whether or not to place any subjective attachment to the money; I think that I wouldn’t and would be rather equanimous about how much money I had in any moment. If I went home and looked carefully at the money to discover that it was actually a foreign currency, then it turns out that the money had objective value after all.
Regarding my value dilemma, the system X is myself. I attach value to many things in X. Some of this attachment feels like a choice, but I hazard that some of this attachment is not really voluntary. (For example, I have mirror neurons.) I would call these attachments ‘intellectual’ and ‘visceral’ respectively.
Generally, I do not have much value for subjective experience. If something only has value in ‘X’, then I have a tendency to negate that as a motivation. I’m not altruistic, I just don’t feel like subjective experience is very important. Upon reflection, I realize that re: social norms, I actually act rather selfishly when I think I’m pursuing something with objective value.
If there’s no objective value, then at the very least I need to do a lot of goal reorganization; losing my intellectual attachments unless they can be recovered as visceral attachments. At the worst, I might feel increasingly like I’m a meaningless closed system of self-generated values. At this point, though, I doubt I’m capable of assimilating an absence of objective value on all levels—my brain might be too old—and for now I’m just academically interested in how self-validation of value works without feeling like its an illusion.
I know this wasn’t your main point, but money doesn’t have objective value, either, by that definition. It only has value in situations where you can trade it for other things. It’s extremely common to encounter such situations, so the limitation is pretty ignorable, but I suspect you’re at least as likely to encounter situations where money isn’t tradeable for goods as you are to encounter situations where your own preferences and values aren’t part of the context.
I used the money analogy because it has a convenient idea of value.
While debating about the use of that analogy, I had already considered it ironic that the US dollar hasn’t had “objective” value since it was disconnected from the value of gold in 1933. Not that gold has objective value unless you use it to make a conductor. But at the level, I start losing track of what I mean by ‘value’. Anyway, it is interesting that the value of the US dollar is exactly an example of humans creating value, echoing Alicorn’s comment.
Real money does have objective value relative to the party, since you can buy things on your way home, but no objective value outside contexts where the money can be exchanged for goods.
If you are a closed system X, and something within system X only has objective value inasmuch as something outside X values it, then does the fact that other people care about you and your ability to achieve your goals help? They are outside X, and while their first-order interests probably never match yours perfectly, there is a general human tendency to care about others’ goals qua others’ goals.
If you mean that I might value myself and my ability to achieve my goals more because I value other people valuing that, then it does not help. My valuation of their caring is just as subjective as any other value I would have.
On the other hand, perhaps you were suggesting that this mutual caring could be a mechanism for creating objective value, which is kind of in line with what I think. For that matter, I think that my own valuation of something, even without the valuation of others, does create objective value—but that’s a FOOM. I’m trying to imagine reality without that.
That’s not what I mean. I don’t mean that their caring about you/your goals makes things matter because you care if they care. I mean that if you’re a closed system, and you’re looking for a way outside of yourself to find value in your interests, other people are outside you and may value your interests (directly or indirectly). They would carry on doing this, and this would carry on conferring external value to you and your interests, even if you didn’t give a crap or didn’t know anybody else besides you existed—how objective can you get?
I don’t think it’s necessary—I think even if you were the only person in the universe, you’d matter, assuming you cared about yourself—and I certainly don’t think it has to be really mutual. Some people can be “free riders” or even altruistic, self-abnegating victims of the scheme without the system ceasing to function. So this is a FOOV? So now it looks like we don’t disagree at all—what was I trying to convince you of, again?
I guess I’m really not sure. I’ll have to think about it a while. What will probably happen is that next time I find myself debating with someone asserting there is no Framework of Objective Value, I will ask them about this case; if minds can create objective value by their value-ing. I will also ask them to clarify what they mean by objective value.
Truthfully, I’ve kind of forgotten what this issue I raised is about, probably for a few days or a week.
I’m either not sure what you’re trying to do or why you’re trying to do it. What do you mean by FOOM here? Why do you want to imagine reality without it? How does people caring about each other fall into that category?
Yeah, I think I can relate to that. This edges very close to an affective death spiral, however, so watch the feedback loops.
The way I argued myself out of mine was somewhat arbitrary and I don’t have it written up yet. The basic idea was taking the concepts that I exist and that at least one other thing exists and, generally speaking, existence is preferred over non-existence. So, given that two things exist and can interact and both would rather be here than not be here, it is Good to learn the interactions between the two so they can both continue to exist. This let me back into accepting general sensory data as useful and it has been a slow road out of the deep.
I have no idea if this is relevant to your questions, but since my original response was a little off maybe this is closer?
This paragraph (showing how you argued yourself out of some kind of nihilism) is completely relevant, thanks. This is exactly what I’m looking for.
What do you mean by, “existence is preferred over non-existence”? Does this mean that in the vacuum of nihilism, you found something that you preferred, or that it’s better in some objective sense?
My situation is that if I try to assimilate the hypothesis that there is no objective value (or, rather, I anticipate trying to do so), then immediately I see that all of my preferences are illusions. It’s not actually any better if I exist or don’t exist, or if the child is saved from the tracks or left to die. It’s also not better if I choose to care subjectively about these things (and be human) or just embrace nihilism, if that choice is real. I understand that caring about certain sorts of these things is the product of evolution, but without any objective value, I also have no loyalty to evolution and its goals—what do I care about the values and preferences it instilled in me?
The question is; how has evolution actually designed my brain; in the state ‘nihilism’ does my brain (a) abort intellectual thinking (there’s no objective value to truth anyway) and enter a default mode of material hedonism that acts based on preferences and impulses just because they exist and that’s what I’m programmed to do or (b) does it cling to its ability to think beyond that level of programming, and develop this separate identity as a thing that knows that nothing matters?
Perhaps I’m wrong, but your decision to care about the preference of existence over non-existence and moving on from there appears to be an example of (a). Or perhaps a component (b) did develop and maintain awareness of nihilism, but obviously that component couldn’t be bothered posting on LW, so I heard a reply from the part of you that is attached to your subjective preferences (and simply exists).
Well, my bit about existence and non-existence stemmed from a struggle with believing that things did or did not exist. I have never considered nihilism to be a relevant proposal: It doesn’t tell me how to act or what to do. It also doesn’t care if I act as if there is an objective value attached to something. So… what is the point in nihilism?
To me, nihilism seems like a trap for other philosophical arguments. If those arguments and moral frameworks lead someone to the logical conclusion of nihilism, then they cannot escape. They still cling to whatever led them there, but say they are nihilists. This is the death spiral: believing that nothing matters while acting as if something does.
If I were to actually stop and throw away all objective morality, value, etc., then I would expect a realization that any belief in nihilism would have to go away too. At this point my presuppositions about the world reset and… what? It is this behavior that is similar to my struggles with existence.
The easiest summation of my belief that existence is preferred over non-existence is that existence can be undone and non-existence is permanent. If you want more I can type it up. I don’t know how helpful it will be against nihilism, however.
Agreed. I find that often it isn’t so much that I find the thought process intrinsically pleasurable (affective), but that in thinking about it too much, I over-stimulate the trace of the argument so that after a while I can’t recall the subtleties and can’t locate the support. After about 7 comments back and forth, I feel like a champion for a cause (no objective values RESULTS IN NIHILISM!!) that I can’t relate to anymore. Then I need to step back and not care about it for a while, and maybe the cause will spontaneously generate again, or perhaps I’ll have learned enough weighting in another direction that the cause never takes off again.
Feel free to tell me to mind my own business, but I’m curious. That other part: If you gave it access to resources (time, money, permission), what do you expect that it would do? Is there anything about your life that it would change?
Jack also wrote, “The next question is obviously “are you depressed?” But that also isn’t any of my business so don’t feel obligated to answer.”
I appreciate this sensitivity, and see where it comes from and why it’s justified, but I also find it interesting that interrogating personal states is perceived as invasive, even when this is the topic at hand.
However, I don’t feel like it’s so personal, and I will explain why. My goals here are to understand how the value validation system works outside FOOM. I come from the point of view that I can’t do this very naturally, and most people I know also could not. I try to identify where thought gets stuck and to find general descriptions of it that aren’t so personal. I think feeling like I have inconsistent pieces (i.e., like I’m going insane) would be a common response to the anticipation of a non-FOOM world.
To answer your question: a while ago, I thought my answer would be a definitive “no, this awareness wouldn’t feel any motivation to change anything”. I had written in my journal that even if there were a child lying on the tracks, this part of myself would just look on analytically. However, I felt guilty about this after a while, and I’ve since repressed the experience of this hypothetical awareness, so that it’s more difficult to recall.
But recalling, it felt like this: it would be “horrible” for the child to die on the tracks. However, what is “horrible” about horrible? There’s nothing actually horrible about it. Without some terminal value behind the value (for example, I don’t think I ever thought a child dying on the tracks was objectively horrible, but that it might be objectively horrible for me not to feel like horrible was horrible at some level of recursion), it seems that the value buck doesn’t get passed, and it doesn’t stop — it just disappears.
Actually, I practically never see it as invasive; I’m just aware that other people sometimes do, and try to act accordingly. I think this is a common mindset, actually: It’s easier to put up a disclaimer that will be ignored 90-99% of the time than it is to deal with someone who’s offended 1-10% of the time, and generally not worth the effort of trying to guess whether any given person will be offended by any given question.
I’m not sure how you came to that conclusion—the other sentences in that paragraph didn’t make much sense to me. (For one thing, one of us doesn’t understand what ‘FOOM’ means. I’m not certain it’s you, though.) I think I know what you’re describing, though, and it doesn’t appear to be a common response to becoming an atheist or embracing rationality (I’d appreciate if others could chime in on this). It also doesn’t necessarily mean you’re going insane—my normal brain-function tends in that direction, and I’ve never seen any disadvantage to it. (This old log of mine might be useful, on the topic of insanity in general. Context available on request; I’m not at the machine that has that day’s logs in it at the moment. Also, disregard the username, it’s ooooold.)
My Buddhist friends would agree with that. Actually, I pretty much agree with it myself (and I’m not depressed, and I don’t think it’s horrible that I don’t see death as horrible, at any level of recursion). What most people seem to forget, though, is that the absence of a reason to do something isn’t the same as the presence of a reason not to do that thing. People who’ve accepted that there’s no objective value in things still experience emotions, and impulses to do various things including acting compassionately, and generally have no reason not to act on such things. We also experience the same positive feedback from most actions that theists do—note how often ‘fuzzies’ are explicitly talked about here, for example. It does all add back up to normality, basically.
Thank you. So maybe I can look towards Buddhist philosophy to resolve some of my questions. In any case, it’s really reassuring that others can form these beliefs about reality and retain things that I think are important (like sanity and moral responsibility).
Sorry! FOOV: Framework Of Objective Value!
Okay, I went back and re-read that bit with the proper concept in place. I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
As to how non-FOOV value systems work, there seems to be a fair amount of variance. As you may’ve inferred, I tend to take a more nihilistic route than most, assigning value to relatively few things, and I depend on impulses to an unusual degree. I’m satisfied with the results of this system: I have a lifestyle that suits my real preferences (resources on hand to satisfy most impulses that arise often enough to be predictable, plus enough freedom and resources to pursue most unpredictable impulses), projects to work on (mostly based on the few things that I do see as intrinsically valuable), and very few problems. It appears that I can pull this off mostly because I’m relatively resistant to existential angst, though. Most value systems that I’ve seen discussed here are more complex, and often very other-oriented. Eliezer is an example of this, with his concept of coherent extrapolated volition. I’ve also seen at least one case of a person latching on to one particular selfish goal and pursuing that goal exclusively.
I’m pretty sure I’ve over-thought this whole thing, and my answer may not have been as natural as it would have been a week ago, but I don’t predict improvement in another week and I would like to do my best to answer.
I would define “mental problems” as either insanity (an inability or unwillingness to give priority to objective experience over subjective experience) or as a failure mode of the brain in which adaptive behavior (with respect to the goals of evolution) does not result from sane thoughts.
I am qualifying these definitions because I imagine two ways in which assimilating a non-FOOV value system might result in mental problems—one of each type.
First, extreme apathy could result. True awareness that no state of the universe is any better than any other state might extinguish all motivation to have any effect upon empirical reality. Even non-theists might imagine that by virtue of ‘caring about goodness’, they are participating in some kind of cosmic fight between good and evil. However, in a non-FOOV value system, there’s absolutely no reason to ‘improve’ things by ‘changing’ them. While apathy might be perfectly sane according to my definition above, it would be very maladaptive from a human-being-in-the-normal-world point of view, and I would find it troubling if sanity is at odds with being a fully functioning human person.
Second, I anticipate that if a person really assimilated that there was no objective value, and really understood that objective reality doesn’t matter outside their subjective experience, they would have much less reason to value objective truth over subjective truth. First, because there can be no value to objective reality outside subjective reality anyway, and second because they might more easily dismiss their moral obligation to assimilate objective reality into their subjective reality. So that instead of actually saving people who are drowning, they could just pretend the people were not drowning, and find this morally equivalent.
I realize now in writing this that for the second case sanity could be preserved – and FOOV morality recovered – as long as you add to your moral obligations that you must value objective truth. This moral rule was missing from my FOOV system (that is, I wasn’t explicitly aware of it) because objective truth was seen as valued in itself, and moral obligation was seen as being created by objective reality.
Ah. That makes more sense.
Also, a point I forgot to add in my above post: Some (probably the vast majority of) atheists do see death as horrible; they just have definitions of ‘horrible’ that don’t depend on objective value.