Okay, I went back and re-read that bit with the proper concept in place. I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
As to how non-FOOV value systems work, there seems to be a fair amount of variance. As you may have inferred, I tend to take a more nihilistic route than most, assigning value to relatively few things and depending on impulses to an unusual degree. I’m satisfied with the results of this system: I have a lifestyle that suits my real preferences (resources on hand to satisfy most impulses that arise often enough to be predictable, plus enough freedom and resources to pursue most unpredictable impulses), projects to work on (mostly based on the few things that I do see as intrinsically valuable), and very few problems. It appears that I can pull this off mostly because I’m relatively resistant to existential angst, though. Most value systems that I’ve seen discussed here are more complex, and often very other-oriented. Eliezer is an example of this, with his concept of coherent extrapolated volition. I’ve also seen at least one case of a person latching onto one particular selfish goal and pursuing that goal exclusively.
I’m pretty sure I’ve over-thought this whole thing, and my answer may not have been as natural as it would have been a week ago, but I don’t predict improvement in another week and I would like to do my best to answer.
I would define “mental problems” as either insanity (an inability or unwillingness to give priority to objective experience over subjective experience) or as a failure mode of the brain in which adaptive behavior (with respect to the goals of evolution) does not result from sane thoughts.
I am qualifying these definitions because I imagine two ways in which assimilating a non-FOOV value system might result in mental problems—one of each type.
First, extreme apathy could result. True awareness that no state of the universe is any better than any other state might extinguish all motivation to have any effect upon empirical reality. Even non-theists might imagine that by virtue of ‘caring about goodness’, they are participating in some kind of cosmic fight between good and evil. However, in a non-FOOV value system, there’s absolutely no reason to ‘improve’ things by ‘changing’ them. While apathy might be perfectly sane according to my definition above, it would be very maladaptive from a human-being-in-the-normal-world point of view, and I would find it troubling if sanity is at odds with being a fully functioning human person.
Second, I anticipate that if a person really assimilated that there was no objective value, and really understood that objective reality doesn’t matter outside their subjective experience, they would have much less reason to value objective truth over subjective truth: first, because there can be no value to objective reality outside subjective reality anyway, and second, because they might more easily dismiss their moral obligation to assimilate objective reality into their subjective reality. Instead of actually saving people who are drowning, they could simply pretend the people were not drowning and find this morally equivalent.
I realize now, in writing this, that in the second case sanity could be preserved, and FOOV morality recovered, as long as you add valuing objective truth to your moral obligations. This moral rule was missing from my FOOV system (that is, I wasn’t explicitly aware of it) because objective truth was seen as valuable in itself, and moral obligation was seen as being created by objective reality.
Also, a point I forgot to add in my above post: Some (probably the vast majority of) atheists do see death as horrible; they just have definitions of ‘horrible’ that don’t depend on objective value.
Sorry! FOOV: Framework Of Objective Value!
Ah. That makes more sense.