Feel free to tell me to mind my own business, but I’m curious. That other part: If you gave it access to resources (time, money, permission), what do you expect that it would do? Is there anything about your life that it would change?
Jack also wrote, “The next question is obviously ‘are you depressed?’ But that also isn’t any of my business, so don’t feel obligated to answer.”
I appreciate this sensitivity, and see where it comes from and why it’s justified, but I also find it interesting that interrogating personal states is perceived as invasive even when it is the very topic at hand.
However, I don’t feel like it’s so personal, and I will explain why. My goal here is to understand how the value validation system works outside FOOM. I come from the point of view that I can’t do this very naturally, and most people I know also could not. I try to identify where thought gets stuck and to find general descriptions of it that aren’t so personal. I think feeling like I have inconsistent pieces (i.e., like I’m going insane) would be a common response to the anticipation of a non-FOOM world.
That other part: If you gave it access to resources (time, money, permission), what do you expect that it would do? Is there anything about your life that it would change?
To answer your question: a while ago, I thought my answer would be a definitive “no, this awareness wouldn’t feel any motivation to change anything.” I had written in my journal that even if there were a child lying on the tracks, this part of myself would just look on analytically. However, I felt guilty about this after a while, and I’ve since repressed the experience of this hypothetical awareness, so that it’s more difficult to recall.
But recalling, it felt like this: it would be “horrible” for the child to die on the tracks. However, what is “horrible” about horrible? There’s nothing actually horrible about it. Without some terminal value behind the value (for example, I don’t think I ever thought a child dying on the tracks was objectively horrible, but that it might be objectively horrible for me not to feel that horrible was horrible, at some level of recursion), it seems that the value buck doesn’t get passed, doesn’t stop; it just disappears.
I appreciate this sensitivity, and see where it comes from and why it’s justified, but I also find it interesting that interrogating personal states is perceived as invasive even when it is the very topic at hand.
Actually, I practically never see it as invasive; I’m just aware that other people sometimes do, and try to act accordingly. I think this is a common mindset: it’s easier to put up a disclaimer that will be ignored 90–99% of the time than to deal with someone who’s offended 1–10% of the time, and generally not worth the effort of trying to guess whether any given person will be offended by any given question.
I think feeling like I have inconsistent pieces (i.e., like I’m going insane) would be a common response to the anticipation of a non-FOOM world.
I’m not sure how you came to that conclusion; the other sentences in that paragraph didn’t make much sense to me. (For one thing, one of us doesn’t understand what ‘FOOM’ means. I’m not certain it’s you, though.) I think I know what you’re describing, and it doesn’t appear to be a common response to becoming an atheist or embracing rationality (I’d appreciate it if others could chime in on this). It also doesn’t necessarily mean you’re going insane; my normal brain-function tends in that direction, and I’ve never seen any disadvantage to it. (This old log of mine might be useful, on the topic of insanity in general. Context available on request; I’m not at the machine that has that day’s logs in it at the moment. Also, disregard the username; it’s ooooold.)
But recalling, it felt like this: it would be “horrible” for the child to die on the tracks. However, what is “horrible” about horrible? There’s nothing actually horrible about it. Without some terminal value behind the value (for example, I don’t think I ever thought a child dying on the tracks was objectively horrible, but that it might be objectively horrible for me not to feel that horrible was horrible, at some level of recursion), it seems that the value buck doesn’t get passed, doesn’t stop; it just disappears.
My Buddhist friends would agree with that. Actually, I pretty much agree with it myself (and I’m not depressed, and I don’t think it’s horrible that I don’t see death as horrible, at any level of recursion). What most people seem to forget, though, is that the absence of a reason to do something isn’t the same as the presence of a reason not to do it. People who’ve accepted that there’s no objective value in things still experience emotions and impulses to do various things, including acting compassionately, and generally have no reason not to act on them. We also experience the same positive feedback from most actions that theists do; note how often ‘fuzzies’ are explicitly talked about here, for example. It does all add back up to normality, basically.
Thank you. So maybe I can look towards Buddhist philosophy to resolve some of my questions. In any case, it’s really reassuring that others can form these beliefs about reality and retain things that I think are important (like sanity and moral responsibility).
Sorry! FOOV: Framework Of Objective Value!

Okay, I went back and re-read that bit with the proper concept in place. I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
As to how non-FOOV value systems work, there seems to be a fair amount of variance. As you may’ve inferred, I tend to take a more nihilistic route than most, assigning value to relatively few things, and I depend on impulses to an unusual degree. I’m satisfied with the results of this system: I have a lifestyle that suits my real preferences (resources on hand to satisfy most impulses that arise often enough to be predictable, plus enough freedom and resources to pursue most unpredictable impulses), projects to work on (mostly based on the few things that I do see as intrinsically valuable), and very few problems. It appears that I can pull this off mostly because I’m relatively resistant to existential angst, though. Most value systems that I’ve seen discussed here are more complex, and often very other-oriented. Eliezer is an example of this, with his concept of coherent extrapolated volition. I’ve also seen at least one case of a person latching on to one particular selfish goal and pursuing that goal exclusively.
I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
I’m pretty sure I’ve over-thought this whole thing, and my answer may not be as natural as it would have been a week ago, but I don’t predict improvement in another week, so I would like to do my best to answer.
I would define “mental problems” as either insanity (an inability or unwillingness to give priority to objective experience over subjective experience) or a failure mode of the brain in which adaptive behavior (with respect to the goals of evolution) does not result from sane thoughts.
I am qualifying these definitions because I imagine two ways in which assimilating a non-FOOV value system might result in mental problems—one of each type.
First, extreme apathy could result. True awareness that no state of the universe is any better than any other might extinguish all motivation to have any effect upon empirical reality. Even non-theists might imagine that, by virtue of ‘caring about goodness’, they are participating in some kind of cosmic fight between good and evil. In a non-FOOV value system, however, there is absolutely no reason to ‘improve’ things by ‘changing’ them. While apathy might be perfectly sane according to my definition above, it would be very maladaptive from a human-being-in-the-normal-world point of view, and I would find it troubling if sanity were at odds with being a fully functioning human person.
Second, I anticipate that if a person really assimilated that there was no objective value, and really understood that objective reality doesn’t matter outside their subjective experience, they would have much less reason to value objective truth over subjective truth: first, because there can be no value to objective reality outside subjective reality anyway, and second, because they might more easily dismiss their moral obligation to assimilate objective reality into their subjective reality. Instead of actually saving people who are drowning, they could just pretend the people were not drowning, and find this morally equivalent.
I realize now, in writing this, that in the second case sanity could be preserved (and FOOV morality recovered) as long as you add to your moral obligations that you must value objective truth. This moral rule was missing from my FOOV system (that is, I wasn’t explicitly aware of it) because objective truth was seen as valuable in itself, and moral obligation was seen as being created by objective reality.
Ah. That makes more sense.

Also, a point I forgot to add in my above post: Some (probably the vast majority of) atheists do see death as horrible; they just have definitions of ‘horrible’ that don’t depend on objective value.