We should do something about this meme that “meaning” has to be associated with “believing in ontologically basic mental things”.
In his keynote at the LWCW, Val of CFAR made the point that caring is an essential part of rationality.
Eliezer also speaks about the value of having something to protect. It’s Harry’s “The power The Dark Lord knows not”.
That is not easy. The meaning of a sentence equals the intent of the speaker, so “the meaning of life” implies something, somewhere, that made our lives with an intent, for a purpose. It does not have to be ontologically basic: if it turned out we live in a simulation, as brains in vats in a grand experiment, that would at least explain an immediate intent. But then we would worry about the meaning of the lives of the experimenters themselves, trying to reduce it to some intent outside them. So at the end of the day, the only satisfactory meaning-of-life would be an ontologically basic intent from an ontologically basic mind. Divorcing the two from each other is not easy.
Quite frankly, I don’t have an optimistic solution. I am a natural pessimist and felt validated when I realized all this. No, there is no meaning to life at all, because nothing out there has ontologically basic intent. The optimistic existentialist idea that we can “find” meaning in life is equally invalid: we cannot find something that does not exist. We can try to “put” meaning into life, i.e. have intents and goals, but it will never feel as powerful as it does to people who believe in an intent outside themselves (providence, fate, karma). Let’s just suck it up; there is really no solution here.
The only optimists in this regard are the people who are glad about all this because it means freedom for them. How they decide whether something is important still beats me; perhaps they have an internal value function that does not need to borrow terminal goals from an external source.
I wouldn’t feel that the “meaning of life” had been answered if we turned out to be brains in vats, or whatever, created with a specific purpose in mind. Notwithstanding the fact that they would obviously have a lot of power to punish or reward me, why should I even care what they think?
To me it seems the answer would have to be an independent, ontologically basic intent, but then I’m not entirely sure I should care even about that. So a satisfactory “meaning of life” might be impossible even in principle, which I think lends a lot more credibility to the existentialists.
An answer I wrote in response to a related question
Does atheism necessarily lead to nihilism? (I think so, in the grand scheme of things? But the world/our species means something to us, and that’s enough, right?)
was:
No. Atheism does remove one set of symbol-behavior-chains in your mind, yes. But a complex mind will most likely lock into another, better-grounded set of symbol-behavior-chains, one that is not nihilistic but, depending on your emotional setup, somehow connected to terminal values and to acting on them.
In a way, you compartmentalize the thought of missing meaning away as a kind of unhelpful noise (that’s how I phrased it at the LWCW). This is not unreasonable (ahem): after all, the search for meaning is itself meaningless for a conscious process that has evolved in this meaningless environment.
Well, this locking does not really seem to work well for me. I know that ideal terminal values should be along the lines of wanting other people to be happy, but I really struggle to get from the fact that some signals in some brains are labelled “happiness” to the value that these signals matter. Since I have a typically depressive personality, not really caring about myself being happy, I cannot really care about others being happy either, and thus no terminal values are found. The struggle is largely this: if certain brain signals like happiness are not inherently marked with little XML tags saying “yes, you should care about this”, where does the should, the value, come from?
The closest thing I can get to is something like nationalism extended over all humankind: we are all 22nd cousins or something, so let’s be allies and face this cold, cruel, lifeless universe together, or something similarly sentimental. But it isn’t a terminal value; it is more like a bit of a feeling of affection. A true utilitarian would even care about a sentient computer being happy, or a sentient computer suffering or dying, and I just cannot figure out why.
Since I have a typically depressive personality, not really caring about myself being happy, I cannot really care about others being happy either, and thus no terminal values are found.
Well, thinking about it, I realize that for your kind of personality, falling back to caring and following goals indeed doesn’t seem necessary. On the other hand, the arbitrariness of nihilism isn’t that different from the passivity of depression, so in a way maybe you already did lock back into the same pattern anyway?
Even materialists crave some things that are traditionally associated with religion.
FTFY.