I hope that my reply does not in any way discourage Richard Kennaway from giving his own; I am curious about different responses. But here is mine: rationalism sets out to find better ways to satisfy values, but in the process it finds that those values are negated, or that it would be more rational to modify them.
Some time ago, I had grand hopes that, as a human being embedded in reality, I could just look around and think about things, and that with some steady effort I might find a worldview—at least an epistemology—that would bring everything together, or that I could be involved in a process of bringing things together. Kind of the way religion would, if it were believable and not a bunch of nonsense. However, the continued application of thought and reason to life just seems to negate the value of life.
Intellectually, I’m in a place where life presents as meaningless. While I can’t “go back” to religious thinking—in fact, I suspect I was never actually there; I’ve only ever been looking for a comprehensive paradigm—I think religions have the right idea; they are wise to the fact that intellectualism/objectivity is not the way to go when it comes to experiencing “cosmic meaning”.
Many people never think about the doublethink that religion requires. But I suspect many more people have thought about things both ways … a lifetime is a long time, with space for lots of thoughts … and found that “intellectualism” requires doublethink as well (compartmentalization), but in a way that is immensely less satisfying. In the latter case, you intellectually know that “nothing matters”, yet you are powerless to experience and apply this viscerally, due to biology. Viscerally, you continue to seek comfort and avoid pain, while your intellect tells you there’s no purpose to your movements.
A shorter way of saying all of this: Being rational is supposed to help humans pursue their values. But it’s pretty obvious that having faith is something that humans value.
Although this comment is already long, it seems a concrete example is needed. Culturally, it appears that singularitarians value information (curiosity) and life (immortality). Suppose immortality were granted: we upload our brains to something replicable and durable so that we can persist forever without any concerns. What in the world would we be motivated to do? What would be the value of information? So what if the digits of pi stretched endlessly ahead of me?
I think the “mental muscles” model I use is helpful here. We have different ways of thinking that are useful for different things—mental muscles, if you will.
But the muscles used in critical thinking are, well, critical. They involve finding counterexamples and things that are wrong. While this is useful in certain contexts, it has negative side effects on one’s direct quality of life, just as using one physical muscle to the exclusion of all others would create problems.
Some of the mental muscles used by religion, OTOH, are appreciation, gratitude, acceptance, awe, compassion… all of which have more positive direct effects on quality of life.
In short, even though reason has applications that indirectly lead to improved circumstances of living, its overuse is directly detrimental to the quality of experience that occurs in that life. And while exclusive use of certain mental muscles used in religion can indirectly lead to worsened circumstances of living, they nonetheless contribute directly to an improved quality of experience.
I’ve pretty much always felt that the problem with LessWrong is that it consists of an effort by people who are already overusing their critical faculties, seeking to improve their quality of experience, by employing those faculties even more.
In your case, the search for a comprehensive worldview is an example of this: i.e., believing that if your critical faculty were satisfied, then you would be happy. Instead, you’ve discovered that using the critical faculty simply produces more of the same dissatisfaction that using the critical faculty always produces. In a very real sense, the emotion of dissatisfaction is the critical faculty.
In fact, I got the idea of mental muscles from Minsky’s book The Emotion Machine, wherein he proposes mental “resources” organized into larger activation patterns by emotion. That is, he proposes that emotions are actually modes of thought that determine which resources (muscles) are activated or suppressed in relation to the topic. Or, in other words, he proposes that emotions are a form of functional metacognition.
(While Minsky calls the individual units “resources”, I prefer the term “muscles”, because, as with physical muscles, they can be developed with training, some are more appropriate for some tasks than others, and so on. It’s also more vivid and suggestive when training to either engage or “relax” specific “muscle groups”.)
Anyway… tl;dr version: emotions and thinking faculties are linked, so how you think is how you feel and vice versa, and your choice of which ones to use has non-trivial and inescapable side-effects on your quality of life. Choose wisely. ;-)
I’ve always suspected that introspection is tied to negative emotions. It’s more a tool for figuring out solutions to problems than a happy state like ‘being in flow’. People can get addicted to introspection because it feels productive, but it remains depressing if no positive action is taken from it.
Do you think this is related to the mental muscles model?
Yep—Minsky actually uses something like it as an example.
I agree, and this is insightful: thinking in certain ways results in specific, predictable emotions. The way I feel about reality is the result of the state of my mind, which is a choice. However, exercising the other set of muscles does not seem to be epistemically neutral: those muscles generate thoughts that my critical faculty would be … critical of.
For me, many of these muscles seem to require some degree of magical thinking. They generate a belief in a presence that is taking care of me, or at least a feeling for the interconnectedness and self-organization of reality. Is this dependency unusual? Am I mistaken about the dependence?
Consider a concrete example: enjoying the sunshine. Enjoyment seems neutral. However, if I want to feel grateful, it seems I must feel grateful towards something. I can personify the sun itself, or reality. It seems silly to personify the sun, but I find it quite natural to personify reality. I currently repress personifying reality with my critical muscles; after a while, I suspect it too would come to feel silly.
I’m not sure what I mean by ‘personify’, but while false (or silly), it also seems harmless. Being grateful for the sun never caused me to make—say—a biased prediction about future experiences with the sun. But while I’ve argued a few times here that one should be “allowed” false beliefs if they increase quality of life without penalty, I find that I am currently in a mode of preferring “rational” emotions over allowing impressions that would feel silly.
Is this conflict “real”?
Nope. The idea that your brain’s entire contents need to be self-consistent is just the opinion of the part of you that finds inconsistencies and insists they’re bad. Of course they are… to that part of your brain.
I teach people these questions for noticing and redirecting mental muscles:
What am I paying attention to? (e.g. inconsistencies)
Is that useful? (yes, if you’re debugging a program, doing an engineering task, etc.; no, if you’re socializing or doing something fun)
What would it be useful for me to pay attention to?
As for personifying the sun or reality: is that really necessary? I have not personally observed that gratitude must be towards something in particular, or that it needs to be personified. One can be grateful in the abstract—thank luck, or probability, or the Tegmark level IV multiverse if you must. Or “thank Bayes!” ;-)
Sure, there’s a link between those muscles and magical thinking. I think Einstein’s question about whether the universe is a friendly place is related. I also think this is the one place where an emphasis on epistemic truth and decompartmentalization is potentially a serious threat to one’s long-term quality of life.
I think that our brains and bodies more or less have an inner setting for “how friendly/hostile is my environment”—and believing that it’s friendly has enormous positive impact, which is why religious people who believe in a personally caring deity score so high on various quality of life measures, including recovery from illness.
So, this is one place where you need to choose carefully which truths you’re going to pay attention to, and worry much more about whether you’re going to let too much critical faculty leak over into your basic satisfaction with and enjoyment of life.
Much more than you should worry about whether your uncritical enjoyment is going to leak over and ruin your critical thinking.
Trust me, if you’re worrying about that, then it’s a pretty good sign that the reverse is the problem. (i.e., your critical faculty already has too much of an upper hand!)
This is one reason I say here that I’m an instrumentalist: it’s more important for me to believe things that are useful than things that are true. And I can (now, after quite a lot of practice) switch off my critical faculties enough to learn useful things from people who have ridiculously untrue theories about how they work.
For example, “law of attraction” people believe all sorts of stupidly false things… that are nonetheless very useful to believe, or at least to act as if they were true. But I avoid epistemic conflict by viewing such theories as mnemonic fuel for intuition pumps, rather than as epistemically truthful things.
In fact, I pretty much assume everything is just a mnemonic/intuition pump, even the things that are currently considered epistemically “true”. If you’ll notice, over the long term such “truths” of one era get revised to be “less wrong”, even though the previous model usually worked just fine for whatever it was being used for, up to a certain point. (e.g. Newtonian physics)
(Sadly, as models become “less wrong”, they tend to become less and less useful as mnemonics or intuition pumps, and to require outside tools or increased conscious cognition to become useful; e.g., Einsteinian physics and quantum mechanics.)
Without really being able to make a case that I have successfully done so, I believe it’s possible to improve my life by thinking accurately and making wise choices. It’s hard to think clearly about areas of painful failure, and it’s hard to motivate myself to search for invalidating experiences rather than self-protectively circumscribing my efforts. But on the other hand, I love the feeling of facing and knowing reality.