This reminded me of something.
In the book Happiness: Lessons from a New Science by Richard Layard, the author goes into detail about how mood is strongly correlated with differential activation in the two hemispheres of the brain. The left forebrain is more strongly activated than the right forebrain when a person is happy, and the right forebrain is more strongly activated when a person is sad. (Ramachandran mentions that stroke victims with left brain damage frequently become depressed, while ones with right brain damage don’t.)
If the left brain interprets data through the perspective of current theories and the right brain forces theory revision, and left brain activation is associated with happiness and right brain activation is associated with unhappiness, what does that say about happiness and rationality?
CronoDAS may already have said this, but just to elaborate a bit: one might wonder whether sadness increases useful theory revision, and thereby increases aspects of rationality. And one might conversely wonder whether the modes of thinking that prompt useful theory revision, rather than coherent speeches for social posturing, tend to directly increase sadness. (Not because they cause us to notice sad things, but because hanging out in those modes of thought is itself a sadness-associated activity, like frowning.)
By way of analogy: happiness causes smiling, actively working on projects, and perhaps socializing; and forcing yourself to smile, start projects, or socialize probably increases happiness in turn. (I’ve seen studies backing up the active-work effect as well, but I can’t find them.)
I’m not so sure myself. It seems to me like “theory revision” is fun. Then again, I suppose it depends on the precise sort of theory revision we’re talking about. When I’m revising my theories because I’m wrong/losing, that’s a more negatively charged state than just idly speculating. But the mood doesn’t necessarily last long, and is quickly replaced by the pleasure of an “aha”.
A long time ago, my wife and I learned to refer to these situations as “growth opportunities”—said with an ironic look and a bit of a groan—but viewing them as such definitely improved our moods in dealing with them.
Thus, I find it difficult to believe in a hardwired causal connection from revision to sadness, even though it’s easy for me to believe in a connection running the other direction.
Personally, I think the “Lisa Simpson Happiness Theory” (negative correlation between happiness and intelligence) arises from the mistaken tendency of intelligent people to assume that their “shoulds” exist in the “territory” (and not just in their own map), because they can come up with better arguments for their shoulds than for those of others. Intelligence is at least moderately correlated with this phenomenon, and this phenomenon is then highly correlated with people not wanting to be around you, which in turn is at least moderately correlated with phenomena such as “not having a life” and being unhappy. ;-)
I agree brainstorming-type idea change is fun. Exploration, and considering new avenues and potential projects, is plausibly happiness-inducing in general; such exploration and initiative (or successfully engaging others with the projects, and the hopes around the projects?) may be one of the core functions of happiness.
I also agree that deep personal change can be deeply satisfying and can have good aesthetics, bring fresh air and happiness, etc. And I agree that the negatively charged state of wincing at a mistake need not pervade most belief-revision.
That said, I’d still assign significant (perhaps 40%) probability to there being a kind of thinking that is useful for parts of rationality and that directly causes sadness (not by an impossibly strong route—you can frown and still be happy—but that causes a force in that direction). Perhaps a kind of thinking that’s involved in cutting through one’s own social bluster to take an honest look at one’s own abilities, or at the symmetry between one’s own odds of being right, or of succeeding, and those of others in like circumstances. Or perhaps a kind of thinking that’s involved in critically analyzing the strengths and weaknesses of particular theories or beliefs, rather than in exploring.
The evidence that’s moving my belief is roughly: (a) correlation between unhappiness and willingness to actually update, among my non-OB acquaintances; (b) a prior (from other studies) that most effects of particular emotions can also be causes of those same emotions; (c) a vague notion that happiness might be for social interaction and enrolling others in one’s ostensibly sure-to-succeed projects, while unhappiness might be for re-assessing. (What else might they be for? Motivating behavior via rewards and punishments doesn’t work as an evolutionary theory of happiness and sadness; moods have pervasive, and so potentially costly, effects on behavior over long periods of time, in a way that brief, intense pain/pleasure doesn’t. Those pervasive effects have to be part of what evolution is after.)
Interesting. Well, my experience, based on personal and student observation, is that contemplating “facing the truth” about a situation is painful, but actually facing it is a relief. It’s almost as if evolution “wants” us to avoid facing the truth until the last possible moment… but once we do, there’s no point in having bad feelings about it any more. (After all, you need to get busy being happy about your new theories, so you can convince everyone it’s going to be okay!)
So unhappiness may result from merely considering the possibility that things aren’t fitting your theories… while remaining undecided about whether to drop the old theories and change.
In other words, while the apologist and the revolutionary are in conflict, you suffer. But as soon as the apologist gives up and lets the revolutionary take over, the actual suffering goes away.
This seems to me like a testable hypothesis: I propose that, given a person who is unhappy about some condition in their life, an immediate change of affect could be brought about by getting the person to explicitly admit to themselves whatever they are afraid is happening or going to happen, especially any culpability they believe they personally hold in relation to it. The process of admitting these truths should create an immediate sensation of relief in most people, most of the time.
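Purely as an illustration of what testing this could look like (this sketch is my own, not anything proposed in the thread: the 1–10 mood scale, the sign test, and the ratings below are all assumptions), each participant would rate their mood immediately before and after being walked through explicitly admitting what they fear is happening and their own culpability; if relief is the typical response, post-ratings should exceed pre-ratings for most participants.

```python
# Minimal sketch, assuming hypothetical pre/post mood ratings on a 1-10 scale.
# A sign test asks: did "most people, most of the time" improve, beyond what
# a coin flip would predict?
from math import comb

def sign_test_p_value(pre, post):
    """Two-sided sign test on paired ratings; ties carry no information."""
    improved = sum(1 for a, b in zip(pre, post) if b > a)
    declined = sum(1 for a, b in zip(pre, post) if b < a)
    n = improved + declined
    k = max(improved, declined)
    # Probability of an imbalance at least this extreme under Binomial(n, 0.5),
    # doubled for a two-sided test and capped at 1.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical ratings, purely for illustration:
pre  = [3, 4, 2, 5, 3, 4, 2, 3, 5, 4]
post = [6, 5, 4, 5, 6, 7, 3, 5, 6, 4]
print("improved:", sum(b > a for a, b in zip(pre, post)), "of", len(pre))
print("two-sided sign-test p ~", round(sign_test_p_value(pre, post), 4))
```

A sign test (rather than, say, a t-test on the rating differences) matches the hypothesis as stated, since the claim is about the direction of the change in most people rather than its average size.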
I feel pretty confident about this, actually, because it’s the first step of a technique I use, called “truth loops”. The larger technique is more than just fixing the unhappiness (it goes on to “admitting the truth” about other things besides the current negative situation), so I wasn’t really thinking about it in this limited way before.
Meanwhile, although I do accept that, in general, affect-effects can also be affect-causes, I don’t think there’s as universal or simple a correlation between them as some people imply. For example, smiling does bias you towards happiness… but if you’re doing it because you’re being pestered to, it won’t stop you from also being pissed off! And if you’re doing it because you know you’re sad and just want to be happy, you may also feel stupid or fake. Our emotional states aren’t really that simple; we easily can (and frequently do!) have “mixed feelings”.
I believe this is both widely accepted and true.
See also Robyn Dawes and Robin Hanson on therapy, and Eliezer on Dawes.
Well, I’m narrowing the hypothesis a bit: I’m guessing that instead of talking to a math professor for some period of time, you could cut the process a lot shorter by getting straight to the damaging admissions. ;-)
Of course, there is also good evidence that simply writing about such things is beneficial, such as the study showing that 2 minutes of writing/day (about a personal trauma) improves your health.
I’m just seeing if we can narrow down to a more precisely-defined variable with greater correlation to positive results. That is, that the specific thing that needs to be included in the writing or talking is the admission of a problem and one’s worst-case fears about it.
Both the unhappiness and the willingness to update could proceed from low status/self-esteem. In that case, I would expect the correlation to be stronger with updating in response to others’ opinions than to new information or self-generated ideas. I can’t tell whether that’s so, although I think I notice the same correlation, in myself as well as in others. On the other hand, genuinely depressed people are (at least stereotypically, but in my limited experience actually) unwilling to update regarding the (nominal) reasons for their sadness.
Agreed that direct reinforcement is unlikely, but there are other possible complex reasons; e.g. evolutionary approaches to depression.
What’s also interesting about that idea is that chronically unhappy rationalists might be contemplating the idea that rationality leads to unhappiness, while failing to accept it as a fact.
I mean, most of the people who go around saying that intelligence isn’t correlated with happiness are NOT saying this to mean, “therefore I will stop being so damn intelligent.” (Certainly I wasn’t, when I thought that.)
What they’re really doing—or at least what I was doing—is using their unhappiness to prove their intelligence. That is, “look, I have a useful quality that should be acknowledged, and by the way, I’m making a big sacrifice for all of you by giving up my own happiness in search of the Truth—you can thank me later”. The supposed lamentation is really just a disguised bid for status and approval.
However, if they were to emotionally accept that their theories are not working (as I eventually did, after enough pain), then they’d start being unhappy a bit less often.
Another interesting hypothesis to test, even if it’s not as much fun as squirting cold water in somebody’s ear. ;-)
Is the “Lisa Simpson Happiness Theory” the same as the “Charlie Gordon Happiness Theory”, where intelligence leads to arrogance as well as to other people not knowing what you’re talking about, both of which lead to alienation, which in turn leads to unhappiness?
– Ayn Rand, The Fountainhead
Actually, it just sounds like when we’re unhappy, we’re more likely to be willing to revise our theories. It doesn’t say anything about the rationality of the theories we used before, or the ones we’re about to have.
Well, I suppose you could say it means the previous theories didn’t produce a good result, but that’s not necessarily correlated with the rationality of the theories. If a theory doesn’t work the first time you try it, that doesn’t necessarily make it wrong.
In any event, the person using “true” rationality will have fewer occasions for unhappiness over the long haul, since their theories, correlating better with the relevant realities, will present fewer “opportunities” for revision.
(Of course, I happen to think that if you’ll really be happier over the course of your life believing something false, then great, go for it. I just also believe that the probability of that actually being the case is very low… especially when compared to the greater pain of discovering the falsehood later.)
Wait… indoctrination/fanaticization techniques rely on making the person miserable, right?
...this is getting really uncomfortable.
I’ve suspected that for a while, actually.