Meta-analysis of Writing Therapy
Robin Hanson recently mentioned “writing therapy” as potentially having surprisingly large benefits. In the example he gives, recently unemployed engineers who write about their experience find jobs more quickly than those who do not.
The meta-analysis paper he links to was pretty lame, but I found another meta-analysis, “Experimental disclosure and its moderators: A meta-analysis”, on the somewhat broader topic of experimental disclosure, which appears to be much better.
My judgment is non-expert, but it looks to me like a very high quality meta-analysis. The authors use a large number of studies (146) and include a large number of potential moderators, discuss their methodology in detail, and address publication bias intelligently.
The authors find small to moderate positive effects on measures of psychological health, physiological health and general life outcomes. They also find a number of interesting moderating factors.
(Around page 22.) The psychological health, physical health, and reported health effects were all around 0.05 weighted effect size or less, compared to 0.15 (3x); ‘health behaviors’ may have had a negative effect size. ‘General functioning/life outcomes’ was smaller, at 0.036 weighted (most from ‘work’ and ‘social relationships’).
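For readers unfamiliar with how a “weighted effect size” like the 0.036 above is produced: meta-analyses typically combine per-study effect sizes with inverse-variance weights, so more precise studies count for more. A minimal sketch, with made-up illustrative numbers (not values from the paper):

```python
# Inverse-variance weighting: each study's effect size is weighted by
# 1/variance, so precise (low-variance) studies dominate the average.

def weighted_effect_size(effects, variances):
    """Combine study effect sizes using inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Three hypothetical studies: small effects, differing precision.
effects = [0.02, 0.08, 0.15]      # standardized effect sizes (d)
variances = [0.001, 0.004, 0.010] # sampling variance of each estimate

print(round(weighted_effect_size(effects, variances), 3))  # → 0.041
```

Note how the pooled estimate (0.041) sits much closer to the most precise study’s 0.02 than a plain average (0.083) would; this is why a meta-analytic pooled effect can look smaller than the individual headline results.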
So, from the sound of it, the writing therapy helped most with personal relationships, but people considerably overestimate how much it helps. Which is interesting. I was thinking that this wasn’t sounding terribly impressive, but the authors cover that point:
The publication data is fun:
I was worried because early on it talked about the effect sizes being bigger in college students, but in the end:
Some useful advice:
So it sounds like for us, the basic idea should be to “write at home in private for non-distribution for half an hour every few days for a week about something you haven’t dealt with before”.
This actually sounds kind of similar to Alicorn’s Luminosity journalling stuff, going on my vague memories.
I had the idea that journaling can be helpful and beneficial to rationalists, and thought to write about it, but I first looked to see if someone had done it already.
This post was really helpful, though I found it only through a different post, since it didn’t have the keywords “journaling” and “diary”.
Here’s another benefit that isn’t discussed and relevant specifically to rationality:
We know that our memory is far from perfect, and that whenever we remember something we change the memory a little, and maybe add after the fact analysis to it.
Long-term journaling gives you a window into how you thought in the past, and how your thinking has changed.
I think it would also be very valuable for someone who’s just starting out with rationality to start a journal and, as part of it, track their journey with rationality. We’re all familiar with how many of the ideas in the sequences feel obvious after some time getting used to them. A journal would record and remind you how it was for you to first grapple with these new ideas.
Another benefit is noticing patterns and principles:
I noticed in my life that many times, after some events (meaningful, happy, sad, important, etc.), I note a principle to myself, a lesson from it. But also many times when it happens, I notice that I already “noted” this principle to myself, but since I hadn’t written it down or talked with someone about it, I forgot it. Not making the same mistakes over and over again is a large part of rationality; being able to review such moments and your thoughts on them at the time would be very helpful.
This study seems rather peculiar.
Note that they weren’t asked to write about their values with respect to science. Perhaps the context increased the likelihood they did so, or perhaps there was a place dependence to the effect—the feelings of value got anchored to the location you felt them in?
Otherwise, I’d expect to see the result generalize far and wide to their lives. On the other hand, that a 15 minute writing assignment would have such far ranging effects in a person’s life seems rather unlikely to me. That it would have such a wide ranging effect in that one class seems miraculous in itself.
Hence, I find it all rather peculiar.
Notice how the men did significantly worse on their exam scores after values affirmation. What’s the explanation for that?
And “stereotype threat” just seems like a non sequitur here. How is that in any way related to the writing task? I see in the abstract that they found that “Benefits were strongest for women who tended to endorse the stereotype that men do better than women in physics.”
And the “control” is as much an experiment as the “treatment”. Why shouldn’t we conclude that the “control” had a large negative effect on women, and particularly women who believed the stereotype (and data) that men are better at math?
Maybe the women who believed the data that men were better at math showed the greatest jump because they believed in data, and so had greater aptitude thereby? Maybe those women were just more impressionable to the value of others, and so disheartened by contemplating things they didn’t value that other people did.
The raw data seems odd, and the interpretation even more dubious. Just peculiar all the way around. It certainly warrants further study, and I’d particularly like to see it controlled for each individual with a test of their aptitude/achievement going into the class.
What on earth are you talking about? Where do math exams come into the picture of jsalvatier’s linked meta-analysis?
Sorry. I was replying to a link to an article below.
Here’s the author’s summary of the moderators of effect size:
The overall weighted effect size was .063 (p. 834)
Due to a lack of focus I could not read the whole document, but it does look pretty good to my untrained eyes.
The moderating factors seem to be pretty important. I was unable to collect them all, but they should sum up nicely into a guideline for how to do writing therapy.
Here’s another study that shows significant effects for a particular type of writing therapy. I’m going to use this in my classes next semester.
This study seems rather peculiar.
Note that they weren’t asked to write about their values with respect to science. Perhaps the context increased the likelihood they did so, or perhaps there was a place dependence to the effect—the feelings of value got anchored to the location you felt them in?
Otherwise, I’d expect to see the result generalize far and wide to their lives. On the other hand, that a 15 minute writing assignment would have such far ranging effects in a person’s life seems rather unlikely to me. That it would have such a wide ranging effect in that one class seems miraculous in itself.
Hence, I find it all rather peculiar.
Notice how the men did significantly worse on their exam scores after values affirmation. What’s the explanation for that?
And “stereotype threat” just seems like a non sequitur here. How is that in any way related to the writing task? I see in the abstract that they found that “Benefits were strongest for women who tended to endorse the stereotype that men do better than women in physics.”
And the “control” is as much an experiment as the “treatment”. Why shouldn’t we conclude that the “control” had a large negative effect on women, and particularly women who believed the stereotype (and data) that men are better at math?
Maybe the women who believed the data that men were better at math showed the greatest jump because they believed in data, and so had greater aptitude thereby? Maybe those women were just more impressionable to the value of others, and so disheartened by contemplating things they didn’t value that other people did.
The raw data seems odd, and the interpretation even more dubious. Just peculiar all the way around. It certainly warrants further study, and I’d particularly like to see it controlled for each individual with a test of their aptitude/achievement going into the class.
That difference looks to me to be within the margin of error.
Among the stereotyped group that most believed the stereotype, there was the greatest divergence between the effects of the two writing exercises. Your suggestion would predict that all of the stereotype-believing group would improve equally. Also, “they believed in data, and so had greater aptitude thereby”? It would be a lot less embarrassing if you just figured out and stated your true rejection of this study.
I think that’s the mechanism the authors believe in; the place or context of the science classroom becomes less intimidating when the first thing they did in the semester is prime themselves with confidence.
You can defy the data if you like, but it seems pretty plausible to me.
What did they write that gave you that impression?