One long-held theory has been that people become socially isolated because of their poor social skills — and, presumably, as they spend more time alone, the few skills they do have start to erode from lack of use. But new research suggests that this is a fundamental misunderstanding of the socially isolated. Lonely people do understand social skills, and often outperform the non-lonely when asked to demonstrate that understanding. It’s just that when they’re in situations where they need those skills the most, they choke.
In a paper recently published in the journal Personality and Social Psychology Bulletin, Franklin & Marshall College professor Megan L. Knowles led four experiments that demonstrated lonely people’s tendency to choke when under social pressure. In one, Knowles and her team tested the social skills of 86 undergraduates, showing them 24 faces on a computer screen and asking them to name the basic human emotion each face was displaying: anger, fear, happiness, or sadness. She told some of the students that she was testing their social skills, and that people who failed at this task tended to have difficulty forming and maintaining friendships. But she framed the test differently for the rest of them, describing it as a this-is-all-theoretical kind of exercise.
Before they started any of that, though, all the students completed surveys that measured how lonely they were. In the end, the lonelier students did worse than the non-lonely students on the emotion-reading task — but only when they were told they were being tested on their social skills. When the lonely were told they were just taking a general knowledge test, they performed better than the non-lonely. Previous research echoes these new results: Past studies have suggested, for example, that the lonelier people are, the better they are at accurately reading facial expressions and decoding tone of voice. As the theory goes, lonely people may be paying closer attention to emotional cues precisely because of their ache to belong somewhere and form interpersonal connections, which results in technically superior social skills.
But like a baseball pitcher with a mean case of the yips or a nervous test-taker sitting down for an exam, a person hyperfocused on not screwing up can end up over-thinking and second-guessing, which, of course, can cause the very screwup they were so bent on avoiding. It’s largely a matter of reducing that performance anxiety, in other words, and Knowles and her colleagues did manage to find one way to do this for their lonely study participants, though, admittedly, it is maybe not exactly applicable outside of a lab. The researchers gave their volunteers an energy-drink-like beverage and told them that any jitters they felt were owing to the caffeine they’d just consumed. (In actuality, the beverage contained no caffeine, but no matter — the study participants believed that it did.) They then did the emotion-reading test, just like in the first experiment. Compared to scores from that first experiment, there was no discernible difference in scores for the non-lonely, but the researchers did see improvement among the lonely participants — even when the task had been framed as a social-skills test.
It may be difficult to trick yourself into believing your nerves are from caffeine and not the fact that you really, really, really want to make a good impression in some social setting, but there are other ways to change your own thinking about anxiety. One of my recent favorites is from Harvard Business School’s Alison Wood Brooks, who found that when she had people reframe their nerves as excitement, they subsequently performed better on some mildly terrifying task, like singing in public. At the very least, this current research presents a fairly new way to think about lonely people. It’s not that they need to brush up on the basics of social skills — that they’ve likely already got down. Instead, lonely people may need to focus more on getting out of their own heads, so they can actually use the skills they’ve got to form friendships and begin to find a way out of their isolation.
Why Lonely People Stay Lonely
I imagine such behavior could happen if someone had a bad experience in the past, for example being disproportionately punished in some social situation. The punishment didn’t even have to be a predictable, logical consequence; maybe they just had bad luck and met some psycho. Or maybe they were bullied at school, etc.
If their social skills are otherwise okay, they may intellectually understand what is usually the best response, but in real life they are overwhelmed by fear and their behavior is dominated by avoiding the thing that “caused” the bad response in the past. For example, if the bad thing happened after saying “hello” to a stranger, they may be unable to speak with strangers, even if they know from observing others that this is a good thing to do.
Then the framing of the test could make students think either about “what is generally the right approach?” or “what would I do?”
21 people per group (86/4) is not a strong result unless the effect size is large, which I doubt. I wouldn’t put much faith in this paper. Maybe raise your prior by 3%, but it’s hard to be that precise with beliefs.
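For concreteness, here is roughly what a back-of-the-envelope power calculation for this sample-size worry might look like. It is a minimal sketch only: it assumes the 86 participants fall into a 2x2 design (lonely vs. non-lonely crossed with social vs. neutral framing) with about 21 per cell, and treats the key contrast as a simple two-sample comparison; those design details are assumptions for illustration, not taken from the paper.

```python
# Back-of-the-envelope power check for the "21 per group" worry above.
# Assumed (not from the paper): a 2x2 design with ~21 participants per cell,
# and the key contrast treated as an independent-samples t-test between two cells.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardized effect (Cohen's d) detectable with 80% power at alpha = .05:
min_d = analysis.solve_power(effect_size=None, nobs1=21, alpha=0.05, power=0.8)
print(f"minimum detectable d with n = 21 per cell: {min_d:.2f}")  # ~0.9, i.e. a large effect

# Power to detect a conventionally "medium" effect (d = 0.5) with the same cells:
power_medium = analysis.solve_power(effect_size=0.5, nobs1=21, alpha=0.05)
print(f"power for d = 0.5 with n = 21 per cell: {power_medium:.2f}")  # ~0.35
```

On those assumptions, an experiment of this size is only well powered for large effects; whether that matters here depends on the effect sizes actually reported in the paper.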
I’d like to see fewer low-quality scientific criticisms here. Instead of speculating on effect sizes without reading the paper and bloviating about sample sizes without doing the relevant power calculations, perhaps try looking at the results section?
With respect to this paper, the results were consistent and significant across three tasks—an eye task, a facial-expression task, and a vocal-tone task. They did a non-social task (an anagram task) and found no significant effect (though that wasn’t the purpose of the task; it’s a bit more complicated than that). They also did an interesting caffeine experiment to see if they could relieve social anxiety by convincing participants that the anxiety was due to a (fake) caffeinated drink.
Anyway, as with any research in this area, it’s too soon to be confident of what the results mean. But uninformed armchair scientific criticism will not advance knowledge.
(In hindsight this is a bit of an overreaction, but I’ve seen too many poor criticisms of papers and too much speculation, particularly on Reddit but also here and on several blogs, and not nearly enough careful reading.)
I would like to see fewer low-quality science papers posted. FB put in way more work than was justified. My new policy is to downvote every psychology paper posted without any discussion of the endemic problems in psychology research and why that particular paper might not be pure noise.
Are all psychology papers garbage? And if only some are, how do you tell which is which if you don’t read past the first line of the abstract? (Which FB didn’t, because he was unaware that more than one experiment was conducted.)
We have to filter the papers somehow, and the people who do the filtering have to read them. But that doesn’t mean the people doing the filtering should be people on LW. Username relied on a journalist for filtering. This does filter for interesting topics, but not for quality. That Username did not link the actual paper suggests that he did not read it. Thus my prior is that it is of median quality and pure noise. Even if psychology papers were all perfectly accurate, there are way too many that get coverage, and it is unlikely that one getting coverage this month is worth reading.
There are standard places to look for filters: review articles and books.
Okay, that’s very fair.
Perhaps you didn’t notice, but the paper is gated; it’s not possible for me, or for most people, to check it. The description doesn’t mention the other two studies, and the study it does describe doesn’t sound like a strong result. I never suggested it wasn’t statistically significant; if it weren’t, it shouldn’t be used to adjust one’s views at all. I assumed it had achieved significance.
It’s also odd for you to criticize me and then ultimately come to a conclusion that could be interpreted as identical, or close, to my own. What do you mean by “too soon to be confident of what the results mean”? That could be interpreted as “adjust your prior by 3%,” which was my interpretation. If you think a number higher than 15% is warranted, then that’s an odd phrasing to choose, and it makes it sound like we’re not that far apart. Given that I was going by one study and you have three to look at, it shouldn’t be surprising that you would recommend a greater adjustment of one’s prior. Going by just the facial-expression study, what adjustment would you recommend? Do you think that adjustment is large enough for most people to know what to do with it? And what adjustment to one’s prior do you recommend after reviewing all three?
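As a small aside on what such numbers cash out to: if “raise your prior by N percent” is read as a shift of N percentage points, the implied strength of evidence can be expressed as a Bayes factor. The sketch below is illustrative only; the 3% and 15% figures come from the comments above, and the 50% starting prior is an arbitrary choice, not anything from the paper.

```python
# Illustration only: what "raise your prior by 3 (or 15) percentage points" implies
# as a Bayes factor, assuming an arbitrary 50% starting prior.
def bayes_factor_for_shift(prior: float, posterior: float) -> float:
    """Bayes factor needed to move `prior` to `posterior` (both probabilities in (0, 1))."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

print(round(bayes_factor_for_shift(0.50, 0.53), 2))  # ~1.13: a 3-point shift is a very weak update
print(round(bayes_factor_for_shift(0.50, 0.65), 2))  # ~1.86: a 15-point shift is still a modest one
```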
While the scientific-publication paywall is a pain (and inappropriate, especially for publicly funded research), it is not impossible to get the article—and, as pianoforte611 already mentioned, secondary citations or descriptions of primary sources may not provide enough information to evaluate the source.
How to get articles: I’ve seen numerous cases here at LW where a request for a copy of a paywalled publication is quickly met with a link or an email from someone who has access.
The Twitter hashtag #icanhazpdf also serves this purpose: tweet with the hashtag, including a link or DOI for the article you are requesting, include your email address in the tweet, and delete your request after you get the PDF. You can use a temporary read-only email address (e.g. slippery.email) if you are concerned about anonymity/privacy.
In this instance, feel free to send me a private message with your contact details and I will send you a PDF—I already downloaded a copy.
Edited to add: it’s also entirely legitimate to email the author of a published article and request an electronic copy. There’s no need to explain why you want it, and you need not be an academic “insider”; just be clear about which article you are requesting. This is an example I received yesterday: “Dear {author}, I am interested in your recent article {full citation} but do not have subscription access. Would you be able to send me an electronic copy? Many thanks.”
“Choking Under Social Pressure: Social Monitoring Among the Lonely,” Megan L. Knowles, Gale M. Lucas, Roy F. Baumeister, and Wendi L. Gardner, Personality and Social Psychology Bulletin.
Most people in the general population can’t check the paper, but on LW I don’t think that’s the case. If you don’t have access to a university network, http://lesswrong.com/lw/ji3/lesswrong_help_desk_free_paper_downloads_and_more/ explores a variety of ways to access papers.
This link is often useful for obtaining paywalled papers.
Sorry for assuming you had easy access to the paper. Given that you don’t, you are of course free to decide whether the pop-science report warrants further investigation. However, authoritatively criticizing and speculating on the details of a paper you haven’t read lowers, I think, the quality of discussion here.
I’m not a Bayesian, but nevertheless I don’t agree that my conclusion is similar to yours. Prima facie, the effect itself seems fairly robust across the five experiments, but their theory as to why (which they did go reasonably far to test) still needs more experiments to be established. This is not a bug, and it does not make this a low-quality paper; this is how science works. There may be more subtle problems that I (not being a statistician or a psychologist) have missed, but those can’t be known without delving into the details.
Shouldn’t the authors be aware of this? (I think one of them is even fairly well known in psychology circles.)
I am sure the authors are more informed about their work than anyone who has not read it.
I’m not sure what the correlation is between prominence and paper quality. At any rate, he’s a co-author, not the main author. Co-authors can sometimes have very little to do with the actual paper.