The Futility of Status and Signalling
I
In The Futility of Emergence, Eliezer names a modern theory that is as flawed as the theories of phlogiston and vitalism: theories that give mysterious answers, ones that merely appear to explain the phenomena while not actually resolving the confusion.
I think I can name another one, a theory which also happens to have much more influence in LessWrong-adjacent communities than emergence: status seeking and signalling as an explanation for human behavior.
II
When I first heard about signalling, it felt like a useful concept. Indeed, some people seem to do things not because they actually believe it is the best course of action, but because they want to appear as someone who does: a certain kind of behaviour is associated with higher status, and they want to be high status.
But on reflection, and after seeing how people actually use this concept, I noticed that it’s extremely unhelpful. When people talk about signalling, they do not distinguish between true signalling, when a signal actually corresponds to the underlying belief, and false signalling, when it doesn’t. These two very different cases are simply put in the same category, which creates more confusion instead of resolving it.
Similarly with status seeking. At first it felt, dare I say, reductive. Now we could explain all kinds of different behaviours by the same intrinsic drive. But then I noticed that we can explain literally any behaviour by this same intrinsic drive. People who spend money on luxury items are doing it for the sake of status signalling. People who do not… well, they do it for counter-signalling reasons; they are still seeking status, they just associate it with not having luxury items. At this point it’s just like saying that the air became saturated with phlogiston and thus the fire ceased. We can make all kinds of post-hoc explanations. But can we actually predict anything?
III
At best, these theories just do not bring much new to the table. At worst, they are actively misleading. Knowledge of them can make it even harder to communicate about object-level problems, derailing the conversation to the meta level. I’ve seen examples of this multiple times.
Person A claims that we should adopt some policy X.
Person B accuses A of virtue signalling.
And now the conversation shifts to the nature of signalling, despite the fact that it’s completely irrelevant to the object-level question of whether X is a good policy.
Or
Person A points out a problem in Person B’s reasoning.
Person B accuses A of doing it for status reasons.
And now the conversation shifts towards the nature of status games, regardless of whether there was a problem in Person B’s reasoning or not.
It’s irritating that Person B’s accusation is, in some sense, always correct. Every public action is a signal, and people who participate in public arguments probably do associate being right with higher status. It’s also all completely irrelevant. There is nothing to argue about. And yet it can be very hard to switch the topic back to the object level.
What was supposed to be a tool for making conversations clearer, for letting people communicate what they actually mean while exposing bad actors who say things for other reasons, is instead used as a tool to obfuscate conversations and accuse anybody of arguing in bad faith.
IV
I’ve wanted to write about this for quite some time, but today I finally got an extra push of motivation from reading the comments on this post. It turned out that a lot of commenters were made uncomfortable by this statement:
Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don’t support fraud done in the service of effective altruism.
As far as I understand, everyone agreed on the object level that we do not support fraud done in the service of EA. Nor was there any disagreement about whether it’s a good idea to make this true belief publicly known. Yet it still felt wrong to many people.
I think this can be an important case study on Why Our Kind Can’t Cooperate. We want LessWrong to be a place where people can communicate clearly about ideas on the object level. Consequently, we do not want LessWrong to be just like social media, where, as is well known, people play their status games and engage in false signalling at high simulacrum levels. And to achieve that, some of us have adopted a heuristic of not doing anything that feels… too “signalingly” or “statusy”. If something looks like it belongs on social media, it can’t belong on LessWrong.
This seems like an obvious mistake to me. It leads to being able to communicate about even fewer things on the object level. And I think people make it because of how confusing and unhelpful the categories of status seeking and signalling are. When there is no clear-cut way to distinguish false signalling from true signalling, we have to default to our intuitions. And those seem to be quite flawed.