We can all be high status
An extension of Give praise.
This is part analysis, part heartfelt story of my engagement with the LW/EA community. I usually don’t like to kick up dust, but I’ve decided to write an honest representation of my feelings, here and there possibly sacrificing accuracy. Adjust your interpretation accordingly.
Starting out and reading about LW and EA, I was psyched to jump in with the movement and make things happen. There was a romantic vista of joining the ranks of *reasonable* people, who *would* understand my unconventional ideas, and thus actually *get real shit done*. The prospect was liberating.
I subscribe to the idea of human needs, in the sense that our utility function is a sum of sigmoids, each mapping the amount of resources allocated to a need onto that need’s fulfillment. Each sigmoid would have a different offset, meaning that we only start caring about the second need once the first one is mostly satisfied. Happiness would be a function of satisfaction, with zero happiness corresponding to a state where a special subset of “deficiency” needs are satisfied, and nothing else. In that sense, I think Maslow’s hierarchy was mostly right. Not in the sense of the specific needs he proposed, or their order, but in the sense of the underlying logic.
One of these deficiency needs is status, which I define as having some level of influence on social reality. This need approaches fulfillment as one’s influence approaches that of the highest-status person around. Complete fulfillment happens when you’re equal to or higher in status than everyone else around you. As a corollary, everyone is fulfilled if and only if everyone is equal.
Additionally, these needs are all represented by subagents that are ‘activated’ (i.e. run a process in the background) if the need is not fulfilled. We have limited processing resources, and so a lack of status lowers one’s IQ.
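To make the model above concrete, here is a minimal sketch. The offsets, steepness, and the choice of the logistic function are all illustrative assumptions, not claims about the actual shape of human utility; the point is only that with staggered offsets, each need’s sigmoid starts rising once the previous one is mostly satisfied.

```python
import math

def sigmoid(x):
    """Standard logistic function, standing in for a generic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def utility(resources, offsets, steepness=1.0):
    """Total utility as a sum of sigmoids, one per need.

    Each need's fulfillment rises from ~0 to ~1 as `resources` passes
    that need's offset, so later needs only begin to matter once
    earlier ones are mostly satisfied.
    """
    return sum(sigmoid(steepness * (resources - o)) for o in offsets)

# Hypothetical offsets for three needs, in Maslow-like order.
offsets = [2.0, 5.0, 8.0]

low = utility(1.0, offsets)    # first need barely met
mid = utility(6.0, offsets)    # first two mostly met
high = utility(10.0, offsets)  # all three mostly met
assert low < mid < high
```

Under this toy model, pouring resources into a need whose sigmoid has already saturated yields almost nothing, which is the sense in which only the unfulfilled needs (like status, for most of this post) dominate behavior.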
EA is NOT status balanced, and it’s been eating me up. I feel threatened, have burned myself out multiple times, and expect this to be the major source of the mental health epidemic that is plaguing us.
I’ve worked with volunteers. No problem from an impact perspective. We all want to save the world, right?
Of the approximately 50 people that signed up, only 2 have stayed around until now, and only about 4 stayed around for longer than a week. Why are people so flaky?
Well let’s say you find yourself at the bottom of this hierarchy, but at least these people are reasonable so you’re willing to jump on the bandwagon. You look around, join some meetups, sign up to some volunteering jobs, but none of it is sufficient to be taken seriously, so you keep trying new things.
I call these low status convulsions. You find yourself in the darkness trying to fix yourself, trying to level up, so that one day you might impress people enough to be listened to, to be seen, talked about. To be given a say in the makeup of your social environment, up there in the green with Yudkowsky and Hanson and Soares and whatnot.
So you find yourself in this volunteering opportunity with some EA’s and they tell you some stuff you can do, and you do it, and you’re left in the dark again. Is this going to steer you into safe waters? Should you do more? Impress more? Maybe spend more time on that Master’s degree to get grades that set you apart, maybe that’ll get you in with the cool kids? Maybe write some cool posts on LessWrong in the hopes of getting lots of upvotes? Let’s just do them all at once because there doesn’t seem to be any other way out...
...But then reality hits you and you find yourself overburdened, so you flake on some of your band-aid promises to make room for the low-risk, low-hope stuff, and suck up your tears because no one cares anyway. They’re all way too busy with their own scrambles for recognition.
It’s fucking grim.
I’d be a bad rationalist if I didn’t examine the other side of the story.
Some people are really more intelligent than others, and the differences are even larger in the tail of the distribution. From a meritocratic perspective, a small portion of the community should really make most of the decisions.
There is a consensus among some EA institutions that the expected value of a random EA project is approximately 0, because there are as many projects that might harm as there are projects that help.
I can’t judge whether this is true, for I haven’t examined the thinking behind it, but can you believe that the mean member of a movement of hugely talented people, which identifies with *measuring* impact, can’t do better than chance? I’m going to go with the charitable assumption that this is true. Can you believe how hard it is to do the right thing, and how bad things would be if we didn’t have a hierarchy?
In short, tearing down the status hierarchy might make some people happy, but it interferes with impact. The thing we’re doing it for in the first place.
There is a way out.
Hope is an anticipation that things will work out if you do X, plus an anticipation that X is doable, plus a resolve to do it. It effectively shuts up the background process that is activated by the anticipation of a thing not working out, much like when the thing actually works out.
For example, being 5 goals ahead in a football match feels the same as having already won. Seeing an oasis in the desert feels the same as already drinking the water. Learning of a positive weather forecast may curb your doubts about going on that holiday. The relief happens not when the need is satisfied, but when the need is going to be satisfied beyond a reasonable doubt. That’s when the suffering ends.
As it stands, there are no guarantees for most people that they will ever reach the level of status they need in EA to be relieved from their suffering and the foggy cognition that comes with it.
Making a promise helps a little, for a short while. That’s why we’re flaky. Signing up as a convulsion.
This is in contrast with many other places. Companies have clear hierarchies and performance ratings. Healthy social groups have mechanisms for keeping people at roughly the same level of status. Academia has tenure tracks (and those that aren’t on it are struggling). Even Buddhist temples, having been optimized for promoting mental health for millennia (and in my experience being way more effective than the fledgling western tradition of psychotherapy), have a clear explicit hierarchy.
With heroic effort I’m no longer suffering as much from an uncertain future, but my former self would have instantly been imbued with the hope, motivation, and optimism that he came for in the first place, had he been put on a predictable path to full personhood right at the start. It would still be a relief right now. Halfway isn’t as good as all the way, and I have barely any indications that anything in the crushing pile of work I’m doing is any good.
On the other side of the coin, my former self would have been happy to move on if it was clear that he couldn’t possibly climb the hierarchy even if he tried. The struggle is in the uncertainty.
So what I want to propose is that we define much more clearly what it takes to be taken seriously around here.
One way is to define a precise membership test that successfully puts one in a narrative of inclusion. This is what I was trying to convey in The league of rationalists. That’s what initiation rituals are for.
Another way might be to simply carve out more slots. There is not one status hierarchy, but an intertwined set of hierarchies, each with positions at the top that guarantee membership. Membership is guaranteed when you have a convincing narrative that you are a necessary part of the group. When you have leverage.
This might be done by specializing in a skill that makes you a worthy asset. Who is the best EA software developer? Who is the best ops person?
Another way is to properly define what we value, and affirm when someone has indeed reached a point of being respectable. For example: shoutout to those people who have stuck with volunteering for RAISE after committing to it, and showed proper consideration when they couldn’t continue. That’s Veerle de Goederen, Remmelt Ellen, Lewis Hammond, Roland Pihlakas and myself. Well done.
Edit: Two more excellent ideas from a comment by ricraz:
there’s another sense in which “we can all be high-status”: within our respective local communities. I’m curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.
On a broader level, one actionable idea I’ve been thinking about is to talk less about existential risk being “talent constrained”, so that people who can’t get full-time jobs in the field don’t feel like they’re not talented. A more accurate term in my eyes is “field-building constrained”.
And some related discussion on Facebook, here and here and here
Whatever we do, something needs to be done. Right now a large portion of the movement lives with an underdefined identity, a lack of status, feeling like they have to work so much harder to be seen, and it’s never enough. It’s not just hurting them, it’s making them unproductive and prone to corruption. How well do you think if you’re thirsty? That’s the reality for most of us.
This post was written with the support of the EA Hotel.
Here are two relevant links.
1) Julia Galef comments on a post by Jeff Kaufman:
Jeff Kaufman:
2) Katja Grace writes, relatedly:
A much earlier post by David Friedman makes similar points:
Thing I’ve noticed about status/prestige.
When I first started doing parkour I didn’t have any friends in the hobby. I also didn’t watch parkour videos on youtube. I was mostly dicking about on my own, and whenever a non-parkour person would compliment me on my skill it would feel great and I’d revel in it. Later, there was a phase where I still had no friends doing parkour, but I was watching some of the best athletes in the world on youtube. At that time, whenever someone complimented me, it felt like I was cheating, and/or secretly low-status and they just hadn’t gotten the memo.
The past two years, I’ve found a group of friends to train parkour with, and I’m at the top of the skill ladder in that group. Nowadays I notice I don’t feel the same pangs of “I’m cheating”.
This makes me think that what exactly your “local community” is can be finicky. My local community went from no one (or the whole world) to 9 people. What’s interesting is that it’s not as if I forgot that there’s a whole world of professional parkour athletes. It seems like I was able to feel like I had more status because my local community had more “weight” than the rest of the world.
My experience would generate the advice: the more you interact with a larger global group where you are outclassed/low-status, the more you need to interact with a smaller local group where you are high-status.
I’d like to point out that high status does not automatically equate with happiness/high life satisfaction. Status comes with responsibility, including dealing with people nitpicking all of your public decisions and statements, and even some private aspects of your life.
If anyone is trying to get status in order to feel safe and comfortable, they are confused about how status works.
I’ll note that I got significantly higher status after deciding to ignore what all the “high-status” people were telling me to do and instead did an ambitious project that most of them told me was doomed to fail. And maybe it is doomed to failure and I just don’t see it yet. But I think that if you’re trying to become high status by checking all the boxes that the current high status people tell you to, it’s not likely to work.
That said, I don’t think everyone should go rogue and start projects that high status folks think are doomed (0 or negative expected value). I also don’t think doing so has made me especially high status, just slightly above average rather than a fair bit below.
Status works like OP describes, when going from “dregs” to “valued community member”. Social safety is a very basic need, and EA membership undermines that for many people by getting them to compare themselves to famous EAs, rather than to a more realistic peer group. This is especially true in regions with a lower density of EAs, or where all the ‘real’ EAs pack up and move to higher density regions.
I think the OP meant “high” as a relative term, compared to many people who feel like dregs.
My sense is that increasing the amount of time and attention that we pay to status and related dynamics is extremely negative; I don’t expect it to help, and I think that issues related to these situations get significantly worse when people are consciously targeting them.
As C.S. Lewis said in his excellent talk “The Inner Ring”:
And later:
This is essentially my view. I do not think it is generally productive to concern yourself with being In or High-Status or Getting Invited to the Right Parties or Being Talked About; I think it is productive to focus on the work that actually builds and contributes to the project, and let the parties and invitations and all that come as they may (or may not).
There’s a possibility for corruption here, as I briefly mentioned, if people get so deprived that they will sacrifice their other needs or values for the sake of status alone.
I considered that to be obvious in writing this. I’m not necessarily talking about the problem of getting status regardless of everything else. I’m also not talking about how to get status as an individual. I’m rather talking about getting the whole community a sense of status while keeping our other values intact.
“Focus on creating value” might be a great individual solution if you’re talented enough. People recognize you’re not Goodharting as much and promote you accordingly. But it doesn’t help everyone. It doesn’t scale. If it works for you, that just means you’ve been able to win these competitions so far. Good for you.
As for the collective version: judging from the fact that we’ve made some meaningful progress with this at LW Netherlands, there’s clearly more traction to be gained.
Yes, I think giving the community a “sense of status” has substantial risks of exacerbating the corruption that I mentioned earlier. In other words, I think recognizing achievements is nice, but making that recognition too systematic leads to significantly increased gaming of that system, Goodharting, etc.
This... seems like a very zero-sum perspective on value creation? Like, you’re tracking the zero-sum social credit assigned through recognition of the value creation, but not the value creation itself.
Here’s a non-zero-sum perspective. People have intrinsic wants/needs, which are partially instinctive. Creating value consists of creating and distributing things that meet these wants/needs (material goods, information, communication protocols, environments, interactions, etc), and in reducing barriers to needs-satisfaction such as coercion. By this definition value creation is inherently scalable (the more of it happens, the more needs are met), and non-zero-sum. Status isn’t an intrinsic need, it’s a zero-sum social frame that acts as a Schelling point governing distribution of valuable things and coercion, and it confuses people into thinking that they intrinsically want it when they actually want some of those valuable things that it governs and also to be coerced less.
I mostly agree with your description of the problem, and I sympathise with your past self. However, I also think you understate the extent to which the EA and rationality communities are based around individual friendships. That makes things much messier than they might be in a corporation, and makes definitions like the one you propose much harder.
On the other hand, it also means that there’s another sense in which “we can all be high-status”: within our respective local communities. I’m curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.
On a broader level, one actionable idea I’ve been thinking about is to talk less about existential risk being “talent constrained”, so that people who can’t get full-time jobs in the field don’t feel like they’re not talented. A more accurate term in my eyes is “field-building constrained”.
Yes, yes. All of this.
This is what we’ve built with LessWrong Netherlands. We call it the Home Bayes and it’s a group of 15ish people with tight bonds and formal membership. It works like a charm.
I’m glad someone else had this idea.
Coming from my own startup with plenty of talent around but so far not a lot of funding, I think the problem isn’t initiative. It’s getting the funding to the right initiatives. This is why 80K has listed grantmaking as one of their highest impact careers: the money is there, but given the CEA assumption that a random cause has 0 expected value, they have to single out the good ones, and that’s happening so slowly that a lot of ideas are stranded before they even get “whitelisted”.
Compare this to research or most jobs. People work in groups. People have goals and work towards that goal. How does this happen? Usually it’s because the group leader gets paid to do what they do, and they create a stable small community for people to work in over the mid to long term. Most people don’t need to be at the center of some huge status ponzi scheme, because they just work with the same group for years on end and that’s fine.
Empathy; I wish it was going differently for you.
That being said there’s some interesting stuff here that I would like to hear more about.
What does influence on the social environment look like to you?
I notice you don’t talk at all about the outcomes of the volunteering projects you did. What did you think of them, apart from the effect on status?
Does it seem to you like the EA volunteer efforts are organized to allow for the flakiness you describe, or does it seem like they are being impacted negatively?
Honestly, I’m hardly solving this for myself. Just trying to shape the community in such a way that others are doing a bit better. I’d expect a lot of good to come from that. So let’s not get into the frame of emotionally supporting me. That’s not the outcome I’m looking for.
What does influence on the social environment look like to you?
It’s fuzzy, but it means not being left in the dark when you’re in need in some way. People maybe checking in if you’ve been feeling bad. People paying attention to your opinion when you think there’s something that needs to change, and actually changing their behavior accordingly if they find themselves agreeing.
I think a key concept is leverage.
I suspect major progress would be made if someone managed to define this better. I think it’s the Hamming problem of this issue.
I notice you don’t talk at all about the outcomes of the volunteering projects you did. What did you think of them, apart from the effect on status?
That’s a bit of a broad question. Not sure what you’re looking for. The project in question is this one. It’s moving forward, but quite a bit slower than anticipated.
Does it seem to you like the EA volunteer efforts are organized to allow for the flakiness you describe, or does it seem like they are being impacted negatively?
Except for organisational overhead, they’re relatively robust. Been running for a few months now, and this one guy has kept showing up, so that’s kept it going.
What’s left of altruism if we’re going to be defined in hierarchies? Isn’t the whole point of altruism that we do not benefit a lot from it ourselves?
The point is to help others, not to needlessly deprive ourselves. (Making each other feel better does not reduce the number of anti-malaria nets produced.)
Humans need rewards to shape their behavior. It is better (even for others) if I get rewarded for my altruistic actions and it helps me keep doing good deeds for long time, than if I emotionally “burn out” quickly.
The definition is debated, but most people in EA agree it’s about utilitarianism, which is essentially just counting up the happiness of everyone together, including yourself. There are different versions of it, but as far as I know none of them ignore your own happiness.
So buying yourself an ice cream may not be “altruistic” in the common sense, but it is utilitarian.
As a community, organising yourself as a hierarchy might be utilitarian when, despite the suffering it may cause, it resolves more suffering outside of the community than it causes. This is probably true to some extent because hierarchies might cause a community to get more done, with the smartest people making the decisions.
(To be clear, I don’t think naive hedonic utilitarianism is a very good idea, or that it represents human values very well, and I would not say that “most people in EA” believe otherwise. I think it’s somewhat of a Schelling position, but I would guess most people hold one of a large variety of positions on the precise nature of human value.)
I just think we have to be careful to not get lost in our own egos.
What do you mean by ego?
>We all want to save the world, right?
No. This is your first mistake, I think. You take the ideology’s authority for granted. You shouldn’t. Dropping altruism outside of self-based reciprocity was the single best decision I have ever made. The world is not worth saving. It’s not worth destroying either.
If you’re suffering from being low-status in the EA movement, you should not be a part of the EA movement. EA as an ideology has deep flaws, and as a social dynamic, it’s outright horrible. Politically, it’s parasitic.
The last part is the only part I still care about. I went through a curve from caring about making the world a better place and therefore supporting EA to wanting to make the world a better place but being skeptical about EA’s consequences to not wanting to make the world a better place.
If EAs weren’t politically parasitic, we would be free to simply ignore them, and this would be the correct answer. Unfortunately, we can’t ignore them, because they push policies and influence politics in a way that makes us worse off. This is why I’m willing to actively oppose their goals.
I distinguish two aspects of status. One is to feel good about being accepted by others. That’s nice, but I don’t think it’s central. There are many ways to feel good and many options to substitute for acceptance of any particular person or group.
The second aspect is “getting things done”. Unfortunately, we live in a world filled with people who can harm us. Coercing or convincing them not to do so is unfortunately an important practical necessity. This is why we can’t simply ignore the EA movement, or organized religion, or neonazis or any other ideology that wants to extract value from our lives or limit our personal choices.
I really do recommend that you stop supporting the EA movement. Nothing good will come of it.
I think there’s a different sort of conversation where this sort of comment might be helpful (I think there’s plenty of perspectives from which EA, or “A”, doesn’t make sense, that are worth talking about). But it feels a bit outside the scope of this conversation.
(Not 100% sure about toonalfrink’s goals for the conversation)
I have no idea what toonalfrink’s goals for the conversation are. But when someone writes something like,
>So you find yourself in this volunteering opportunity with some EA’s and they tell you some stuff you can do, and you do it, and you’re left in the dark again. Is this going to steer you into safe waters? Should you do more? Impress more? Maybe spend more time on that Master’s degree to get grades that set you apart, maybe that’ll get you invited with the cool kids?
then the only sensible option from my perspective is to take a step back and consider why you’re seeking status from this community in the first place, and what motivations go into this behavior. At this point, I think it’s well worth reflecting on:
1) Why altruism in the first place?
2) Given 1, why EA?
3) Given 2, why seeking status?
Community norms tend to be self-reinforcing. It’s worth pointing out that there are people with a genuinely different perspective, and that this perspective has a reason.
I do think it makes sense to step back, but in the opposite order (you can’t rederive your entire ontology and goal structure every time something doesn’t make sense—it’s too much work and you’d never get anything done).
“Why am I seeking status?” and “Why is EA and/or EA-organizations the right way to go about A?” seem like plausible steps-backwards to take given the questions toon is raising here.
“Why altruism?” is a question every altruist should take seriously at least once, but none of the dilemmas raised in toon’s post seem like the sort of thing that warrants questioning the entire underpinning of your goal structure. (I realize if you think the entire structure is flawed, you’re going to disagree, but I think it’s strongly meta-level important for people to be able to think through problems within a given paradigm without every conversation being about re-evaluating that paradigm)
Happy to talk more in a different top-level post but not really interested in talking more in this particular comment-section