~5 months ago I formally quit EA (formally here means “I made an announcement on Facebook”). My friend Timothy was very curious as to why; I felt my reasons applied to him as well. This disagreement eventually led to a podcast episode, where he and I try to convince each other to change sides on Effective Altruism: he tries to convince me to rejoin, and I try to convince him to quit.
Some highlights:
My story of falling in love, trying to change, and then falling out of love with Effective Altruism. That middle part draws heavily on past posts of mine, including EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem and Truthseeking is the ground in which other principles grow
Why Timothy still believes in EA
Spoilers: Timothy agrees leaving EA was right for me, but he wants to invest more in fixing it.
Thanks to my Patreon patrons for supporting my part of this work.
Some notes from the transcript:
Enjoyed this point. I would guess that the feedback loop from EA college recruiting is super long and weakly aligned. Those in charge of setting recruiting strategy (e.g. the CEA Groups team, and then university organizers) don’t see the downstream impacts of their choices, unlike in a startup, where you work directly with your hires and quickly see whether your choices were good or bad.
Might be worth examining how other recruiting-driven companies (like Google) or movements (...early Christianity?) maintain their values, or degrade over time.
Definitely think that on the margin, more “directly verifying base reality with your own eyes” would be good in EA circles. Eg at one point, I was very critical of those mission trips to Africa where high schoolers spend a week digging a well; “obviously you should just send cash!” But now I’m much more sympathetic.
This also stings a bit for Manifund; like 80% of what we fund is AI safety but I don’t really have much ability to personally verify that the stuff we funded is any good.
I think not enforcing an “in or out” boundary is a big contributor to this degradation; like, majorly successful religions required all kinds of sacrifice.
It feels like AI safety is the best current candidate for this, though that is also much less cohesive and not a direct successor in a bunch of ways. I too have lately been wondering what “Post EA” looks like.
Really liked this analogy!
I like this as a useful question to keep in mind, though I don’t think it’s totally explanatory. I think I’m reasonably Catholic, even though I don’t know anything about the living Catholic leaders.
I’ve been thinking that EA should try to elect a president: someone who is empowered but also accountable to the general people in the movement, a Schelling-point person to be the face of EA. (Plus, of course, we’d get to debate stuff like optimal voting systems and enfranchisement, which is my kind of catnip.)
This could be part of it… but I think a hypothesis that does have to be kept in mind is that some people don’t care. They aren’t trying to follow action-policies that lead to good outcomes, they’re doing something else. Primarily, acting on an addiction to Steam. If a recruitment strategy works, that’s a justification in and of itself, full stop. EA is good because it has power, more people in EA means more power to EA, therefore more people in EA is good. Given a choice between recruiting 2 agents and turning them both into zombies, vs recruiting 1 agent and keeping them an agent, you of course choose the first one--2 is more than 1.
Mm, I’m extremely skeptical that the inner experience of an EA college organizer or the CEA Groups team is usefully modeled as “I want recruits at all costs”. I predict that if you talked to one and asked them about it, you’d find the same.
I do think that it’s easy to accidentally goodhart or be unreflective about the outcomes of pursuing a particular policy—but I’d encourage y’all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.
I haven’t grokked the notion of “an addiction to steam” yet, so I’m not sure whether I agree with that account, but I have a feeling that when you write “I’d encourage y’all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned” you are papering over real values differences.
Tons of EAs will tell you that honesty and integrity and truth-seeking are of course ‘important’, but if you observe their behavior they’ll trade them off pretty harshly with PR concerns or QALYs bought or plan-changes. I think there’s a difference in the culture and values between (on one hand) people around rationalist circles who worry a lot about how to give honest answers to things like ‘How are you doing today?‘, who hold themselves to the standards of intent to inform rather than simply whether they out and out lied, who will show up and have long arguments with people who have moral critiques of them, and (on the other hand) most of the people in the EA culture and positions of power who don’t do this, and so the latter can much more easily deceive and take advantage of people by funneling them into career paths which basically boil down to ‘devoting yourself to whatever whoever is powerful in EA thinks is a maybe-good idea this month’. Paths that people wouldn’t go down if they candidly were told up front what was going on.
I think it’s fair to say that many/most EAs (including those involved with student groups) don’t care about integrity and truth-seeking very much, or at least not enough to bend them off the path of reward and momentum (as defined by the EA ideology / EA leaders & grantmakers) when the path is going wrong, and I think this is a key reason why EA student groups are able to be like Ponzi schemes. ‘Well-intentioned’ does not get you ‘has good values’, and it is not a moral defense of Ponzi schemes to argue that everyone involved was “kind and well-intentioned”.
I agree it is hard to get feedback, but this doesn’t mean one cannot have good standards. A ton of my work involves maintaining boundaries where I’m not quite sure what the concrete outputs will look like. I kind of think this is one of the main things people are talking about when we talk about values: what heuristics do you operate by in the world, most of the time, when you’re mostly not going to get feedback?
Mm I basically agree that:
there are real value differences between EA folks and rationalists
good intentions do not substitute for good outcomes
However:
I don’t think differences in values explain much of the differences in results—sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
I’m pushing back against Tsvi’s claims that “some people don’t care” or “EA recruiters would consciously choose 2 zombies over 1 agent”—I think ascribing bad intentions to individuals ends up pretty mindkilly
Basically, insofar as EA is screwed up, it’s mostly caused by bad systems, not bad people, as far as I can tell.
Insofar as you’re thinking I said bad people, please don’t let yourself make that mistake, I said bad values.
There are occasional bad people like SBF but that’s not what I’m talking about here. I’m talking about a lot of perfectly kind people who don’t hold the values of integrity and truth-seeking as part of who they are, and who couldn’t give a good account for why many rationalists value those things so much (and might well call rationalists weird and autistic if you asked them to try).
This is a crux. I acknowledge I probably share more values with a random EA than a random university student, but I don’t think that’s actually saying that much, and I believe there’s a lot of massively impactful difference in culture and values.
I think EA recruiters have repeatedly made decisions like choosing 2 zombies over 1 agent, and were I or Tsvi to look at the same set of options and information we would have made a different decision, because we’ve learned to care about candor and wholesomeness and respecting other people’s sense-making and a bunch of other things. I don’t think this makes them bad people. Having good values takes a lot of work by a lot of people to encapsulate and teach them, a good person should not be expected to re-derive an entire culture for themselves, and I think most of the world does not teach all of the values I care about to people by the age of 18, like lightness and argument and empiricism and integrity and courage and more. They don’t care about a number of the values that I hold, and as a result will make decisions counter to those values.
My best guess is something like a third of rationalists are also EAs, at least going by identification. (I’m being lazy for the moment and not cross checking “Identifies as Rationalist” against “Identifies as EA” but I can if you want me to and I’m like 85% sure the less-lazy check will bear that out.) My educated but irresponsible guess is something like 10% of EAs are rationalists. Last time I did a straw poll at an ACX meetup, more than half the people attending were also EAs. Whatever the differences are, it’s not stopping a substantial overlap on membership, and I don’t think that’s just at the level of random members but includes a lot of the notable members.
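(For concreteness, the less-lazy check is mechanically simple; here is a minimal sketch in Python, assuming the survey export is a CSV and that the two identification questions are hypothetical yes/no columns whose real names and coding may differ.)

```python
# Sketch of the "less-lazy check": cross-tabulating self-identification as
# Rationalist against self-identification as EA. File name, column names, and
# the "Yes" coding are assumptions, not the real survey schema.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export

rationalist = df["Identifies as Rationalist"] == "Yes"
ea = df["Identifies as EA"] == "Yes"

# Counts of each combination, with row/column totals.
print(pd.crosstab(rationalist, ea, margins=True))

# Overlap in each direction.
print("Rationalists who also identify as EA:", (rationalist & ea).sum() / rationalist.sum())
print("EAs who also identify as Rationalist:", (rationalist & ea).sum() / ea.sum())
```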
I’d be pretty open to a definition of ‘rationalist’ that was about more than self-identification, but to my knowledge we don’t have a workable definition better than that. It’s plausible to me that the differences matter as you lean on them a lot, but I think it’s more likely the two groups are aligned for most purposes.
Thanks for the data! I agree there’s a fair bit of overlap in clusters of people.
Two points:
I am talking about the cultural values more than simply the individuals. I think a person’s environment really brings very different things out of them. The same person working at Amazon, in DC politics, or at a global-health non-profit will get invited to live out different values and build quite different identities for themselves. The same person in person and on Twitter can also behave as quite different people. I think LessWrong has a distinct culture from the EA Forum, and I think EAG has a distinct culture from ACX meetups.
Not every person in a scene strongly embodies the ideals and aspirations of that scene. There are many people who come to rationalist meetups whom I have yet to get on the same page with about lots of values; e.g., I still somewhat regularly have to give arguments against the various reasons people sometimes endorse self-deception, even to folks who have been around for many years. The ideals of EA and LW are different.
So even though the two scenes have overlap in people, I still think the scenes live out and aspire to different values and different cultures, and this explains a lot of difference in outcomes.
I appreciate you drawing the distinction! The bit about “bad people” was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.
Mm, I think if the question is “what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements” I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.
(As I write this, I realize that maybe even caring about adherents/reputation/influence/achievement in the first place is an impact-based, EA-style frame, and the thing that Ben cares about is more like “what accounts for the differences in their philosophies, or in the gestalt of what it feels like to be in the movement”; I feel like I’m lowkey failing an ITT here...)
I can think about that question if it seems relevant, but the initial claim of Elizabeth’s was “I believe there are ways to recruit college students responsibly. I don’t believe the way EA is doing it really has a chance to be responsible”. So I was trying to give an account of the root cause there.
Also (and I recognize that I’m saying something relatively trivial here) the root cause of a problem in a system can of course be any seemingly minor part of it. Just because I’m saying one part of the system is causing problems (the culture’s values) doesn’t mean I’m saying that’s what’s primarily responsible for the output. The root cause of a software company’s current problems might be the slow speed with which PR reviews are happening, but this shouldn’t be mistaken for the claim that the credit for the company’s success goes primarily to its ability to do PR reviews fast.
So to repeat, I’m saying that IMO the root cause of irresponsible movement growth and ponzi-scheme-like recruitment strategies was a lack of IMO very important values like dialogue and candor and respecting other people’s sense-making and courage and so on, rather than an explanation more like ‘those doing recruitment had poor feedback loops so had a hard time knowing what tradeoffs to make’ (my paraphrase of your suggestion).
I would have to think harder about which specific values I believe caused this particular issue, but that’s my broad point.
Ben’s responses largely cover what I would have wanted to say. But on a meta note: I specifically wrote that this is “a hypothesis that does have to be kept in mind”.
I do also think the hypothesis is true (and it’s reasonable for this thread to discuss that claim, of course).
But the reason I said it that way is that it’s a relatively hard hypothesis to evaluate. You’d probably have to have several long conversations with several different people, in which you successfully listen intensely to who they are / what they’re thinking / how they’re processing what you say. Probably only then could you even have a chance at reasonably concluding something like “they actually don’t care about X”, as distinct from “they know something that implies X isn’t so important here” or “they just don’t get that I’m talking about X” or “they do care about X but I wasn’t hearing how” or “they’re defensive in this moment, but will update later” or “they just hadn’t heard why X is important (but would be open to learning that)”, etc.
I agree that it’s a potentially mindkilly hypothesis. And because it’s hard to evaluate, the implicature of assertions about it is awkward—I wanted to acknowledge that it would be difficult to find a consensus belief state, and I wanted to avoid implying that the assertion is something we ought to be able to come to consensus about right now. And, more simply, it would take substantial work to explain the evidence for the hypothesis being true (in large part because I’d have to sort out my thoughts). For these reasons, my implied request is less like “let’s evaluate this hypothesis right now”, and more like “would you please file this hypothesis away in your head, and then if you’re in a long conversation, on the relevant topic with someone in the relevant category, maybe try holding up the hypothesis next to your observations and seeing if it explains things or not”.
In other words, it’s a request for more data and a request for someone to think through the hypothesis more. It’s far from perfectly neutral—if someone follows that request, they are spending their own computational resources and thereby extending some credit to me and/or to the hypothesis.
The problem is that even small differences in values can lead to massive differences in outcomes when the difference is in caring about truth while the other values stay similar. As Elizabeth wrote, truthseeking is the ground in which other principles grow.
Was there ever a time where CEA was focusing on truth-alignment?
It doesn’t seem to me like “they used to be truth-aligned and then did recruiting in a way that caused a value shift” is a good explanation of what happened. They always optimized for PR instead of optimizing for truth-alignment.
It’s been quite a while since they edited Leverage Research out of the photos they published on their website, but the kind of organization where people consider it reasonable to edit photos that way is far from truth-aligned.
Edit:
Julia Wise messaged me and made me aware that I confused CEA with the other CEA. The photo incident happened on the 80,000 Hours website, and the page talks about promoting CEA events like EA Global and the local EA groups that CEA supports (at the time, 80,000 Hours was part of the CEA that’s now called EV). I don’t think this makes CEA completely innocent, because they should care that people who promote their events under their organization’s banner behave ethically, but it does give a valid explanation for why this wouldn’t be central to CEA’s mistakes page, and why they would want to focus that page on mistakes made by direct employees of the entity that’s now called CEA.
I feel ambivalent about this. On one hand, yes, you need to have standards, and I think EA’s move towards big-tentism degraded it significantly. On the other hand, I think having sharp inclusion functions is bad for people in a movement[1], cuts the movement off from useful work done outside itself, selects for people searching for validation and belonging, and selects against thoughtful people with other options.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn’t have a leader they trust and respect, because Catholicism has a longer tradition, and you can work within that. On the other hand… I wouldn’t say this to most people, but my model is you’d prefer I be this blunt… my understanding is Catholicism is about submission to the hierarchy, and if you’re not doing that or don’t actively believe they are worthy of that, you’re LARPing. I don’t think this is true of (most?) protestant denominations: working from books and a direct line to God is their jam. But Catholicism cares much more about authority and authorization.
I’d love for this to be true because I think AIS is EA’s most important topic. OTOH, I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the ponzi scheme recruiting is all aimed at AIS and meta (which is more AIS). ETG was a much more viable role for GD than for AIS.
If you’re only as good as your last 3 months, no one can take time to rest and reflect, much less recover from burnout.
I agree, and am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it’s not just ok but good for people to (with a nod to Ajeya) “get off the crazy train” at different points along the EA journey. We currently have too many people taking it all the way into AI town. I again don’t know what to do to fix it.
I think it’s good to want to have moderating impulses on people doing extreme things to fit in. But insofar as you’re saying that believing ‘AI is an existential threat to our civilization’ is ‘crazy town’, I don’t really know what to say. I don’t believe it’s crazy town, and I don’t think that thinking it’s crazy town is a reasonable position. Civilization is investing billions of dollars into growing AI systems that we don’t understand and they’re getting more capable by the month. They talk and beat us at Go and speed up our code significantly. This is just the start, companies are raising massive amounts of money to scale these systems.
I worry you’re caught up worrying what people might’ve thought about you thinking that ten years ago. Not only is this idea now well within the Overton window, my sense is that people saying it’s ‘crazy town’ either haven’t engaged with the arguments (e.g.) or are somehow throwing their own ability to do basic reasoning out of the window.
Added: I recognize it’s rude to suggest any psychologizing here but I read the thing you wrote as saying that the thing I expect to kill me and everyone I love doesn’t exist and I’m crazy for thinking it, and so I’m naturally a bit scared by you asserting it as though it’s the default and correct position.
(Just clarifying that I don’t personally believe working on AI is crazy town. I’m quoting a thing that made an impact on me a while back and that I still think is relevant culturally for the EA movement.)
I reject the implication that AI town is the last stop on the crazy train.
I think feedback loops are good, but how is that incompatible with taking AI seriously? At this point, even if you want to work on things with tighter feedback loops, AI seems like the central game in town (probably by developing technology that leverages it, while thinking carefully about the indirect effects of that, or at the very least, by being in touch with how it will affect whatever other problem you are trying to solve, since it will probably affect all of them).
Catholic EA: You have a leader you trust and respect, and defer to their judgement.
Sola Fide EA: You read 80k hours and Givewell, but you keep your own spreadsheet of EV calculations.
This is a good point. In my ideal movement, it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don’t want to be closer, and in the sense that I don’t respect them), while being the sort of thing that requires leaders.
As an additional comment, few organizations have splintered more publicly than Catholicism; it seems sort of surreal to me to not check whether or not you ended up on the right side of the splintering. [This is probably more about theological questions than it is about leadership, but as you say, the leadership is relevant!]
This might be a bit off-topic, but I’m very confused by this. I was raised Catholic, and the Wikipedia description matches my understanding of Catholicism (compared to other Christian denominations).
Do you not know who the living Pope is, while still believing he’s the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church?
Or do you disagree with the Wikipedia and the Catholic Church definitions of the core beliefs of Catholicism?
I’m confused by this as well. All the people I know who worked on those trips (either as an organiser or as a volunteer) don’t think it helped their epistemics at all, compared to e.g. reading the literature on development economics. I definitely think on the ground experience is extremely valuable (see this recent comment and this classic post) but I think watching vegan documentaries, visiting farms, and doing voluntourism are all bad ways to improve the accuracy of your map of actual reality.
I understand that the current pope is Pope Francis, but I know much much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky, compared to the pope. I don’t feel this makes me not Catholic; I continue to go to church every Sunday, live my life (mostly) in accordance with Catholic teaching, etc. Similarly, I can’t name my senator or representative and barely know what Biden stands for, but I think I’m reasonably American.
I went on one of those trips as a middle schooler (to Mexico, not Africa). I don’t know that it helped my epistemics much, but I did get like, a visceral experience of what the life of someone in a third-world country would be like, that I wouldn’t have gotten otherwise and no amount of research literature reading would replicate.
I don’t literally think that every EA should book plane tickets to Africa, or break into a factory farm, or whatnot. (though: I would love to see some folks try this!) I do think there’s an overreliance on consuming research and data, and an underreliance on just doing things and having reality give you feedback.
That makes sense, thanks. I would say that compared to Catholicism, in EA you have much less reason to care about the movement leaders, as them having authority to rule over EA is not part of its beliefs.
For what it’s worth, I’ve talked with several people I’ve met through EA who regularly “break” into factory farms[1] or who regularly work in developing countries.
It’s definitely possible that it should be more, but I would claim that the percentage of people doing this is much higher than baseline among people who know about EA, and I think it can have downsides for the reasons mentioned in ‘Against Empathy.’
They claim that they enter them without any breaking, I can’t verify that claim, but I can verify that they have videos of themselves inside factory farms.
Counterargument: I think there are enough different streams of EA that this would not be especially helpful.
There exists a president of GiveWell. There exists a president of 80k Hours. There exists a president of Open Philanthropy. Those three organizations seem pretty close to each other, and there’s a lot of others further afield. I think there would be a lot of debating, some of it acrimonious, about who counted as ‘in the movement’ enough to vote on a president of EA, and it would be easy to wind up with a president that nobody with a big mailing list or a pile of money actually had to listen to.
(Commenting as myself, not representing any org)
Thanks Elizabeth and Timothy for doing this! Lots of valuable ideas in this transcript.
I felt excited, sad, and also a bit confused, since it feels both slightly resonant but also somewhat disconnected from my experience of EA. Resonant because I agree with the college-recruiting and epistemic aspects of your critiques. Disconnected, because while collectively the community doesn’t seem to be going in the direction that I would hope, I do see many individuals in EA leadership positions who I deeply respect and trust to have good individual views and good process and I’m sad you don’t see them (maybe they are people who aren’t at their best online, and mostly aren’t in the Bay).
I am pretty worried about the Forum and social media more broadly. We need better forms of engagement online—like this article + your other critiques. In the last few years, it’s become clearer and clearer to me that EA’s online strategy is not really serving the community well. If I knew what the right strategy was, I would try to nudge it. Regardless I still see lots of good in EA’s work and overall trajectory.
I dispute this. Maybe you just don’t see the effects yet? It takes a long time for things to take effect, even internally in places you wouldn’t have access to, and even longer for them to be externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world. That’s why I’m pretty sure your work has had nontrivial impact. I am not too surprised that its impact hasn’t become apparent to you though.
Personally, I’m still struggling with my own relationship to EA. I’ve been on the EV board for over a year (an influential role at the most influential meta org) and I don’t understand how to use this role to impact EA. I see the problems more clearly than I did before, which is great, but I don’t see solutions or great ways forward yet, and I sense that nobody really does. We’re mostly working on stuff to stay afloat rather than high-level navigation.
I liked Zach’s recent talk/Forum post about EA’s commitment to principles first. I hope this is at least a bit hope-inspiring, since I get the sense that a big part of your critique is that EA has lost its principles.
I’ve repeatedly had interactions with ~leadership EA that asks me to assume there’s a shadow EA cabal (positive valence) that is both skilled and aligned with my values. Or puts the burden on me to prove it doesn’t exist, which of course I can’t do. And what you’re saying here is close enough to trigger the rant.
I would love for the aligned shadow cabal to be real. I would especially love if the reason I didn’t know how wonderful it was was that it was so hypercompetent I wasn’t worth including, despite the value match. But I’m not going to assume it exists just because I can’t definitively prove otherwise.
If shadow EA wants my approval, it can show me the evidence. If it decides my approval isn’t worth the work, it can accept my disapproval while continuing its more important work. I am being 100% sincere here; I treasure the right to take action without having to reach consensus, but this doesn’t spare you from the consequences of hidden action or reasoning.
I think I actually agree with Lincoln here and think he was saying a different thing than your comment here seems to be oriented around.
I don’t think Lincoln’s comment had much to do with assuming there was a shadow EA cabal that was aligned with your values. He said “your words are having an impact.”
Words having impacts just does actually take time. I updated from stuff Ben Hoffman said, but it did take 3-4 years or something for the update to fully happen (for me in particular), and when I did ~finish updating the amount I was going to update, it wasn’t exactly the way Ben Hoffman wanted. In the first 3 years, it’s not like I can show Ben Hoffman “I am ready for your approval”, or even that I’ve concretely updated any particular way, because it was a slow messy process and it wasn’t like I knew for sure how close to his camp I was going to land.
But, it wouldn’t have been true to say “his critiques dropped like a stone through water”. (Habryka has said they also affected him, and this seems generally to have actually reverberated a lot).
I don’t know whether or not your critiques have landed, but I think it is too soon to judge.
How much are you arguing about wording, vs genuinely believe and would bet money that in 3-5 years my work will have moved EA to something I can live with?
I definitely wouldn’t bet money that EA will have evolved into something you can live with. (Neither EA nor the threads of rationality that he affected evolved into things Ben Hoffman could live with.)
But, I do think there is something important about the fact that, despite that, it is inaccurate to say “the critiques dropped like a stone through water” (or, what I interpret that poetry to mean, which is something like “basically nobody listened at all”. I don’t think I misunderstood that part but if I did then I do retract my claim)
The thing I would bet is “your ‘build a lifeboat for some people-like-you to move to somewhere other than EA’ plan will work at least a bit, and, one of the important mechanisms for it working will be those effortful posts you wrote.”
The problem is that Zach does not mention being truth-aligned as one of the core principles that he wants to uphold.
He writes “CEA focuses on scope sensitivity, scout mindset, impartiality, and the recognition of tradeoffs”.
If we take an act like deleting inconvenient information, such as the phrase “Leverage Research”, from a photo on the CEA website, it does violate the principle of being truth-aligned but not any of the ones that Zach mentioned.
If I asked Zach about releasing the people that CEA binds with nondisclosure agreements about that one episode with Leverage (about which we unfortunately don’t know more than that there are nondisclosure agreements), I don’t think he would release them. A sign of being truth-aligned would be to release the information, but none of the principles Zach names point in the direction of releasing people from the nondisclosure agreements.
Saying that your principle is “impartiality” instead of saying that it is “understanding conflicts of interests and managing them effectively” seems to me like a bad sign.
When talking about kidney donation at the start, he celebrates self-chosen sacrifice as an example of great ethics. Kidney donation is extreme virtue signaling. I would rather have EA value honesty and accountability than celebrate self-sacrifice. Instead of celebrating people for taking actions nobody would object to, he could have celebrated Ben Hoffman for having the courage to speak out about problems at GiveWell and facing social rejection for it.
I don’t know the details in the Leverage case, but usually the way this sort of non-disclosure works is that both parties in a dispute, including employees, have non-disclosure obligations. But one party isn’t able to release their (ex) employees unilaterally; the other party would need to agree as well.
That is, I suspect the agreements are structured such that CEA releasing people the way you propose (without Leverage also agreeing, which I doubt they would) would be a willful contract violation.
All the involved Leverage employees told me that they would be fine having the thing released, and that it was CEA who wanted to keep things private (this might be inaccurate, this situation sure involves a lot of people saying contradictory things).
I did talk with Geoff Anders about this. He told me that there’s no legal agreement between CEA and Leverage. However, there are Leverage employees who are ex-CEA and thus bound by legal agreements. Geoff himself said that he would consider it positive for the information to be public, but he would not want to pick another fight with CEA by publicly talking about what happened.
That does sound like learned helplessness and that the EA leadership filters people out who would see ways forward.
Let me give you one:
If people in EA considered her critiques to have real value, then the obvious step would be to give Elizabeth money to write more. Given that she has a Patreon, the way to give her money is pretty straightforward. If the writing influences what happens in EV board discussions, paying Elizabeth for the value she provides to the board would be straightforward.
If she were paid decently, I would expect she would feel she’s making an impact.
Paying Elizabeth might not be the solution to all of EA’s problems, but it’s a way to signal priorities. Estimate the value she provides to EA, pay her for that value, and publicly publish, as EV, a writeup saying that this is the amount of value EV thinks she provides to EA and that she was paid by EV accordingly.
First of all, thank you, love it when people suggest I receive money. Timothy and I have talked about fundraising for a continued podcast. I would strongly prefer most of the funding be crowdfunding, for the reason you say. If we did this it would almost certainly be through Manifund. Signing up for Patreon and noting this as the reason also works, although for my own sanity this will always be a side project.
I should note that my work on EA up through May was covered by a Lightspeed grant, but I don’t consider that EA money.
Yes, giving money in the form of a grant might not be the best way to fund good posts, as it makes it harder to criticize the entity that funds you, and decentralized crowdfunding is better.
Maybe an EV blog post saying something like:
If the problem is, as lincolnquirk describes, that in general they don’t have many ideas about how to do better, and your writing had a nontrivial impact by giving ideas about what to do better, then that would be the straightforward way forward.
The desire for crowdfunding is less about avoiding bias[1] and more that this is only worth doing if people are listening, and small donors are much better evidence on that question than grants. If EV gave explicit instructions to donate to me it would be more like a grant than spontaneous small donors, although I in general agree people should be looking for opportunities they can beat GiveWell.
ETA: we were planning on waiting on this but since there’s interest I might as well post the fundraiser now.
I’m fortunate to have both a long runway and sources of income outside of EA and rationality. One reason I’ve pushed as hard as I have on EA is that I had a rare combination of deep knowledge of, and financial independence from, EA. If I couldn’t do it, who could?
Why do you think that this is the case?
Reading this makes me feel really sad because I’d like to believe it, but I can’t, for all the reasons outlined in the OP.
I could get into more details, but it would be pretty costly for me for (I think) no benefit. The only reason I came back to EA criticism was that talking to Timothy feels wholesome and good, as opposed to the battery acid feeling I get from most discussions of EA.
I want to register high appreciation of Elizabeth for her efforts and intentions described here. <3
The remainder of this post is speculations about solutions. “If one were to try to fix the problem”, or perhaps “If one were to try to preempt this problem in a fresh community”. I’m agnostic about whether one should try.
Notes on the general problem:
I suspect lots of our kind of people are not enthusiastic about kicking people out. I think several people have commented, on some cases of seriously bad actors, that it took way too long to actually expel them.
Therefore, the idea of confronting someone like Jacy and saying “Your arguments are bad, and you seem to be discouraging critical thinking, so we demand you stop it or we’ll kick you out” seems like a non-starter in a few ways.
I guess one could have lighter policing of the form “When you do somewhat-bad things like that, someone will criticize you for it.” Sort of like Elizabeth arguing against Jacy. In theory, if one threw enough resources at this, one could create an environment where Jacy-types faced consistent mild pushback, which might work to get them to either reform or leave. However, I think this would take a lot more of the required resources (time, emotional effort) than the right people are inclined to give.
Those who enjoy winning internet fights… might be more likely to be Jacy-types in the first place. The intersection of “happy to spend lots of time policing others’ behavior” and “not having what seem like more important things to work on” and “embodies the principles we hope to uphold” might be pretty small. The example that comes to mind is Reddit moderators, who have a reputation for being power-trippers. If the position is unpaid, then it seems logical to expect that result. So I conclude that, to a first approximation, good moderators must be paid.
Could LLMs help with this today? (Obviously this would work specifically for online written stuff, not in-person.) Identifying bad comments is one possibility; helping write the criticism is another.
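As a sketch of the “identifying bad comments” half: something like the following could work, assuming the OpenAI Python SDK and an API key in the environment. The model name and rubric wording are placeholder assumptions, not recommendations, and a human moderator would still review anything flagged.

```python
# Rough sketch of LLM-assisted comment triage, not a vetted moderation pipeline.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "You help volunteer moderators triage forum comments. "
    "Flag comments that ignore the argument actually being made, assert a "
    "consensus that doesn't exist, or discourage critical thinking. "
    "Reply with 'FLAG: <one-sentence reason>' or 'OK'."
)

def triage(comment_text: str) -> str:
    # Ask the model to apply the rubric to a single comment.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": comment_text},
        ],
    )
    return response.choices[0].message.content
```

The point is only to cheapen the “notice it in the first place” step; whether the rubric captures the right notion of a bad comment is the hard part.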
Beyond that, one could have “passive” practices, things that everyone was in the habit of doing, which would tend to annoy the bad actors while being neutral (or, hopefully, positive) to the good actors.
(I’ve heard that the human immune system, in certain circumstances, does basically that: search for antibodies that (a) bind to the bad things and (b) don’t bind to your own healthy cells. Of course, one could say that this is obviously the only sensible thing to do.)
Reading the transcript, my brain generated the idea of having a norm that pushes people to do exercises of the form “Keep your emotions in check as you enumerate the reasons against your favored position, or poke holes in naive arguments for your favored position” (and possibly alternate with arguing for your side, just for balance). In this case, it would be “If you’re advocating that everyone do a thing always, then enumerate exceptions to it”.
Fleshing it out a bit more… If a group has an explicit mission, then it seems like one could periodically have a session where everyone “steelmans” the case against the mission. People sit in a circle, raising their hands (or just speaking up) and volunteering counterarguments, as one person types them down into a document being projected onto a big screen. If someone makes a mockery of a counterargument (“We shouldn’t do this because we enjoy torturing the innocent/are really dumb/subscribe to logical fallacy Y”), then other people gain status by correcting them (“Actually, those who say X more realistically justify it by …”): this demonstrates their intelligence, knowledge, and moral and epistemic strength. Same thing when someone submits a good counterargument: they gain status (“Ooh, that’s a good one”) because it demonstrates those same qualities.
Do this for at least five minutes. After that, pause, and then let people formulate the argument for the mission and attack the counterarguments.
It’s worth noting that Jacy was sort-of kicked out (see https://nonprofitchroniclesdotcom.wordpress.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/ )
To me, that will lead to an environment where people think that they are engaging with criticism without having to really engage with the criticism that actually matters.
From Scott’s Criticism Of Criticism Of Criticism:
If you frame the criticism as having to be about the mission of psychiatry, it’s easy for people to see “Is it ethical to charge poor patients three-digit fees for no-shows?” as off-topic.
In an organization like GiveWell, people who criticize GiveWell’s mission in such a way are unlikely to talk about the ways in which GiveWell favors raising more donations over being more truthseeking, which Ben Hoffman described.
This is a possible outcome, especially if the above tactic were the only tactic to be employed. That tactic helps reduce ignorance of the “other side” on the issues that get the steelmanning discussion, and hopefully also pushes away low-curiosity tribalistic partisans while retaining members who value deepening understanding and intellectual integrity. There are lots of different ways for things to go wrong, and any complete strategy probably needs to use lots of tactics. Perhaps the most important tactic would be to notice when things are going wrong (ideally early) and adjust what you’re doing, possibly designing new tactics in the process.
Also, in judging a strategy, we should know what resources we assume we have (e.g. “the meetup leader is following the practice we’ve specified and is willing to follow ‘reasonable’ requests or suggestions from us”), and know what threats we’re modeling. In principle, we might sort the dangers by [impact if it happens] x [probability of it happening], enumerate tactics to handle the top several, do some cost-benefit analysis, decide on some practices, and repeat.
My understanding/guess is that “Is it ethical to charge poor patients three-digit fees for no-shows?” is an issue where the psychiatrists know the options and the impacts of the options, and the “likelihood of people actually coming to blows” comes from social signaling things like “If I say I don’t charge them, this shows I’m in a comfortable financial position and that I’m compassionate for poor patients”/”If I say I do charge them, this opens me up to accusations (tinged with social justice advocacy) of heartlessness and greed”. I would guess that many psychiatrists do charge the fees, but would hate being forced to admit it in public. Anyway, the problem here is not that psychiatrists are unaware of information on the issue, so there’d be little point in doing a steelmanning exercise about it.
That said, as you suggest, it is possible that people would spend their time steelmanning unimportant issues (and making ‘criticism’ of the “We need fifty Stalins” type). But if we assume that we have one person who notices there’s an important unaddressed issue, who has at least decent rapport with the meetup leader, then it seems they could ask for that issue to get steelmanned soon. That could cover it. (If we try to address the scenario where no one notices the unaddressed issue, that’s a pretty different problem.)
If I say that other psychiatrists at the conference are engaging in an ethical lapse when they charge late fees to poor people then I’m engaging in an uncomfortable interpersonal conflict. It’s about personal incentives that actually matter a lot to the day-to-day practice of psychiatry.
While the psychiatrists are certainly aware that they charge poor people, they are likely thinking about it as business as usual instead of considering it an ethical issue.
If we take Scott’s example of psychiatrists talking about racism being a problem in psychiatry, I don’t think the problem is that racism is unimportant. The problem is rather that you can score points by virtue signaling about the problem, and find common ground around the virtue signaling if you are willing to burn a few scapegoats, while talking about the issue of charging poor people late fees is divisive.
Washington DC is one of the most liberal places in the US, with people who are good at virtue signaling and pretending they care about “solving systematic racism”, yet they passed a bill to require college degrees for childcare services. If you apply the textbook definition of systematic racism, requiring college degrees for childcare services is about creating a system that prevents poor Black people from looking after children.
Systematic racism that prevents poor Black people from offering childcare services is bad, but the people in Washington DC are good at rationalising. The whole discourse about racism is of a nature where people score points by virtue signaling about how much they care about fighting racism. They practice steelmanning racism all the time, and steelmanning the concept of systematic racism, and yet they pass systematically racist laws because they don’t like poor Black people looking after their children.
If you tell White people in Washington DC who are already steelmanning systematic racism to the best of their ability that they should steelman it more because they are still inherently racist, they might even agree with you, but it’s not what’s going to make them change the laws so that more poor Black people will look after their children.
If you want to reduce ignorance of the “other side”, listening to the other side is better than trying to steelman the other side. Eliezer explained problems with steelmanning well in his interview with Lex Fridman.
Yes, as far as resources go, you have to keep in mind that all people involved have their interests.
When it comes to threat modeling, reading through Ben Hoffman’s critique of GiveWell, based on his employment there, gives you a good idea of what you want to model.
Issues in transcript labeling (I’m curious how much of it was done by machine):
After 00:07:55, a line is unattributed to either speaker; looks like it should be Timothy.
00:09:43 is attributed to Timothy but I think must be Elizabeth.
Then the next line is unattributed (should be Timothy).
After 00:14:00, unattributed (should be Timothy).
After 00:23:38, unattributed (should be Timothy).
After 00:32:34, unattributed (probably Elizabeth).
Awesome, thank you! I’m not sure if we’re going to correct this; it’s a pain in the butt to fix, especially in the YouTube version, and Elizabeth (who has been doing all the editing herself) is sick right now.
I work at CEA, and I recently became the Interim EA Forum Project Lead. I’m writing this in a personal capacity. This does not necessarily represent the views of anyone else at CEA.
I’m responding partly because my new title implies some non-zero amount of “EA leadership”. I don’t think I’m the person anyone would think of when they think “EA leadership”, but I do in fact have a large amount of say wrt what happens on the EA Forum, so if you are seriously interested in making change I’m happy to engage with you. You’re welcome to send me a doc and ask me to comment, and/or if you want to have a video call with me, you can DM me and I’ll send you a link.
Hi Elizabeth. I wanted to start by saying that I’m sorry you feel betrayed by EA. I’m guessing I have not felt any betrayal that painful in my own life, and I completely understand if you never want to interact with EA again. I don’t think EA is right for everyone, and I have no desire to pressure anyone into doing something they would regret.
I have some thoughts and reactions to the things you (and Timothy) said. On a meta level, I want to say that you are very welcome not to engage with me at all. I will not judge you for this, nor should any readers judge you. I am not trying to burden you to prove yourself to me or to the public.
I have three main goals in writing this:
Because I am new to this role, I think I have a lot to learn about how to best run the EA Forum. It sounds like both of you have thought a lot about how the EA community can improve (or at least, how it has failed, in your eyes). Essentially, it seems like it is in both of our best interests to talk about this with each other.
You don’t know me, but in my opinion I take truth-seeking seriously. I feel confused when reading/listening to your account, because it seems like our experiences differ in some ways, and I’m not sure if one of us is factually incorrect, or if we agree on the facts and have different standards or definitions for terms like “integrity”, or some other thing I haven’t thought of. So another goal I have is to highlight places where I am concerned about there being inaccuracies, outdated information, or I just have a personal disagreement.
For example, there are relatively few times in the interview where Timothy questions or challenges your points. It’s possible that the goal of this interview was not to be truth-seeking, but instead just to communicate your perspective. If so, then I have no problem with that, but as a reader I would find that surprising and I would suggest that you be more explicit about the goals of the interview to avoid misleading others. I still think my statement above is factually correct, but I certainly don’t think Elizabeth is at fault for any of the host’s actions, and I would like to avoid implying that. I think it’s customary on LW to leave in edits as crossed-out text, so I’ll do that here.
I will emphasize again that, even though I am starting a conversation, you are very welcome to ignore my comments and move on with your life. :)
As referenced in another comment, under Zach, CEA is taking more of a stewardship role towards EA and I think CEA being more open is an important part of that. So in the spirit of feeling more responsible for EA, I think it is valuable for someone at CEA to engage with this publicly.
I listened to the video and read the transcript, so I’ll structure much of this as responding to quotes from the transcript.
RE: “recruiting heavily and dogmatically among college students”:
I’m certainly no expert, but my understanding is that, while this is a relatively accurate description of how things worked when there was FTX money available, things have been significantly different since then. For example, Jessica McCurdy is Head of Groups at CEA (and I believe she took this role after FTX) and wrote this piece about potential pitfalls in uni group organizing, which includes points about creating truth-seeking discussions and finding the right balance of openness to ideas. I would say that this is some evidence that currently, recruiting is more careful than you describe, because, as Head of Groups at CEA, her views are likely a significant influence on uni group culture.
I wasn’t involved with EA in college, but my own relevant experience is in Virtual Programs. I’ve participated in both the Intro and Advanced courses, plus facilitated the Intro course once myself. In my opinion, both myself and the other facilitators were very thoughtful about not being dogmatic, and not pressuring participants into thinking or acting in specific ways. I also talk with a fair number of younger people at conferences who are looking for advice, and something I have repeated many times is that young people should be really careful with how involved with EA they get, because it’s easy to accidentally get too involved (ex. all your friends are EAs). I’ve encouraged multiple young people not to take jobs at EA orgs. As I alluded to above, I really do not want to pressure anyone into doing something that they would regret.
RE: “the way EA is doing it can’t filter and inform the way healthy recruiting needs to”
I’d be really curious to hear more about what you mean by this, especially if it is unrelated to Jessica’s piece above.
RE: “if I believe that EA’s true values, whatever that means, are not like in high integrity or not aligned with the values I want it to have, then I’m not going to want to lend my name to the movement”
I agree with this. I certainly internally struggled with this following FTX. However, in my experience of meeting people in a variety of EA contexts, from different places around the world, I would say that they are far more aligned with my values than like, people on average are. This is particularly clear when I compare the norms of my previous work places with the norms of CEA. I’ll quote myself from this recent comment:
“When I compare my time working in for-profit companies to my time working at CEA, it’s pretty stark how much more the people at CEA care about communicating honestly. For example, in a previous for-profit company, I was asked to obfuscate payment-related changes to prevent customers from unsubscribing, and no one around me had any objection to this.”
Perhaps more importantly, in my opinion, speaking for no one else, I think Zach in particular shares these values. In a different recent comment, I was responding to a critical article about the Forum and tried to clarify how much staff time put towards the Forum costs. I thought my original comment was open/clear about these costs, but Zach felt that it was misleading, because it did not talk about the indirect overheads that CEA pays per employee, and this could lead readers to think that our team is more cost-effective than it actually is. You can read more in my EDIT section, which I added based on his suggestion. I personally think this is an example of Zach having high integrity and being truth-seeking, and after this exchange I personally updated towards being more optimistic about his leadership of CEA. Of course you can’t judge any person on a single action, so just like in any other context, you should only think of this as one data point.
RE: “I had had vague concerns about EA for years, but had never written them up because I couldn’t get a good enough handle on it. It wouldn’t have been crisp, and I had seen too many people go insane with their why I left EA backstories. I knew there were problems but couldn’t articulate them and was in, I think, a pretty similar state to where you are now. Then I found a crisp encapsulation where I could gather data and prove my point and then explain it clearly so everyone could see it.”
I would be very interested to read a crisp encapsulation. Apologies if I missed it, but I didn’t see any specific concerns about EA overall that rise to the level of like, preventing EA from reaching its potential for improving the world, either in your transcript or in your two linked articles in the video description. (Perhaps this is a misunderstanding on my part — perhaps you are highlighting problems that you don’t see as very severe for EA overall, but you left EA because of the lack of response to your writing rather than the severity of the problems?)
The two linked articles are:
EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem
This post seemed like a lot of work, and I appreciate that you did that. I skimmed it to remind myself of the contents, and I generally agree with it. I pretty strongly agree with our Forum norms which I feel have a similar spirit. Some example snippets:
Don’t mislead or manipulate.
Aim to inform, rather than persuade.
Clarity about what you believe, your reasons for believing it (this could be “I have this intuition for some reason I can’t quite track”), and what would cause you to change your mind.
Note that I am not vegan and I don’t know much about the base facts around veganism and nutrition.
I also don’t recall any time in which someone in an EA context pressured me to be vegan. The closest thing was a student at an EAGx was visibly disappointed when I told them I wasn’t vegan, which did make me feel bad.
I think veganism can get tied up in people’s identities and can come with baggage. This can make it particularly hard to be truth-seeking around. I think most topics discussed in EA contexts have significantly less baggage, so while I agree that this is concerning, I don’t view this specific example as being strong evidence of a severe broader problem.
I think it’s a very helpful flag that we should be keeping an eye on topics that have some baggage. My understanding is that the Forum’s moderation team does this to some extent, but they are relatively hands-off (for example, they are not required to read everything posted during their on-call week), and so perhaps we should revisit the responsibilities of this role and consider if they should be more hands-on. If you have thoughts about this please let me know.
From your transcript: “EA had failed in noticing these obvious lies that everyone knew”
I agree that, if this is happening, this is bad. I’ll say that I’m not a fan of this exaggerated framing, and I find it quite anti-truth-seeking. But to address the actual point: if you ask me “Sarah, how can you be sure this is not currently happening?” I would say that I certainly cannot prove that this is not happening, since I am privy to a vanishingly small percentage of communications that happen in EA contexts — like, I barely have time to read the Forum. I end up talking with a fair number of people in EA contexts for my job, and I personally haven’t noticed signs of anything in this category happening recently.
Actually, I have a specific suggestion that might address your concerns here. If you notice something in this category happening again, and you want someone to do something about it, you can reach out to me directly. For example, if you think “no one is saying this thing [on the Forum] because everyone is too scared”, well I am probably not too scared, so I can just post something about it with my account.
On the specific points, I think sometimes your bar is too high. For example, the “Ignore the arguments people are actually making” section really resonated with me, because I also find it annoying when people do that. But I think that’s just how humans work. In my experience, people are probably somewhat better at this than average in EA contexts, and people in rationalist contexts are not better at this than people in EA contexts. I certainly appreciate the reminder and encouragement to do better, and I think it’s always good to strive to do better, but IMO this is pretty intractable.
I don’t spend a ton of time on LW so perhaps those users are overall better at these things than the EA community is. However I still find your bar for the “acceptable level of truth-seeking” pretty vague, and I will say that my in-person interactions with rationalists have not impressed me[1] wrt their ability to be truth-seeking (like, they care about this more than most people, which I appreciate, but I’ve seen them make plenty of mistakes).
Truthseeking is the ground in which other principles grow
Our team actually recently curated this post on the Forum without knowing about your interview 🙂
I like and generally agree with the points you make about how to be truth-seeking, and about why that is vitally important (as does Will who curated the post).
I think it’s pretty sus for someone’s reaction to be “I already do all the things you suggest”, but I kinda want to say something in that direction (at least I want to say that I agree these are good and I strive to do them, though perhaps not in the same ways or to the same extent that you think people should). I don’t want to make this novel of a comment any longer by providing evidence of my actions, but I’m happy to dive into it if anyone is interested.
After re-reading your “EA Vegan Advocacy” post, my guess is that you don’t intend this “Truthseeking…” post to provide concrete evidence/data of your concerns about EA. Please let me know if I’m mistaken about this.
RE: “I would be delighted if that happened. But I think it gets harder to do that every year. As EA dilutes itself doubling with college students every two years.”
It depends on how you define who is “EA”, but based on recent data I have seen, the major growth was in 2022. Therefore, I think it’s broadly accurate to say that EA doubled between 2020 and 2022, but I don’t think it’s accurate to say that about any time after 2022. In particular, after FTX, growth has slowed pretty dramatically across basically anything you’d consider to be EA.
I think the claim that “the doubling is due to college students” needs evidence to support it. My understanding is that many EA groups got more resources and grew around 2022, not just university groups. And the marketing that drove a lot of the growth in 2022 was not aimed at college students, so I don’t see why, for example, the increase in EA Forum users would all be from college students.
RE: “but EA makes a lot of noise about arguing within itself and yet there’s a reason so much criticism on EA forum is anonymous And the criticism on LessWrong is all done under people’s stable pseudonym or real name.”
There are probably many reasons for this, and I think one is that there are fewer power structures within the rationalist community (I don’t think there is the equivalent of OP — maybe Eliezer?). The EA community has more major funders. I think people in the EA community also tend to be more risk-averse than LWers, so they are more likely to default to writing anonymously. And I believe that culturally, LW socially rewards blunt criticism more than the EA Forum does, so there is extra incentive to do that since it’s more likely to be good for your social status.

On the other hand, my impression is that the EA community is more welcoming of criticism of people/orgs in power (in the community, not in the world) than LW is, so it’s possible that criticism on the EA Forum has more teeth. For example, there is a fair amount of criticism of OP and CEA that is well-received on the Forum, and I don’t know of a similar situation for Eliezer and Lightcone on LW (I’m less familiar with LW, so definitely feel free to correct me if I’m mistaken here). So in some sense, I think more anonymity is a result of the fact that holding power in EA is more consequential, and it is more socially acceptable to criticize those with actual power in the community. To be clear, I’m not trying to encourage people to be scared of posting criticism with their real name, and I don’t know of any specific cases where someone was harmed by doing that, but I just think it’s reasonable for a person who doesn’t know the consequences of posting criticism to default to doing so anonymously.
In my opinion, it’s not clear that, in an ideal world, all criticism would be written using someone’s real name. I feel pretty uncertain about this, but it seems to me that we should be supportive of people sharing criticism even if they have personal reasons for staying anonymous. Personally, I really appreciate receiving feedback in general, and I would prefer someone give it to me anonymously than not at all.
RE: “E: Which EA leaders do you most resonate with? T: …it’s a list of friends…most of my friends don’t actually want to be talked about in public. I think it speaks against the health of the EA movement right now”
I agree with this, which is why I feel optimistic about CEA putting significant resources towards EA communications work. My guess is that this will be some combination of being more open and publicly communicating about CEA itself, and putting more effort into proactively writing about what EA is and what people in the community are doing for a broader audience.
RE: “I would suggest that if you don’t care about the movement leaders who have any steering power. You’re, you’re not in that movement.“
I will say that I identified as an (aspiring) effective altruist for many years before I could name or identify any EA leaders (I only started learning names when I was hired at CEA). I simply found the core principles very compelling, and occasionally read some relevant articles, and donated money via GiveWell and some other places. You could argue that I wasn’t “in the movement”, but I do keep my identity small and the principles resonated with me enough to add that identity.
RE: “ea would be better served by the by having leadership that actually was willing to own their power more”
I would say that under Zach, CEA as an institution is taking more of a leadership role than it previously had been (which was basically none), and people within CEA are more empowered to “own our power” (for example, the Forum team will likely do more steering than before, which again was minimal). EDIT: Based on responses, I’m a bit worried that this is misleading. So I will disconnect this point from the actual quote, and add some clarifications.
RE: “if you’re going to have a big enough movement, you actually want leaders who are. Not just leading in their shadow but are like hey, I feel called to lead which means like i’m gonna like Be the scapegoat and i’m gonna be the one like making big level observations and if you come to me with a problem I’m gonna try to align the integrity of our entire community around The truth.”
I do expect this means that CEA will be the scapegoat more often. I also expect CEA will put more resources towards high level observations about the EA community (for example, I referenced data about EA growth earlier, which came from a CEA project). I’m less sure about that last point, because EA is ultimately a framework (like the scientific method is), and we can’t be sure how many people are out there inspired by EA principles but do not, like, read the EA Forum. I guess you could just define “the entire community” as “people who read the EA Forum” to circumvent that issue. In which case, I expect CEA to do more of that going forward.
Concluding thoughts
I understand that this is an emotional topic for you; however, I was surprised at the amount of anti-truth-seeking I saw in your responses. (My bar for you is perhaps unreasonably high because you write about truth-seeking a lot, and I assume you hold yourself to the standards that you preach.) For example, I think the state of EA university group recruiting is a key factor in your beliefs, but it also seems like you do not have up-to-date information (nor did you attempt to find that information before stating your internal view as if it were fact). You often use exaggerated language (for example, “EA had failed in noticing these obvious lies that everyone knew”, which is certainly not literally true), which I think is actually quite harmful for truth-seeking. Outside of the “EA Vegan Advocacy” post, I see surprisingly few instances of you publicly thinking through ways in which you might be wrong, or even gesturing towards the possibility that you might be wrong, or at least hinting at what your confidence levels are. I genuinely want to engage with your concerns about EA, but I feel like this post (even together with your two posts linked above) is not epistemically legible[2] enough for me to do that. I can’t find a clear core claim to grasp onto.
“Integrity” is a concept that comes up a lot in the interview. I haven’t really addressed it in my comment so I figured I should do so here. Personally I have some complicated/unresolved feelings about what integrity actually means[3], so I don’t know to what extent I have it. I’m happy to dive into that if anyone is interested. If you want to test me and tell me objectively how much integrity I have, I’m open to that — that sounds like it would be helpful for me to know as well. :)
To close, I’ll just caveat that I spent a long time writing this comment, but because I wrote about so many things, I wouldn’t be surprised if I said something that’s wrong, or if I misunderstood something that was said, or if I change my mind about something upon further reflection. I’m generally happy to receive feedback, clarify anything I said that was unclear, and discuss these issues further. Specifically, I have significant influence over the EA Forum, so I would be particularly interested to discuss issues and improvements focused on that project.
Meaning, based on rationalist writings, I had higher expectations, so I was disappointed to find they did not meet those expectations.
I had forgotten that you were the person who coined this term — thank you for that, I find it very helpful!
For example, I think people sometimes mix up the concept of “having integrity” with the concept of “acting in the same way that I would” or even “acting in a way that I would find reasonable”, but my understanding is that they are distinct. I’m quite unsure about this though so I could certainly be wrong!
There’s a lot here and if my existing writing didn’t answer your questions, I’m not optimistic another comment will help[1]. Instead, how about we find something to bet on? It’s difficult to identify something both cruxy and measurable, but here are two ideas:
I see a pattern of:
1. CEA takes some action with the best of intentions
2. It takes a few years for the toll to come out, but eventually there’s a negative consensus on it.
3. A representative of CEA agrees the negative consensus is deserved, but since it occurred under old leadership, doesn’t think anyone should draw conclusions about new leadership from it.
4. CEA announces new program with the best of intentions.
So I would bet that within 3 years, a CEA representative will repudiate a major project occurring under Zach’s watch.
I would also bet on more posts similar to Bad Omens in Current Community Building or University Groups Need Fixing coming out in a few years, talking about 2024 recruiting.
Although you might like Change my mind: Veganism entails trade-offs, and health is one of the axes (the predecessor to EA Vegan Advocacy is not Truthseeking) and Truthseeking when your disagreements lie in moral philosophy and Love, Reverence, and Life (dialogues with a vegan commenter on the same post)
Thanks! I’m down to bet, though I don’t feel like it would make sense for me to take either of those specific bets. I feel pretty clueless about whether “a CEA representative will repudiate a major project occurring under Zach’s watch”. I guess I think it’s reasonable for someone who was just hired at CEA not to be held personally responsible for projects that started and ended before they were hired (though I may be misunderstanding your proposed bet). I also have very little information about the current state of EA university group recruiting, so I wouldn’t be that surprised if, as you predict, more posts similar to Bad Omens in Current Community Building or University Groups Need Fixing come out in a few years, talking about 2024 recruiting. TBH I’m still not clear on what we disagree about, or even whether we actually disagree about anything. 😅
Apologies if I wasn’t clear about this, but my main comment was primarily a summary of my personal perspective, which is based on a tiny fraction of all the relevant information. I’m very open to the possibility that, for example, EA university group recruiting is pressuring students more than I would find appropriate. It’s just that, based on the tiny fraction of information I have, I see no evidence of that and only see evidence of the opposite. I would be really interested to hear if you have done a recent investigation and have evidence to support your claims, because you would have a fair chance of convincing me to take some action.
Anyway, I appreciate you responding and no worries if you want to drop this. :) My offer to chat synchronously still stands, if you’re ever interested. Though since I’m in an interim position, I’m not sure how long I will have the “EA Forum Project Lead” title.
I think this would be a mistake (or more likely I think you and Elizabeth mean different things here.)
As you mention in other parts of your comment, most people who consider themselves aligned with EA don’t know or care much about CEA, and coupling their alignment with EA as principles with an alignment with CEA as an organization seems counterproductive.
Ah interesting, yeah it’s certainly possible that I misunderstood Elizabeth here. Apologies if that’s the case!
I’ll try to explain what I mean more, since I’m not sure I understand how my interpretation differs from Elizabeth’s original intent. So in the past, CEA’s general stance was one more like “providing services” to help people in the EA community improve the world. Under Zach, we are shifting in the direction of “stewardship of EA”. I feel that this implies CEA should be more proactive and take more responsibility for the trajectory of EA than it has in the past (to be clear, I don’t think this means we should try to be the sole leader, or give people orders, or be the only voice speaking for EA). One concrete example is about how much steering the Forum team does: in the past, I would have been more hesitant to steer discussions on the Forum, but now it feels more appropriate (and perhaps even necessary) for the Forum team to be more opinionated and steer discussions in that space.
Sorry, I don’t feel like I understand this point — could you expand on this, or rephrase?
As a personal example, I feel really aligned with EA principles[1], but I feel much less sure about CEA as an organization.[2]
If the frame becomes “EA is what CEA does”, you would lose a lot of the value of the term “EA”, and I think very few people would find it useful.
See why effective altruism is always lowercase, and William MacAskill’s “effective altruism is not a package of particular views.”
My understanding is that you agree with me, while Elizabeth would want effective altruism to be uppercase in a sense, with a package of particular views that she can clearly agree or disagree with, and an EA Leader that says “this is EA” and “this is not EA.” (Apologies if I misunderstood your views)
“CEA as an institution is taking more of a leadership role” could be interpreted as saying that CEA is now more empowered to be the “EA Leader” that decides what is EA, but I think that’s not what you mean from the rest of your comment.
Does that make sense?
For me, the EA principles are these:
I think these are principles that most people disagree with, and most people are importantly wrong.
I think they are directionally importantly right in my particular social context (while of course they could be dangerous in other theoretical contexts)
Despite thinking that all the people I’ve interacted with who work there greatly care about those same principles.
Seeing my statements reflected back is helpful, thank you.
I think Effective Altruism is upper case and has been for a long time, in part because it aggressively recruited people who wanted to follow[1]. In my ideal world it both has better leadership and needs less of it, because members are less dependent.
I think rationality does a decent job here. There are strong leaders of individual fiefdoms, and networks of respect and trust, but it’s much more federated.
Which is noble and should be respected; the world needs more followers than leaders. But if you actively recruit them, you need to take responsibility for providing leadership.
Thanks, that’s very helpful! Yeah I believe you’ve correctly described my views. To me, EA is defined by the principles. I’ll update my original comment, since now it seems that bit is misleading.
(I still think there is something there that gestures in the direction that Elizabeth is going. When I say “CEA is taking more of a leadership role”, I simply mean that literally — like, previously CEA was not viewing itself as being in a leadership role, and now it is doing that a non-zero amount. I think it matters that someone views themselves as even slightly responsible for the trajectory of EA, and you can’t really be responsible without wielding some power. So that’s how I read the “willing to own their power more” quote.)
fwiw, I think it’d be helpful if this post had the transcript posted as part of the main post body.
I’m curious why this feels better, and for other opinions on this.
You could put it in a collapsible section, so that it’s easy to get to the comment section by-default.
I still consider myself to be EA, but I do feel like a lot of people calling themselves that and interacting with the EA forum aren’t what I would consider EA. Amusingly, my attempts to engage with people on the EA forum recently resulted in someone telling me that my views weren’t EA. So they also see a divide. What to do about two different groups wanting to claim the same movement? I don’t yet feel ready to abandon EA. I feel like I’m a grumpy old man saying “I was here first, and you young’uns don’t understand what the true EA is!”
A link to a comment I made recently on the EA forum: https://forum.effectivealtruism.org/posts/nrC5v6ZSaMEgSyxTn/discussion-thread-animal-welfare-vs-global-health-debate?commentId=bHeZWAGB89kDALFs3
Thoughts on how this might be done:
Interview a bunch of people who became disillusioned. Try to identify common complaints.
For each common complaint, research organizational psychology, history of high-performing organizations, etc. and brainstorm institutional solutions to address that complaint. By “institutional solutions”, I mean approaches which claim to e.g. fix an underlying bad incentive structure, so it won’t require continuous heroic effort to address the complaint.
Combine the most promising solutions into a charter for a new association of some kind. Solicit criticism/red-teaming for the charter.
Don’t try to replace EA all at once. Start small by aiming at a particular problem present in EA, e.g. bad funding incentives, criticism (it currently sucks too much to both give and receive), or bad feedback loops in the area of AI safety. Initially focus on solving that particular problem, but also build in the capability to scale up and address additional problems if things are going well.
Don’t market this as a “replacement for EA”. There’s no reason to have an adversarial relationship. When describing the new thing, focus on the specific problem which was selected as the initial focus, plus the distinctive features of the charter and the problems they are supposed to solve.
Think of this as an experiment, where you’re aiming to test one or more theses about what charter content will cause organizational outperformance.
I think it would be interesting if someone put together a reading list on high-performing organizations, social movement history, etc. I suspect this is undersupplied on the current margin, compared with observing and theorizing about EA as it exists now. Without any understanding of history, you run the risk of being a “general fighting the last war”—addressing the problems EA has now, but inadvertently introducing a new set of problems. Seems like the ideal charter would exist in the intersection of “inside view says this will fix EA’s current issues” and “outside view says this has worked well historically”.
A reading list might be too much work, but there’s really no reason not to do an LLM-enabled literature review of some kind, at the very least.
I also think a reading list for leadership could be valuable. One impression of mine is that “EA leaders” aren’t reading books about how to lead, research on leadership, or what great leaders did.
I feel pretty uncertain to what extent I agree with your views on EA. But this podcast didn’t really help me decide because there wasn’t much discussion of specific evidence. Where is all of it written down? I’m aware of your post on vegan advocacy but unclear if there are lots more examples. I also heard a similar line of despair about EA epistemics from other long-time rationalists when hanging around Lighthaven this summer. But basically no one brought up specific examples.
It seems difficult to characterize the EA movement as a monolith in the way you’re trying to do. The case of vegan advocacy is mostly irrelevant to my experience of EA. I have little contact with vegan advocates and most of the people I hang around in EA circles seem to have quite good epistemics.
However I can relate to your other example, because I’m one of the “baby EAs” who was vegetarian and was in the Lightcone offices in summer 2022. But my experience provides something of a counter-example. In fact, I became vegetarian before encountering EA and mostly found out about the potential nutritional problems from other EAs. When you wrote your post, I got myself tested for iron deficiency and started taking supplements (although not for iron deficiency). I eventually stopped being vegetarian, instead offsetting my impact with donations to animal charities, even though this isn’t very popular in EA circles.
My model is that people exist on a spectrum from weirdness to normy-ness. The weird people are often willing to pay social costs to be more truthful, while the more normy people will refrain from saying and thinking the difficult truths. But most people are mostly fixed at a certain point on the spectrum. The truth-seeking weirdos probably made up a larger proportion of the early EA movement, but I’d guess in absolute terms the number of those sorts of people hanging around EA spaces has not declined, and their epistemics have not degraded—there just aren’t very many of them in the world. But these days there is a greater number of the more normy people in EA circles too.
And yes, it dilutes the density of high epistemics in EA. But that doesn’t seem like a reason to abandon the movement. It is a sign that more people are being influenced by good ideas and that creates opportunities for the movement to do bigger things.
When you want to have interesting discussions with epistemic peers, you can still find your own circles within the movement to spend time with, and you can still come to the (relative) haven of LessWrong. If LessWrong culture also faced a similar decline in epistemic standards I would be much more concerned, but it has always felt like EA is the applied, consumer facing product of the rationalist movement, that targets real-world impact over absolute truth-seeking. For example, I think most EAs (and also some rationalists) are hopelessly confused about moral philosophy, but I’m still happy there’s more people trying to live by utilitarian principles, who might otherwise not be trying to maximize value at all.
there are links in the description of the video
That was an interesting conversation.
I do have some worries about the EA community.
At the same time, I’m excited to see that Zach Robinson has taken the reins at CEA, and I’m looking forward to seeing how things develop under his leadership. The early signs have been promising.
What concrete things did he change at CEA that are promising signs?
I thought that this post on strategy and this talk were well done. Obviously, I’ll have to see how this translates into practice.
The post basically says that taking actions like “running EA Global” is the “principles-first” approach because it is not “cause-first”. None of the actions he advocates as principle-first are about rewarding people for upholding principles or holding people accountable for violating them.
How can a strategy for “principle-first” that does not deal with the questions of how to set incentives for people to uphold principles be a good strategy?
If you read the discussion on this page with regard to university groups not upholding principles, there are issues. Zach’s proposed strategy treats funding them, in the way they currently operate, as a good example of what he sees as principle-first, because:
This suggests that Zach sees the current training for facilitators already as working well and not as something that should be changed. Suggesting that just because EA groups prioritize a variety of causes they are principles-first seems to me like appropriating the term principle-first to talk about something that’s not about principles.
When it comes to the actual principles, not seeing integrity, honesty, and thinking about incentives as important key principles also feels like a bad choice. One lesson from the whole FTX saga would be that those principles are important and that’s not a lesson that Zach draws.
If you think this is a good strategy, what would a bad “principle-first” strategy look like? What could Zach have done worse?
I recommend rereading his post. I believe his use of the term makes sense.
I did read his post. The question is not whether the term makes sense but whether it’s a good strategy.
It’s not about getting people to act according to principles but about rebranding what previously would be called cause-neutral as principle-first and continuing to do the same thing CEA did in the past.
Sadly, cause-neutral was an even more confusing term, so this is an improvement by comparison. I also think that the two notions of principles-first are less disconnected than you think, but through somewhat indirect effects.
Even if the term is an improvement, why would changing out one term for another make you say “early signs have been promising”? Promising in the sense that he will come up with new terms, because the core problem of EA is not having the right labels to speak about what EAs are doing?
I would find a perspective on EA where its biggest problem is using the wrong labels to be quite a strange perspective.
A good post about a strategy that attempts to produce indirect effects would lay out the theory of change through which the indirect effects would be created.
I would suggest adopting a different method of interpretation, one more grounded in what was actually said. Anyway, I think it’s probably best that we leave this thread here.
How many people in total were tested? From the Interim report, it looks like only six people got tested, so I assume you’re referencing something else.
There were ~20 in round 2, and I’ve gotten reports of other people being inspired by the post to get tested themselves, which I estimate at least doubles that number.
Nice! I really like that you did that work and am in agreement that too many vegans in general (not just EA vegans) suck at managing their diet. Of the four former vegans whom I know or have known personally, all of them stopped because of health reasons (though not necessarily health reasons induced by being vegan).
That said, I don’t see round 1 or round 2 as being particularly strong evidence of anything. The sample sizes seem too small to draw much inference from. There are 7k+ people in the EA movement,[1] around 46% of whom are vegan or vegetarian. Two surveys, one of six people from Lightcone, another of 20 people (also from Lightcone?), just don’t have enough participants to make strong claims about ~3,000 people (see the rough margin-of-error sketch below). You say as much in the second post:
This seems at odds with what you claim in the podcast:[2]
Separately, it’s unclear to me how many people in the second survey actually are vegan / vegetarian rather than people with fatigue problems:
This was published back in 2021, so I expect the numbers to be even higher now.
A separate point/nitpick, this part of the transcript incorrectly attributes your words to Timothy:
see also: https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one
This post seems to be arguing that veganism involves trade-offs (I didn’t read through the comments). I don’t disagree with that claim[1] (and am grateful for you taking the time to write it up). The part I take issue with is the claim that the two surveys you conducted were strong evidence, which I don’t think they are.
Though I do lean towards thinking most people or even everyone should bite the bullet and accept the reduced health to spare the animals.
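To put a rough number on the sample-size point above: here is a minimal sketch (my own illustration, using a normal approximation at the worst case p = 0.5, and ignoring selection effects in who volunteered for testing) of the 95% margin of error on any proportion estimated from 6 or 20 respondents.

import math

# Rough 95% margin of error for a proportion estimated from n respondents.
# Normal approximation at worst-case p = 0.5; the finite-population correction
# for a pool of ~3,000 barely matters at these sample sizes, so it is omitted.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (6, 20):
    print(f"n = {n:>2}: about ±{margin_of_error(n):.0%}")
# n =  6: about ±40%
# n = 20: about ±22%

So even before arguing about who was surveyed, estimates from groups this small carry statistical uncertainty on the order of tens of percentage points, which is the sense in which the surveys can only be weak evidence about ~3,000 people.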
Comment cross-posted to the Effective Altruism Forum
Edit, 15 December 2024: I’m not sure why this comment has gotten so downvoted in only the couple of hours since I posted it, though I could guess why. I wrote this comment off the cuff, so I didn’t put as much effort into writing it as clearly or succinctly as I could, or maybe should, have. So I understand how it might read as a long, meandering nitpick of a few statements near the beginning of the podcast episode, without me having listened to the whole episode yet. On that reading, I call a bunch of ex-EAs naive idiots, the way Elizabeth referred to herself as at least formerly being a naive idiot, then say even future effective altruists will be proven to be idiots, and that those still propagating EA after so long, like Scott Alexander, might be the most naive and idiotic of all. To be clear, I also included myself, so this reading would also imply that I’m calling myself a naive idiot.
That’s not what I meant to say. I would downvote that comment too. I’m saying that
If what Elizabeth is saying about having been a naive idiot is true, then it would seem to follow that a lot of current and former effective altruists, including many rationalists, would also be naive idiots for similar reasons.
If that were the case, then it would be consistent with greater truth-seeking, and with criticizing others for not putting enough effort into truth-seeking with integrity with regard to EA, to point out to those hundreds of other people that they either were, at one point, or maybe still are, naive idiots.
If Elizabeth or whoever wouldn’t do that, not only because they consider it mean, but moreover because they wouldn’t think it true, then they should apply the same standards to themselves, and consider that they, too, were not, in fact, just naive idiots.
I’m disputing the “naive idiocy” hypothesis here as spurious, as it comes down to the question of
whether someone like Tim—and, by extension, someone like me in the same position, who has also mulled over quitting EA—is still being a naive idiot, on account of not having updated yet to the conclusion Elizabeth has already reached.
That’s important because it seems to be one of the major cruxes of whether someone like Tim, or me, would update and choose to quit EA entirely, which is the point of this dialogue. If that’s not a true crux of disagreement here, then speculating about whether hundreds of current and former effective altruists have been naive idiots is a waste of time.
I’ve begun listening to this podcast episode. Only a few minutes in, I feel a need to clarify a point of contention over some of what Elizabeth said:
She also mentioned that she considers herself to have caused harm by propagating EA. It seems like she might be being too hard on herself. While she might consider being that hard on herself to be appropriate, the problem could be what her conviction implies. There are clearly still some individual, long-time effective altruists she respects, like Tim, even if she’s done engaging with the EA community as a whole. If that weren’t true, I doubt this podcast would’ve been launched in the first place. Having been so heavily involved in the EA community for so long, and still being so involved in the rationality community, she may know hundreds of people, friends, who either still are effective altruists now, or used to be. Regarding the sort of harm caused by EA propagating itself as a movement, she provides this as a main example.
Hearing that made me think about a criticism of the organization of EA groups for university students made last year by Dave Banerjee, former president of the student EA club at Columbia University. His was one of the most upvoted criticisms of such groups, and of how they’re managed, ever posted to the EA Forum. While Dave apparently reached what are presumably some of the same conclusions as Elizabeth about the problems with evangelical university EA groups, he did so with a much quicker turnaround: he made such a major update while still a university student, whereas it took her several years. I don’t mention that to imply that she was necessarily more naive and/or idiotic than he was. From another angle, given that he was propagating a much bigger EA club than Elizabeth ever did, at a time when EA was being driven to grow much faster than when Elizabeth was more involved with EA movement/community building, Dave could easily have been responsible for causing more harm. Therefore, perhaps he has been an even more naive idiot than she ever was.
I’ve known other university students, formerly effective altruists helping build student EA clubs, who quit because they also felt betrayed by EA as a community. Given that EA won’t be changing overnight, in spite of whoever considers it imperative that some of its movement-building activities stop, there will be teenagers in the coming months who come through EA with a similar experience. They are teenagers who may be chewed up and spat out, feeling ashamed of their complicity in causing harm by propagating EA as well. They may not have even graduated high school yet, and within a year or two, they may also be(come) those effective altruists, then former effective altruists, whom Elizabeth anticipates she would call naive idiots. Yet those are the very young people Elizabeth would seek to protect from harm by warning them off joining EA in the first place. It’s not evident that there’s any discrete point at which they cease being those who should heed her warning, and instead become naive idiots to chastise.
Elizabeth also mentions how she was introduced to EA in the first place.
A year ago, Scott Alexander wrote a post entitled In Continued Defense of Effective Altruism. While I’m aware he made some later posts responding to criticisms of that one, I’m guessing he hasn’t abandoned its thesis in its entirety. Meanwhile, as the author of one of the most popular blogs associated with either the rationality or EA communities, if not the most popular, Scott Alexander may still be drawing more people into the EA community than almost any other writer, one way or another. If that means he may be causing more harm by propagating EA than almost any other rationalist still supportive of EA, then, at least in that particular way Elizabeth has in mind, Scott may right now continue to be one of the most naive idiots in the rationality community. The same may be true of so many effective altruists Elizabeth got to know in Seattle.
A popular refrain among rationalists, as I understand it, is: speak truth, even if your voice trembles. Never mind the internet: Elizabeth could literally go meet hundreds of effective altruists or rationalists she has known in the Bay Area and Seattle, and tell them that for years they, too, were naive idiots, or that they’re still being naive idiots. Doing so could be how Elizabeth prevents them from causing harm. In not being willing to say so, she may counterfactually be causing much more harm, by saying and doing much less to stop EA from propagating than she knows she could.
Whether it be Scott Alexander, or so many of her friends who have been or still are in EA, or those who’ve helped propagate university student groups like Dave Banerjee, or those young adults who will come and go through EA university groups by the year 2026, there are hundreds of people Elizabeth should be willing to call, to their faces, naive idiots. It’s not a matter of whether she, or anyone, expects that’d work as some sort of convincing argument. That’s the sort of perhaps cynical and dishonest calculation she, and others, rightly criticize in EA. She should tell all of them that, if she believes it, even if her voice trembles. If she doesn’t believe that, that merits an explanation of how she considers herself to have been a naive idiot, but so many of them to not have been. If she can’t convincingly justify, not just to herself, but others, why she was exceptional in her naive idiocy, then perhaps she should reconsider her belief that even she was a naive idiot.
In my opinion, neither she nor so many other former effective altruists were just naive idiots. Whatever mistakes they made, epistemically or practically, I doubt the explanation is that simple. The operationalization of “naive idiocy” here doesn’t seem like a decently measurable function of, say, how long it took someone to realize just how much harm they were causing by propagating EA, and how much harm they did cause in that period of time. “Naive idiocy” here doesn’t seem to be all that coherent an explanation for why so many effective altruists got so much so wrong for so long.
I suspect there’s a deeper crux of disagreement here, one that hasn’t been pinpointed yet by Elizabeth or Tim. It’s one I might be able to discern if I put in the effort, though I don’t yet have a sense of what it might be either. I’m in a position to try, given that I still consider myself an effective altruist, though I ceased to be an EA group organizer last year too, on account of not being confident in helping grow the EA movement further, even if I’ve continued participating in it for what I consider its redeeming qualities.
If someone doesn’t want to keep trying to change EA for the better, and instead opts to criticize it to steer others away from it, it may not be true that they were just naive idiots before. If they can’t substantiate their former naive idiocy, then referring to themselves as having only been naive idiots, and by extension implying that so many others they’ve known still are or were naive idiots too, is neither true nor useful. In that case, if Elizabeth still considers herself to have been a naive idiot, that isn’t helpful, and maybe it is also a matter of her, truly, being too hard on herself. If you’re someone who has felt similarly, but you couldn’t bring yourself to call so many friends you made in EA a bunch of naive idiots to their faces because you’d consider that false or too hard on them, maybe you’re being too hard on yourself too. Whatever you want to see happen with EA, being too hard on ourselves like that isn’t helpful to anyone.