I believe there are ways to recruit college students responsibly. I don’t believe the way EA is doing it really has a chance to be responsible. The way EA is doing it can’t filter and inform the way healthy recruiting needs to, and it funnels people into something where naivete hurts you. I think aggressive recruiting is bad for both the students and for EA itself.
Enjoyed this point—I would guess that the feedback loop from EA college recruiting is super long and is weakly aligned. Those in charge of setting recruiting strategy (eg CEA Groups team, and then university organizers) don’t see the downstream impacts of their choices, unlike in a startup where you work directly with your hires, and quickly see whether your choices were good or bad.
Might be worth examining how other recruiting-driven companies (like Google) or movements (...early Christianity?) maintain their values, or degrade over time.
Seattle EA watched a couple of the animal farming suffering documentaries, and everyone was of course horrified. But not everyone was ready to just jump on “let’s give this up entirely forever.” So we started doing more research, and I posted about a farm a couple hours away that did live tours, and that seemed like a reasonable thing to learn from: limited but useful.
Definitely think that on the margin, more “directly verifying base reality with your own eyes” would be good in EA circles. Eg at one point, I was very critical of those mission trips to Africa where high schoolers spend a week digging a well; “obviously you should just send cash!” But now I’m much more sympathetic.
This also stings a bit for Manifund; like 80% of what we fund is AI safety but I don’t really have much ability to personally verify that the stuff we funded is any good.
The natural life cycle of movements and institutions is to get captured and become pretty undifferentiated from other movements in their larger cultural context. They just get normal, because normal is there for a reason and normal is easiest. And if you want to do better than that, if you want to keep high epistemics (because normal does not prioritize epistemics), you need to be actively fighting for it, and bringing a high amount of skill to it. I can’t tell you if EA is degrading at like 5 percent a year or 25 percent a year, but I can tell you that it is not self-correcting enough to escape this trap.
I think not enforcing an “in or out” boundary is a big contributor to this degradation—like, majorly successful religions required all kinds of sacrifice.
What I think is more likely than EA pivoting is that a handful of people launch a lifeboat and recreate a high-integrity version of EA.
It feels like AI safety is the best current candidate for this, though that is also much less cohesive and not a direct successor in a bunch of ways. I too have been lately wondering what “Post EA” looks like.
I hear that as every true wizard must test the integrity of their teacher or of their school, Hogwarts, whatever the thing is. The reason you don’t get to graduate until you actually test the integrity of the school is because if you’re just taking it on its own word, then you could become a villain.
You have to respect your own moral compass to be able to be trusted.
Really liked this analogy!
Which EA leaders do you most resonate with?
I would suggest that if you don’t care about the movement leaders who have any steering power, you’re not in that movement.
I like this as a useful question to keep in mind, though I don’t think it’s totally explanatory. I think I’m reasonably Catholic, even though I don’t know anything about the living Catholic leaders.
Timothy: Give me a vision of a different world where EA would be better served by having leadership that actually was willing to own their power more.
Elizabeth: Which you’ll notice even Holden won’t do.
Timothy: yeah, he literally doesn’t want the power.
Elizabeth: Yeah, none of them do. CEA doesn’t want it.
I’ve been thinking that EA should try to elect a president, someone who is empowered but also accountable to the general people in the movement, a Schelling person to be the face of EA. (Plus, of course, we’d get to debate stuff like optimal voting systems and enfranchisement—my kind of catnip.)
don’t see the downstream impacts of their choices,
This could be part of it… but I think a hypothesis that does have to be kept in mind is that some people don’t care. They aren’t trying to follow action-policies that lead to good outcomes, they’re doing something else. Primarily, acting on an addiction to Steam. If a recruitment strategy works, that’s a justification in and of itself, full stop. EA is good because it has power, more people in EA means more power to EA, therefore more people in EA is good. Given a choice between recruiting 2 agents and turning them both into zombies, vs recruiting 1 agent and keeping them an agent, you of course choose the first one--2 is more than 1.
Mm, I’m extremely skeptical that the inner experience of an EA college organizer or CEA Groups team member is usefully modeled as “I want recruits at all costs”. I predict that if you talked to one and asked them about it, you’d find the same.
I do think that it’s easy to accidentally goodhart or be unreflective about the outcomes of pursuing a particular policy—but I’d encourage y’all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.
I haven’t grokked the notion of “an addiction to steam” yet, so I’m not sure whether I agree with that account, but I have a feeling that when you write “I’d encourage y’all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned” you are papering over real values differences.
Tons of EAs will tell you that honesty and integrity and truth-seeking are of course ‘important’, but if you observe their behavior they’ll trade them off pretty harshly with PR concerns or QALYs bought or plan-changes. I think there’s a difference in the culture and values between (on one hand) people around rationalist circles who worry a lot about how to give honest answers to things like ‘How are you doing today?’, who hold themselves to the standards of intent to inform rather than simply whether they out and out lied, who will show up and have long arguments with people who have moral critiques of them, and (on the other hand) most of the people in the EA culture and positions of power who don’t do this, and so the latter can much more easily deceive and take advantage of people by funneling them into career paths which basically boil down to ‘devoting yourself to whatever whoever is powerful in EA thinks is a maybe-good idea this month’. Paths that people wouldn’t go down if they candidly were told up front what was going on.
I think it’s fair to say that many/most EAs (including those involved with student groups) don’t care about integrity and truth-seeking things very much, or at least not enough to bend them off the path of reward and momentum by the standards of the EA ideology / EA leaders & grantmakers when the path is going wrong, and I think this is a key reason why EA student groups are able to be like ponzi schemes. ‘Well-intentioned’ does not get you ‘has good values’ and it is not a moral defense of ponzi schemes to argue that everyone involved was “kind and well-intentioned”.
I would guess that the feedback loop from EA college recruiting is super long and is weakly aligned. Those in charge of setting recruiting strategy (eg CEA Groups team, and then university organizers) don’t see the downstream impacts of their choices, unlike in a startup where you work directly with your hires, and quickly see whether your choices were good or bad.
I agree it is hard to get feedback, but this doesn’t mean one cannot have good standards. A ton of my work involves maintaining boundaries where I’m not quite sure what the concrete outputs will look like. I kind of think this is one of the main things people are talking about when we talk about values—what heuristics do you operate by in the world, most of the time, when you’re mostly not going to get feedback?
- there are real value differences between EA folks and rationalists
- good intentions do not substitute for good outcomes

However:

- I don’t think differences in values explain much of the differences in results—sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
- I’m pushing back against Tsvi’s claims that “some people don’t care” or “EA recruiters would consciously choose 2 zombies over 1 agent”—I think ascribing bad intentions to individuals ends up pretty mindkilly

Basically, insofar as EA is screwed up, it’s mostly caused by bad systems, not bad people, as far as I can tell.
Basically, insofar as EA is screwed up, it’s mostly caused by bad systems, not bad people, as far as I can tell.
Insofar as you’re thinking I said bad people, please don’t let yourself make that mistake, I said bad values.
There are occasional bad people like SBF but that’s not what I’m talking about here. I’m talking about a lot of perfectly kind people who don’t hold the values of integrity and truth-seeking as part of who they are, and who couldn’t give a good account for why many rationalists value those things so much (and might well call rationalists weird and autistic if you asked them to try).
I don’t think differences in values explain much of the differences in results—sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
This is a crux. I acknowledge I probably share more values with a random EA than a random university student, but I don’t think that’s actually saying that much, and I believe there’s a lot of massively impactful difference in culture and values.
I’m pushing back against Tsvi’s claims that “some people don’t care” or “EA recruiters would consciously choose 2 zombies over 1 agent”—I think ascribing bad intentions to individuals ends up pretty mindkilly
I think EA recruiters have repeatedly made decisions like choosing 2 zombies over 1 agent, and were I or Tsvi to look at the same set of options and information we would have made a different decision, because we’ve learned to care about candor and wholesomeness and respecting other people’s sense-making and a bunch of other things. I don’t think this makes them bad people. Having good values takes a lot of work by a lot of people to encapsulate and teach them, a good person should not be expected to re-derive an entire culture for themselves, and I think most of the world does not teach all of the values I care about to people by the age of 18, like lightness and argument and empiricism and integrity and courage and more. They don’t care about a number of the values that I hold, and as a result will make decisions counter to those values.
This is a crux. I acknowledge I probably share more values with a random EA than a random university student, but I don’t think that’s actually saying that much, and I believe there’s a lot of massively impactful difference in culture and values.
My best guess is something like a third of rationalists are also EAs, at least going by identification. (I’m being lazy for the moment and not cross-checking “Identifies as Rationalist” against “Identifies as EA”, but I can if you want me to, and I’m like 85% sure the less-lazy check will bear that out.) My educated but irresponsible guess is something like 10% of EAs are rationalists. Last time I did a straw poll at an ACX meetup, more than half the people attending were also EAs. Whatever the differences are, they’re not stopping a substantial overlap in membership, and I don’t think that’s just at the level of random members but includes a lot of the notable members.
I’d be pretty open to a definition of ‘rationalist’ that was about more than self-identification, but to my knowledge we don’t have a workable definition better than that. It’s plausible to me that the differences matter as you lean on them a lot, but I think it’s more likely the two groups are aligned for most purposes.
Thanks for the data! I agree there’s a fair bit of overlap in clusters of people.
Two points:
I am talking about the cultural values more than simply the individuals. I think a person’s environment really brings very different things out of them. The same person(s) working at Amazon, DC politics, and a global-health non-profit, will get invited to live out different values and build quite different identities for themselves. The same person in-person and on Twitter can also behave as quite different people. I think LessWrong has a distinct culture from the EA Forum, and I think EAG has a distinct culture from ACX meetups.
Not every person in a scene strongly embodies the ideals and aspirations of that scene. There are many people who come to rationalist meetups whom I have yet to get on the same page with about lots of values; e.g. I still somewhat regularly have to give arguments against various reasons people sometimes endorse self-deception, even to folks who have been around for many years. The ideals of EA and LW are different.
So even though the two scenes have overlap in people, I still think the scenes live out and aspire to different values and different cultures, and this explains a lot of difference in outcomes.
Insofar as you’re thinking I said bad people, please don’t let yourself make that mistake, I said bad values.
I appreciate you drawing the distinction! The bit about “bad people” was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.
There’s a lot of massively impactful difference in culture and values
Mm, I think if the question is “what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements” I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.
(As I write this, I realize that maybe even caring about adherents/reputation/influence/achievement in the first place is an impact-based, EA-frame, and the thing that Ben cares about is more like “what accounts for the differences in their philosophies or gestalt of what it feels like to be in the movement”; I feel like I’m lowkey failing an ITT here...)
Mm, I think if the question is “what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements” I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.
I can think about that question if it seems relevant, but the initial claim of Elizabeth’s was “I believe there are ways to recruit college students responsibly. I don’t believe the way EA is doing it really has a chance to be responsible”. So I was trying to give an account of the root cause there.
Also — and I recognize that I’m saying something relatively trivial here — the root cause of a problem in a system can of course be any seemingly minor part of it. Just because I’m saying one part of the system is causing problems (the culture’s values) doesn’t mean I’m saying that’s what’s primarily responsible for the output. The current cause of a software company’s current problems might be the slow speed with which PR reviews are happening, but this shouldn’t be mistaken for the claim that the credit allocation for the company’s success is primarily that it can do PR reviews fast.
So to repeat, I’m saying that IMO the root cause of irresponsible movement growth and ponzi-scheme-like recruitment strategies was a lack of IMO very important values like dialogue and candor and respecting other people’s sense-making and courage and so on, rather than an explanation more like ‘those doing recruitment had poor feedback loops so had a hard time knowing what tradeoffs to make’ (my paraphrase of your suggestion).
I would have to think harder about which specific values I believe caused this particular issue, but that’s my broad point.
Ben’s responses largely cover what I would have wanted to say. But on a meta note: I wrote specifically
I think a hypothesis that does have to be kept in mind is that some people don’t care.
I do also think the hypothesis is true (and it’s reasonable for this thread to discuss that claim, of course).
But the reason I said it that way, is that it’s a relatively hard hypothesis to evaluate. You’d probably have to have several long conversations with several different people, in which you successfully listen intensely to who they are / what they’re thinking / how they’re processing what you say. Probably only then could you even have a chance at reasonably concluding something like “they actually don’t care about X”, as distinct from “they know something that implies X isn’t so important here” or “they just don’t get that I’m talking about X” or “they do care about X but I wasn’t hearing how” or “they’re defensive in this moment, but will update later” or “they just hadn’t heard why X is important (but would be open to learning that)”, etc.
I agree that it’s a potentially mindkilly hypothesis. And because it’s hard to evaluate, the implicature of assertions about it is awkward—I wanted to acknowledge that it would be difficult to find a consensus belief state, and I wanted to avoid implying that the assertion is something we ought to be able to come to consensus about right now. And, more simply, it would take substantial work to explain the evidence for the hypothesis being true (in large part because I’d have to sort out my thoughts). For these reasons, my implied request is less like “let’s evaluate this hypothesis right now”, and more like “would you please file this hypothesis away in your head, and then if you’re in a long conversation, on the relevant topic with someone in the relevant category, maybe try holding up the hypothesis next to your observations and seeing if it explains things or not”.
In other words, it’s a request for more data and a request for someone to think through the hypothesis more. It’s far from perfectly neutral—if someone follows that request, they are spending their own computational resources and thereby extending some credit to me and/or to the hypothesis.
The problem is that even small differences in values can have massive differences in outcomes when the difference is caring about truth while keeping the other values similar. As Elizabeth wrote, truthseeking is the ground in which other principles grow.
Was there ever a time where CEA was focusing on truth-alignment?
It doesn’t seem to me like “they used to be truth-aligned and then did recruiting in a way that caused a value shift” is a good explanation of what happened. They always optimized for PR instead of optimizing for truth-alignment.
It’s been quite a while since they edited Leverage Research out of the photos they published on their website, but the kind of organization where people consider it reasonable to edit photos that way is far from truth-aligned.
Edit:
Julia Wise messaged me and made me aware that I confused CEA with the other CEA. The photo incident happened on the 80,000 Hours website, and the page talks about promoting CEA events like EA Global and the local EA groups that CEA supports (at the time, 80,000 Hours was part of the CEA that’s now called EV). I don’t think this makes CEA completely innocent here, because they should see to it that people who promote their events under the banner of their organization name behave ethically, but it does give a valid explanation for why this wouldn’t be central to CEA’s mistakes page, and why they want to focus that page on mistakes made by direct employees of the entity that’s now called CEA.
I think not enforcing an “in or out” boundary is big contributor to this degradation—like, majorly successful religions required all kinds of sacrifice.
I think I’m reasonably Catholic, even though I don’t know anything about the living Catholic leaders.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn’t have a leader they trust and respect, because Catholicism has a longer tradition, and you can work within that. On the other hand… I wouldn’t say this to most people, but my model is you’d prefer I be this blunt… my understanding is Catholicism is about submission to the hierarchy, and if you’re not doing that or don’t actively believe they are worthy of that, you’re LARPing. I don’t think this is true of (most?) protestant denominations: working from books and a direct line to God is their jam. But Catholicism cares much more about authority and authorization.
It feels like AI safety is the best current candidate for [lifeboat], though that is also much less cohesive and not a direct successor for a bunch of ways. I too have been lately wondering what “Post EA” looks like.
I’d love for this to be true because I think AIS is EA’s most important topic. OTOH, I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the ponzi scheme recruiting is all aimed at AIS and meta (which is more AIS). ETG was a much more viable role for GD than for AIS.
I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the ponzi scheme recruiting is all aimed at AIS and meta
I agree, am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it’s not just ok but good for people to (with a nod to Ajeya) “get off the crazy train” at different points along the EA journey. We currently have too many people taking it all the way into AI town. I again don’t know what to do to fix it.
I think it’s good to want to have moderating impulses on people doing extreme things to fit in. But insofar as you’re saying that believing ‘AI is an existential threat to our civilization’ is ‘crazy town’, I don’t really know what to say. I don’t believe it’s crazy town, and I don’t think that thinking it’s crazy town is a reasonable position. Civilization is investing billions of dollars into growing AI systems that we don’t understand and they’re getting more capable by the month. They talk and beat us at Go and speed up our code significantly. This is just the start, companies are raising massive amounts of money to scale these systems.
I worry you’re caught up worrying what people might’ve thought about you thinking that ten years ago. Not only is this idea now well within the Overton window, my sense is that people saying it’s ‘crazy town’ either haven’t engaged with the arguments (e.g.) or are somehow throwing their own ability to do basic reasoning out the window.
Added: I recognize it’s rude to suggest any psychologizing here but I read the thing you wrote as saying that the thing I expect to kill me and everyone I love doesn’t exist and I’m crazy for thinking it, and so I’m naturally a bit scared by you asserting it as though it’s the default and correct position.
(Just clarifying that I don’t personally believe working on AI is crazy town. I’m quoting a thing that made an impact on me awhile back and I still think is relevant culturally for the EA movement.)
I think feedback loops are good, but how is that incompatible with taking AI seriously? At this point, even if you want to work on things with tighter feedback loops, AI seems like the central game in town (probably by developing technology that leverages it, while thinking carefully about the indirect effects of that, or at the very least, by being in touch with how it will affect whatever other problem you are trying to solve, since it will probably affect all of them).
This is a good point. In my ideal movement, it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don’t want to be closer, and that I don’t respect them), while being the sort of thing that requires leaders.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn’t have a leader they trust and respect, because Catholicism has a longer tradition
As an additional comment, few organizations have splintered more publicly than Catholicism; it seems sort of surreal to me to not check whether or not you ended up on the right side of the splintering. [This is probably more about theological questions than it is about leadership, but as you say, the leadership is relevant!]
I think I’m reasonably Catholic, even though I don’t know anything about the living Catholic leaders.
This might be a bit off-topic, but I’m very confused by this. I was raised Catholic, and the Wikipedia description matches my understanding of Catholicism (compared to other Christian denominations).
Do you not know who the living Pope is, while still believing he’s the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church?
Or do you disagree with the Wikipedia and the Catholic Church definitions of the core beliefs of Catholicism?
Definitely think that on the margin, more “directly verifying base reality with your own eyes” would be good in EA circles. Eg at one point, I was very critical of those mission trips to Africa where high schoolers spend a week digging a well; “obviously you should just send cash!” But now I’m much more sympathetic.
I’m confused by this as well. All the people I know who worked on those trips (either as an organiser or as a volunteer) don’t think it helped their epistemics at all, compared to e.g. reading the literature on development economics. I definitely think on the ground experience is extremely valuable (see this recent comment and this classic post) but I think watching vegan documentaries, visiting farms, and doing voluntourism are all bad ways to improve the accuracy of your map of actual reality.
Do you not know who the living Pope is, while still believing he’s the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church?
I understand that the current pope is Pope Francis, but I know much much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky, compared to the pope. I don’t feel this makes me not Catholic; I continue to go to church every Sunday, live my life (mostly) in accordance with Catholic teaching, etc. Similarly, I can’t name my senator or representative and barely know what Biden stands for, but I think I’m reasonably American.
All the people I know who worked on those trips (either as an organiser or as a volunteer) don’t think it helped their epistemics at all, compared to e.g. reading the literature on development economics.
I went on one of those trips as a middle schooler (to Mexico, not Africa). I don’t know that it helped my epistemics much, but I did get like, a visceral experience of what the life of someone in a third-world country would be like, that I wouldn’t have gotten otherwise and no amount of research literature reading would replicate.
I don’t literally think that every EA should book plane tickets to Africa, or break into a factory farm, or whatnot. (though: I would love to see some folks try this!) I do think there’s an overreliance on consuming research and data, and an underreliance on just doing things and having reality give you feedback.
I understand that the current pope is Pope Francis, but I know much much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky, compared to the pope.
That makes sense, thanks. I would say that compared to Catholicism, in EA you have much less reason to care about the movement leaders, as them having authority to rule over EA is not part of its beliefs.
I don’t literally think that every EA should book plane tickets to Africa, or break into a factory farm, or whatnot. (though: I would love to see some folks try this!)
For what it’s worth, I’ve talked with several people I’ve met through EA who regularly “break” into factory farms[1] or who regularly work in developing countries.
It’s definitely possible that it should be more, but I would claim that the percentage of people doing this is much higher than baseline among people who know about EA, and I think it can have downsides for the reasons mentioned in ‘Against Empathy.’
They claim that they enter without any breaking. I can’t verify that claim, but I can verify that they have videos of themselves inside factory farms.
I’ve been thinking that EA should try to elect a president, someone who is empowered but also accountable to the general people in the movement, a schelling person to be the face of EA.
Counterargument: I think there are enough different streams of EA that this would not be especially helpful.
There exists a president of GiveWell. There exists a president of 80k Hours. There exists a president of Open Philanthropy. Those three organizations seem pretty close to each other, and there’s a lot of others further afield. I think there would be a lot of debating, some of it acrimonious, about who counted as ‘in the movement’ enough to vote on a president of EA, and it would be easy to wind up with a president that nobody with a big mailing list or a pile of money actually had to listen to.
Some notes from the transcript:
Enjoyed this point—I would guess that the feedback loop from EA college recruiting is super long and is weakly aligned. Those in charge of setting recruiting strategy (eg CEA Groups team, and then university organizers) don’t see the downstream impacts of their choices, unlike in a startup where you work directly with your hires, and quickly see whether your choices were good or bad.
Might be worth examining how other recruiting-driven companies (like Google) or movements (...early Christianity?) maintain their values, or degrade over time.
Definitely think that on the margin, more “directly verifying base reality with your own eyes” would be good in EA circles. Eg at one point, I was very critical of those mission trips to Africa where high schoolers spend a week digging a well; “obviously you should just send cash!” But now I’m much more sympathetic.
This also stings a bit for Manifund; like 80% of what we fund is AI safety but I don’t really have much ability to personally verify that the stuff we funded is any good.
I think not enforcing an “in or out” boundary is big contributor to this degradation—like, majorly successful religions required all kinds of sacrifice and
It feels like AI safety is the best current candidate for this, though that is also much less cohesive and not a direct successor for a bunch of ways. I too have been lately wondering what “Post EA” looks like.
Really liked this analogy!
I like this as a useful question to keep in mind, though I don’t think it’s totally explanatory. I think I’m reasonably Catholic, even though I don’t know anything about the living Catholic leaders.
I’ve been thinking that EA should try to elect a president, someone who is empowered but also accountable to the general people in the movement, a schelling person to be the face of EA. (plus of course, we’d get to debate stuff like optimal voting systems and enfranchisement—my kind of catnip)
This could be part of it… but I think a hypothesis that does have to be kept in mind is that some people don’t care. They aren’t trying to follow action-policies that lead to good outcomes, they’re doing something else. Primarily, acting on an addiction to Steam. If a recruitment strategy works, that’s a justification in and of itself, full stop. EA is good because it has power, more people in EA means more power to EA, therefore more people in EA is good. Given a choice between recruiting 2 agents and turning them both into zombies, vs recruiting 1 agent and keeping them an agent, you of course choose the first one--2 is more than 1.
Mm I’m extremely skeptical that the inner experience of an EA college organizer or a member of the CEA Groups team is usefully modeled as “I want recruits at all costs”. I predict that if you talked to one and asked them about it, you’d find the same.
I do think that it’s easy to accidentally goodhart or be unreflective about the outcomes of pursuing a particular policy—but I’d encourage y’all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.
I haven’t grokked the notion of “an addiction to steam” yet, so I’m not sure whether I agree with that account, but I have a feeling that when you write “I’d encourage y’all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned” you are papering over real values differences.
Tons of EAs will tell you that honesty and integrity and truth-seeking are of course ‘important’, but if you observe their behavior they’ll trade them off pretty harshly with PR concerns or QALYs bought or plan-changes. I think there’s a difference in the culture and values between (on one hand) people around rationalist circles who worry a lot about how to give honest answers to things like ‘How are you doing today?‘, who hold themselves to the standards of intent to inform rather than simply whether they out and out lied, who will show up and have long arguments with people who have moral critiques of them, and (on the other hand) most of the people in the EA culture and positions of power who don’t do this, and so the latter can much more easily deceive and take advantage of people by funneling them into career paths which basically boil down to ‘devoting yourself to whatever whoever is powerful in EA thinks is a maybe-good idea this month’. Paths that people wouldn’t go down if they candidly were told up front what was going on.
I think it’s fair to say that many/most EAs (including those involved with student groups) don’t care about integrity and truth-seeking very much, or at least not enough to bend off the path of reward and momentum by the standards of the EA ideology / EA leaders & grantmakers when the path is going wrong, and I think this is a key reason why EA student groups are able to be like ponzi schemes. ‘Well-intentioned’ does not get you ‘has good values’, and it is not a moral defense of ponzi schemes to argue that everyone involved was “kind and well-intentioned”.
I agree it is hard to get feedback, but this doesn’t mean one cannot have good standards. A ton of my work involves maintaining boundaries where I’m not quite sure what the concrete outputs will look like. I kind of think this is one of the main things people are talking about when we talk about values — what heuristics do you operate by in the world for most of the time when you’re mostly not going to get feedback?
Mm I basically agree that:
there are real value differences between EA folks and rationalists
good intentions do not substitute for good outcomes
However:
I don’t think differences in values explain much of the differences in results—sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
I’m pushing back against Tsvi’s claims that “some people don’t care” or “EA recruiters would consciously choose 2 zombies over 1 agent”—I think ascribing bad intentions to individuals ends up pretty mindkilly
Basically insofar as EA is screwed up, it’s mostly caused by bad systems, not bad people, as far as I can tell.
Insofar as you’re thinking I said bad people, please don’t let yourself make that mistake, I said bad values.
There are occasional bad people like SBF but that’s not what I’m talking about here. I’m talking about a lot of perfectly kind people who don’t hold the values of integrity and truth-seeking as part of who they are, and who couldn’t give a good account for why many rationalists value those things so much (and might well call rationalists weird and autistic if you asked them to try).
This is a crux. I acknowledge I probably share more values with a random EA than a random university student, but I don’t think that’s actually saying that much, and I believe there’s a lot of massively impactful difference in culture and values.
I think EA recruiters have repeatedly made decisions like choosing 2 zombies over 1 agent, and were I or Tsvi to look at the same set of options and information we would have made a different decision, because we’ve learned to care about candor and wholesomeness and respecting other people’s sense-making and a bunch of other things. I don’t think this makes them bad people. Having good values takes a lot of work by a lot of people to encapsulate and teach them, a good person should not be expected to re-derive an entire culture for themselves, and I think most of the world does not teach all of the values I care about to people by the age of 18, like lightness and argument and empiricism and integrity and courage and more. They don’t care about a number of the values that I hold, and as a result will make decisions counter to those values.
My best guess is something like a third of rationalists are also EAs, at least going by identification. (I’m being lazy for the moment and not cross checking “Identifies as Rationalist” against “Identifies as EA” but I can if you want me to and I’m like 85% sure the less-lazy check will bear that out.) My educated but irresponsible guess is something like 10% of EAs are rationalists. Last time I did a straw poll at an ACX meetup, more than half the people attending were also EAs. Whatever the differences are, it’s not stopping a substantial overlap on membership, and I don’t think that’s just at the level of random members but includes a lot of the notable members.
I’d be pretty open to a definition of ‘rationalist’ that was about more than self-identification, but to my knowledge we don’t have a workable definition better than that. It’s plausible to me that the differences matter as you lean on them a lot, but I think it’s more likely the two groups are aligned for most purposes.
Thanks for the data! I agree there’s a fair bit of overlap in clusters of people.
Two points:
I am talking about the cultural values more than simply the individuals. I think a person’s environment really brings very different things out of them. The same person(s) working at Amazon, DC politics, and a global-health non-profit, will get invited to live out different values and build quite different identities for themselves. The same person in-person and on Twitter can also behave as quite different people. I think LessWrong has a distinct culture from the EA Forum, and I think EAG has a distinct culture from ACX meetups.
Not every person in a scene strongly embodies the ideals and aspirations of that scene. There are many people who come to rationalist meetups with whom I have yet to get on the same page about lots of values; e.g. I still somewhat regularly have to give arguments against various reasons people sometimes endorse self-deception, even to folks who have been around for many years. The ideals of EA and LW are different.
So even though the two scenes have overlap in people, I still think the scenes live out and aspire to different values and different cultures, and this explains a lot of difference in outcomes.
I appreciate you drawing the distinction! The bit about “bad people” was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.
Mm, I think if the question is “what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements” I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.
(As I write this, I realize that maybe even caring about adherents/reputation/influence/achievement in the first place is an impact-based, EA-frame, and the thing that Ben cares about is more like “what accounts for the differences in their philosophies or gestalt of what it feels like to be in the movement”; I feel like I’m lowkey failing an ITT here...)
I can think about that question if it seems relevant, but the initial claim of Elizabeth’s was “I believe there are ways to recruit college students responsibly. I don’t believe the way EA is doing it really has a chance to be responsible”. So I was trying to give an account of the root cause there.
Also — and I recognize that I’m saying something relatively trivial here — the root cause of a problem in a system can of course be any seemingly minor part of it. Just because I’m saying one part of the system is causing problems (the culture’s values) doesn’t mean I’m saying that’s what’s primarily responsible for the output. The current cause of a software company’s current problems might be the slow speed with which PR reviews are happening, but this shouldn’t be mistaken for the claim that the credit allocation for the company’s success is primarily that it can do PR reviews fast.
So to repeat, I’m saying that IMO the root cause of irresponsible movement growth and ponzi-scheme-like recruitment strategies was a lack of IMO very important values like dialogue and candor and respecting other people’s sense-making and courage and so on, rather than an explanation more like ‘those doing recruitment had poor feedback loops so had a hard time knowing what tradeoffs to make’ (my paraphrase of your suggestion).
I would have to think harder about which specific values I believe caused this particular issue, but that’s my broad point.
Ben’s responses largely cover what I would have wanted to say. But on a meta note: I specifically wrote that this is “a hypothesis that does have to be kept in mind”.
I do also think the hypothesis is true (and it’s reasonable for this thread to discuss that claim, of course).
But the reason I said it that way, is that it’s a relatively hard hypothesis to evaluate. You’d probably have to have several long conversations with several different people, in which you successfully listen intensely to who they are / what they’re thinking / how they’re processing what you say. Probably only then could you even have a chance at reasonably concluding something like “they actually don’t care about X”, as distinct from “they know something that implies X isn’t so important here” or “they just don’t get that I’m talking about X” or “they do care about X but I wasn’t hearing how” or “they’re defensive in this moment, but will update later” or “they just hadn’t heard why X is important (but would be open to learning that)”, etc.
I agree that it’s a potentially mindkilly hypothesis. And because it’s hard to evaluate, the implicature of assertions about it is awkward—I wanted to acknowledge that it would be difficult to find a consensus belief state, and I wanted to avoid implying that the assertion is something we ought to be able to come to consensus about right now. And, more simply, it would take substantial work to explain the evidence for the hypothesis being true (in large part because I’d have to sort out my thoughts). For these reasons, my implied request is less like “let’s evaluate this hypothesis right now”, and more like “would you please file this hypothesis away in your head, and then if you’re in a long conversation, on the relevant topic with someone in the relevant category, maybe try holding up the hypothesis next to your observations and seeing if it explains things or not”.
In other words, it’s a request for more data and a request for someone to think through the hypothesis more. It’s far from perfectly neutral—if someone follows that request, they are spending their own computational resources and thereby extending some credit to me and/or to the hypothesis.
The problem is that even small differences in values can have massive differences in outcomes when the difference is caring about truth, while keeping the other values similar. As Elizabeth wrote, “Truthseeking is the ground in which other principles grow.”
Was there ever a time where CEA was focusing on truth-alignment?
It doesn’t seem to me like “they used to be truth-aligned and then did recruiting in a way that caused a value shift” is a good explanation of what happened. They always optimized for PR instead of optimizing for truth-alignment.
It’s been quite a while since they edited Leverage Research out of the photos they published on their website, but the kind of organization where people consider it reasonable to edit photos that way is far from truth-aligned.
Edit:
Julia Wise messaged me and made me aware that I confused CEA with the other CEA. The photo incident happened on the 80,000 hours website, and the page talks about promoting CEA events like EA Global and the local EA groups that CEA supports (at the time, 80,000 hours was part of the CEA that’s now called EV). While I don’t think this makes CEA completely innocent, because they should expect people who promote events under their organization’s banner to behave ethically, I do think it gives a valid explanation for why this wouldn’t be central to CEA’s mistakes page, and why they’d want to focus that page on mistakes made by direct employees of the entity that’s now called CEA.
I feel ambivalent about this. On one hand, yes, you need to have standards, and I think EA’s move towards big-tentism degraded it significantly. On the other hand, I think sharp inclusion functions are bad for people in a movement[1], cut the movement off from useful work done outside itself, select for people searching for validation and belonging, and select against thoughtful people with other options.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn’t have a leader they trust and respect, because Catholicism has a longer tradition, and you can work within that. On the other hand… I wouldn’t say this to most people, but my model is you’d prefer I be this blunt… my understanding is Catholicism is about submission to the hierarchy, and if you’re not doing that or don’t actively believe they are worthy of that, you’re LARPing. I don’t think this is true of (most?) protestant denominations: working from books and a direct line to God is their jam. But Catholicism cares much more about authority and authorization.
I’d love for this to be true because I think AIS is EA’s most important topic. OTOH, I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the ponzi scheme recruiting is all aimed at AIS and meta (which is more AIS). ETG was a much more viable role for GD than for AIS.
If you’re only as good as your last 3 months, no one can take time to rest and reflect, much less recover from burnout.
I agree, and am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it’s not just ok but good for people to (with a nod to Ajeya) “get off the crazy train” at different points along the EA journey. We currently have too many people taking it all the way into AI town. I again don’t know what to do to fix it.
I think it’s good to want to have moderating impulses on people doing extreme things to fit in. But insofar as you’re saying that believing ‘AI is an existential threat to our civilization’ is ‘crazy town’, I don’t really know what to say. I don’t believe it’s crazy town, and I don’t think that thinking it’s crazy town is a reasonable position. Civilization is investing billions of dollars into growing AI systems that we don’t understand and they’re getting more capable by the month. They talk and beat us at Go and speed up our code significantly. This is just the start, companies are raising massive amounts of money to scale these systems.
I worry you’re caught up worrying about what people might’ve thought of you for thinking that ten years ago. Not only is this idea now well within the overton window, my sense is that people saying it’s ‘crazy town’ either haven’t engaged with the arguments (e.g.) or are somehow throwing their own ability to do basic reasoning out of the window.
Added: I recognize it’s rude to suggest any psychologizing here but I read the thing you wrote as saying that the thing I expect to kill me and everyone I love doesn’t exist and I’m crazy for thinking it, and so I’m naturally a bit scared by you asserting it as though it’s the default and correct position.
(Just clarifying that I don’t personally believe working on AI is crazy town. I’m quoting a thing that made an impact on me a while back and I still think is relevant culturally for the EA movement.)
I reject the implication that AI town is the last stop on the crazy train.
I think feedback loops are good, but how is that incompatible with taking AI seriously? At this point, even if you want to work on things with tighter feedback loops, AI seems like the central game in town (probably by developing technology that leverages it, while thinking carefully about the indirect effects of that, or at the very least, by being in touch with how it will affect whatever other problem you are trying to solve, since it will probably affect all of them).
Catholic EA: You have a leader you trust and respect, and defer to their judgement.
Sola Fide EA: You read 80,000 Hours and GiveWell, but you keep your own spreadsheet of EV calculations.
This is a good point. In my ideal movement, it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don’t want to be leaders, and that I don’t respect them), while being the sort of thing that requires leaders.
As an additional comment, few organizations have splintered more publicly than Catholicism; it seems sort of surreal to me to not check whether or not you ended up on the right side of the splintering. [This is probably more about theological questions than it is about leadership, but as you say, the leadership is relevant!]
This might be a bit off-topic, but I’m very confused by this. I was raised Catholic, and the Wikipedia description matches my understanding of Catholicism (compared to other Christian denominations).
Do you not know who the living Pope is, while still believing he’s the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church?
Or do you disagree with the Wikipedia and the Catholic Church definitions of the core beliefs of Catholicism?
I’m confused by this as well. All the people I know who worked on those trips (either as an organiser or as a volunteer) don’t think it helped their epistemics at all, compared to e.g. reading the literature on development economics. I definitely think on the ground experience is extremely valuable (see this recent comment and this classic post) but I think watching vegan documentaries, visiting farms, and doing voluntourism are all bad ways to improve the accuracy of your map of actual reality.
I understand that the current pope is Pope Francis, but I know much much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky, compared to the pope. I don’t feel this makes me not Catholic; I continue to go to church every Sunday, live my life (mostly) in accordance with Catholic teaching, etc. Similarly, I can’t name my senator or representative and barely know what Biden stands for, but I think I’m reasonably American.
I went on one of those trips as a middle schooler (to Mexico, not Africa). I don’t know that it helped my epistemics much, but I did get like, a visceral experience of what the life of someone in a third-world country would be like, that I wouldn’t have gotten otherwise and no amount of research literature reading would replicate.
I don’t literally think that every EA should book plane tickets to Africa, or break into a factory farm, or whatnot. (though: I would love to see some folks try this!) I do think there’s an overreliance on consuming research and data, and an underreliance on just doing things and having reality give you feedback.
That makes sense, thanks. I would say that compared to Catholicism, in EA you have much less reason to care about the movement leaders, as them having authority to rule over EA is not part of its beliefs.
For what it’s worth, I’ve talked with several people I’ve met through EA who regularly “break” into factory farms[1] or who regularly work in developing countries.
It’s definitely possible that it should be more, but I would claim that the percentage of people doing this is much higher than baseline among people who know about EA, and I think it can have downsides for the reasons mentioned in ‘Against Empathy.’
They claim that they enter them without any breaking, I can’t verify that claim, but I can verify that they have videos of themselves inside factory farms.
Counterargument, I think there’s enough different streams of EA that this would not be especially helpful.
There exists a president of GiveWell. There exists a president of 80,000 Hours. There exists a president of Open Philanthropy. Those three organizations seem pretty close to each other, and there are a lot of others further afield. I think there would be a lot of debating, some of it acrimonious, about who counted as ‘in the movement’ enough to vote on a president of EA, and it would be easy to wind up with a president that nobody with a big mailing list or a pile of money actually had to listen to.