I haven’t grokked the notion of “an addiction to steam” yet, so I’m not sure whether I agree with that account, but I have a feeling that when you write “I’d encourage y’all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned” you are papering over real values differences.
Tons of EAs will tell you that honesty and integrity and truth-seeking are of course ‘important’, but if you observe their behavior, they’ll trade those off pretty harshly against PR concerns or QALYs bought or plan-changes. I think there’s a difference in culture and values between (on one hand) people around rationalist circles, who worry a lot about how to give honest answers to things like ‘How are you doing today?’, who hold themselves to the standard of intent to inform rather than simply whether they out-and-out lied, and who will show up and have long arguments with people who have moral critiques of them, and (on the other hand) most of the people in EA culture and positions of power, who don’t do this, and who can therefore much more easily deceive and take advantage of people by funneling them into career paths that basically boil down to ‘devoting yourself to whatever whoever is powerful in EA thinks is a maybe-good idea this month’. These are paths that people wouldn’t go down if they were candidly told up front what was going on.
I think it’s fair to say that many/most EAs (including those involved with student groups) don’t care about integrity and truth-seeking very much, or at least not enough to bend away from the path of reward and momentum (as set by EA ideology and EA leaders & grantmakers) when that path is going wrong, and I think this is a key reason why EA student groups are able to function like Ponzi schemes. ‘Well-intentioned’ does not get you ‘has good values’, and it is not a moral defense of Ponzi schemes to argue that everyone involved was “kind and well-intentioned”.
I would guess that the feedback loop from EA college recruiting is super long and weakly aligned. Those in charge of setting recruiting strategy (e.g. the CEA Groups team, and then university organizers) don’t see the downstream impacts of their choices, unlike in a startup, where you work directly with your hires and quickly see whether your choices were good or bad.
I agree it is hard to get feedback, but this doesn’t mean one cannot have good standards. A ton of my work involves maintaining boundaries where I’m not quite sure what the concrete outputs will look like. I kind of think this is one of the main things people are talking about when they talk about values: what heuristics do you operate by most of the time, when you’re mostly not going to get feedback?
Mm I basically agree that:
there are real value differences between EA folks and rationalists
good intentions do not substitute for good outcomes
However:
I don’t think differences in values explain much of the differences in results—sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
I’m pushing back against Tsvi’s claims that “some people don’t care” or “EA recruiters would consciously choose 2 zombies over 1 agent”—I think ascribing bad intentions to individuals ends up pretty mindkilly
Basically insofar as EA is screwed up, it’s mostly caused by bad systems, not bad people, as far as I can tell.
Basically insofar as EA is screwed up, it’s mostly caused by bad systems, not bad people, as far as I can tell.
Insofar as you’re thinking I said bad people, please don’t let yourself make that mistake, I said bad values.
There are occasional bad people like SBF but that’s not what I’m talking about here. I’m talking about a lot of perfectly kind people who don’t hold the values of integrity and truth-seeking as part of who they are, and who couldn’t give a good account for why many rationalists value those things so much (and might well call rationalists weird and autistic if you asked them to try).
I don’t think differences in values explain much of the differences in results—sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
This is a crux. I acknowledge I probably share more values with a random EA than with a random university student, but I don’t think that’s actually saying much, and I believe there’s a lot of massively impactful difference in culture and values.
I’m pushing back against Tsvi’s claims that “some people don’t care” or “EA recruiters would consciously choose 2 zombies over 1 agent”—I think ascribing bad intentions to individuals ends up pretty mindkilly
I think EA recruiters have repeatedly made decisions like choosing 2 zombies over 1 agent, and had I or Tsvi looked at the same set of options and information, we would have made a different decision, because we’ve learned to care about candor and wholesomeness and respecting other people’s sense-making and a bunch of other things. I don’t think this makes them bad people. Good values take a lot of work by a lot of people to encapsulate and teach; a good person should not be expected to re-derive an entire culture for themselves, and I think most of the world does not teach people all of the values I care about by the age of 18, like lightness and argument and empiricism and integrity and courage and more. The recruiters don’t care about a number of the values that I hold, and as a result they will make decisions counter to those values.
This is a crux. I acknowledge I probably share more values with a random EA than with a random university student, but I don’t think that’s actually saying much, and I believe there’s a lot of massively impactful difference in culture and values.
My best guess is that something like a third of rationalists are also EAs, at least going by identification. (I’m being lazy for the moment and not cross-checking “Identifies as Rationalist” against “Identifies as EA”, but I can if you want me to, and I’m like 85% sure the less-lazy check will bear that out.) My educated but irresponsible guess is that something like 10% of EAs are rationalists. Last time I did a straw poll at an ACX meetup, more than half the people attending were also EAs. Whatever the differences are, they’re not stopping a substantial overlap in membership, and I don’t think that’s just at the level of random members but includes a lot of the notable members.
I’d be pretty open to a definition of ‘rationalist’ that was about more than self-identification, but to my knowledge we don’t have a workable definition better than that. It’s plausible to me that the differences matter as much as you’re leaning on them, but I think it’s more likely the two groups are aligned for most purposes.
Thanks for the data! I agree there’s a fair bit of overlap in clusters of people.
Two points:
I am talking about the cultural values more than simply the individuals. I think a person’s environment really brings very different things out of them. The same person working at Amazon, in DC politics, or at a global-health non-profit will be invited to live out different values and build quite different identities for themselves. The same person can also behave quite differently in person and on Twitter. I think LessWrong has a distinct culture from the EA Forum, and I think EAG has a distinct culture from ACX meetups.
Not every person in a scene strongly embodies the ideals and aspirations of that scene. There are many people who come to rationalist meetups with whom I have yet to get on the same page about lots of values; e.g. I still somewhat regularly have to argue against various justifications for self-deception, even with folks who have been around for many years. The ideals of EA and LW are different.
So even though the two scenes have overlap in people, I still think the scenes live out and aspire to different values and different cultures, and this explains a lot of difference in outcomes.
Insofar as you’re thinking I said bad people, please don’t let yourself make that mistake, I said bad values.
I appreciate you drawing the distinction! The bit about “bad people” was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.
There’s a lot of massively impactful difference in culture and values
Mm, I think if the question is “what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements” I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.
(As I write this, I realize that maybe even caring about adherents/reputation/influence/achievement in the first place is an impact-based EA frame, and the thing that Ben cares about is more like “what accounts for the differences in their philosophies, or the gestalt of what it feels like to be in each movement”; I feel like I’m lowkey failing an ITT here...)
Mm, I think if the question is “what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements” I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.
I can think about that question if it seems relevant, but the initial claim of Elizabeth’s was “I believe there are ways to recruit college students responsibly. I don’t believe the way EA is doing it really has a chance to be responsible”. So I was trying to give an account of the root cause there.
Also (and I recognize that I’m saying something relatively trivial here): the root cause of a problem in a system can of course be any seemingly minor part of it. Just because I’m saying one part of the system is causing problems (the culture’s values) doesn’t mean I’m saying that part is primarily responsible for the system’s overall output. The cause of a software company’s current problems might be the slow speed at which PR reviews are happening, but that shouldn’t be mistaken for the claim that the credit for the company’s success goes primarily to its ability to do PR reviews fast.
So to repeat, I’m saying that IMO the root cause of irresponsible movement growth and Ponzi-scheme-like recruitment strategies was a lack of very important values like dialogue and candor and respecting other people’s sense-making and courage and so on, rather than an explanation more like ‘those doing recruitment had poor feedback loops, so they had a hard time knowing what tradeoffs to make’ (my paraphrase of your suggestion).
I would have to think harder about which specific values I believe caused this particular issue, but that’s my broad point.
Ben’s responses largely cover what I would have wanted to say. But on a meta note: I wrote specifically
I think a hypothesis that does have to be kept in mind is that some people don’t care.
I do also think the hypothesis is true (and it’s reasonable for this thread to discuss that claim, of course).
But the reason I said it that way, is that it’s a relatively hard hypothesis to evaluate. You’d probably have to have several long conversations with several different people, in which you successfully listen intensely to who they are / what they’re thinking / how they’re processing what you say. Probably only then could you even have a chance at reasonably concluding something like “they actually don’t care about X”, as distinct from “they know something that implies X isn’t so important here” or “they just don’t get that I’m talking about X” or “they do care about X but I wasn’t hearing how” or “they’re defensive in this moment, but will update later” or “they just hadn’t heard why X is important (but would be open to learning that)”, etc.
I agree that it’s a potentially mindkilly hypothesis. And because it’s hard to evaluate, the implicature of assertions about it is awkward—I wanted to acknowledge that it would be difficult to find a consensus belief state, and I wanted to avoid implying that the assertion is something we ought to be able to come to consensus about right now. And, more simply, it would take substantial work to explain the evidence for the hypothesis being true (in large part because I’d have to sort out my thoughts). For these reasons, my implied request is less like “let’s evaluate this hypothesis right now”, and more like “would you please file this hypothesis away in your head, and then if you’re in a long conversation, on the relevant topic with someone in the relevant category, maybe try holding up the hypothesis next to your observations and seeing if it explains things or not”.
In other words, it’s a request for more data and a request for someone to think through the hypothesis more. It’s far from perfectly neutral—if someone follows that request, they are spending their own computational resources and thereby extending some credit to me and/or to the hypothesis.
The problem is that even small differences in values can produce massive differences in outcomes, when the difference is in caring about truth while the other values are held similar. As Elizabeth wrote, “Truthseeking is the ground in which other principles grow.”