Effective altruism exists at the intersection of other social and intellectual movements and communities. Some, but not all, of the organizations in these communities that work on EA focus areas, such as existential risk reduction, identify as part of effective altruism as a movement. Such organizations are typically labeled “EA-aligned organizations.”
I can’t answer your question properly, in part because I am not BERI. I’ll just share some of my thoughts that seem relevant to this question:
I expect everything BERI supports and funds to always be justified in terms of x-risk. It will try to support all the parts of EA that are focused on x-risk, and not the rest. For example, their grant to EA Sweden is described as “Effective Altruism Sweden will support Markus Stoor’s project to coordinate two follow-up lunch-to-lunch meetings in Sweden for x-risk-focused individuals.”
I think it would be correct to classify it entirely as an x-risk org and not as an EA org. I don’t think it does any EA-style analysis of what it should work on that isn’t captured under x-risk analysis, and I think that people working on things like, say, fighting factory farming should never expect support from BERI (via the direct work BERI does).
I think it will have indirect effects on other EA work. For example, BERI supports FHI, and this gives FHI a lot more freedom to take actions in the world, and FHI provides some support to other areas of EA (e.g. Owen Cotton-Barratt advises CEA, and that probably trades off against his management time on the RSP programme). I expect BERI does not count this in their calculations on whether to help out with some work, but I’m not confident.
I would call it an x-risk org and not an EA-aligned org in its work, though I expect its staff all care about EA more broadly.
I think it’s worth noting that an org can be an EA org even if it focuses exclusively on one cause area, such as x-risk reduction. What seems to matter is (1) that such a focus was chosen because interventions in that area are believed to be the most impactful, and (2) that this belief was reached from (a) welfarist premises and (b) rigorous reasoning of the sort one generally associates with EA.
This seems like a thin concept of EA. I know there are organizations that choose to pursue interventions because they’re in an area the organization believes to be (among) the most impactful, and that do so based on welfarist premises and rigorous reasoning. Yet they don’t identify as EA organizations. That would be because they disagree with the consensus in EA about what constitutes ‘the most impactful,’ ‘the greatest welfare,’ and/or ‘rigorous reasoning.’ So, the consensus position(s) in EA on how to interpret all those notions could be thought of as the thick concept of EA.
Also, this definition seems to be a prescriptive definition of “EA organizations,” as opposed to a descriptive one. That is, all the features you mentioned seem necessary to define EA-aligned organizations as they exist, but I’m not convinced they’re sufficient to capture all the characteristics of the typical EA-aligned organization. If they were sufficient, any NPO that could identify as an EA-aligned organization would do so. Yet there are some that don’t. An example of a typical feature of EA-aligned NPOs that is superficial but describes them in practice is that they receive most of their funding from sources also aligned with EA (e.g., the Open Philanthropy Project, the EA Funds, EA-aligned donors, etc.).
I said that the belief must be reached from welfarist premises and rigorous reasoning, not from what the organization believes are welfarist premises and rigorous reasoning.
I’m not sure what you mean by this. And it seems clear to me that lots of nonprofit orgs would not classify as EA orgs given my proposed criterion (note the clarification above).
Fair.
Why do you care? Can’t this be cached out into what you can actually expect of the org?
If I come to them with an x-risk project to support via their university help, will they seriously consider supporting it? Probably yes.
If I come to them with a global poverty project to support via their university help, will they seriously consider supporting it? Probably no.
Do their hires primarily come from people who have followed places like the EA Forum and attended EA Global conferences in past years and worked on other non-profit projects with people who’ve done the same? I think so.
When they used to run their grants programme, did they fund non-x-risk things? Largely not. I mean, of the less obvious ones, they funded LessWrong, and CFAR, and 80k, and REACH, and Leverage, which are each varying levels of indirect, but I expect they funded them all based on the effects they thought those orgs would have on x-risk.
Summary:
I’m working on a global comparative analysis of funding/granting orgs not only in EA, but also in those movements/communities that overlap with EA, including x-risk.
Many in EA may evaluate/assess the relative effectiveness of the orgs in question according to the standard normative framework(s) of EA, as opposed to the lens(es)/framework(s) through which such orgs evaluate/assess themselves, or would prefer to be evaluated/assessed by other principals and agencies.
I expect that the EA community will want to know to what extent various orgs are amenable to changing their practices or their self-evaluation/self-assessment according to the standard normative framework(s) of EA, however more reductive those may be than the frameworks other communities, such as the rationality community, employ for evaluating the effectiveness of funding allocation in x-risk.
Ergo, it may be in the self-interest of any funding/granting org in the x-risk space to clarify precisely its relationship to the EA community/movement, perhaps as operationalized through the heuristic framework of “(self-)identification as an EA-aligned organization.” I assume that includes BERI.
I care because I’m working on a comparative analysis of funds and grants among EA-aligned organizations.
For the sake of completeness, this will extend to funding and grantmaking organizations that are part of other movements that overlap with, or are constituent movements of, effective altruism. This includes existential risk reduction.
Most of this series of analyses will be a review, as opposed to an evaluation or assessment. I believe it’s better to leave most of those normative judgements out of the analysis and to leave them to the community. I’m not confident it’s feasible to produce such a comparative analysis competently without at least a minimum of normative comparison. Yet, more importantly, the information could, and likely would, be used by various communities/movements with a stake in x-risk reduction (e.g., EA, rationality, Long-Term World Improvement, transhumanism, etc.) to make those normative judgements far beyond what is contained in my own analysis.
I will include in a discussion section a variety of standards by which each of those communities might evaluate or assess BERI in relation to other funds and grants focused on x-risk reduction, most of which are run by EA-aligned organizations that form the structural core not only of EA but also of x-risk reduction. Hundreds, if not thousands, of individuals, including donors, vocal supporters, and managers of those funds and grants run by EA-aligned organizations, will be inclined to evaluate/assess each of these funds/grants focused on x-risk reduction through an EA lens. Some of these funding/granting orgs in x-risk reduction may diverge in opinion about what is best in the practice and evaluation/assessment of funding allocation in x-risk reduction.
Out of respect for those funding/granting orgs in x-risk reduction that do diverge in opinion from those standards in EA, I would like to know that, so as to include those details in the discussion section. This is important because it will inform how the EA community will engage with those orgs after my comparative analysis is complete. One shouldn’t realistically expect many in the EA community to evaluate/assess such orgs by anything other than EA’s own normative framework, e.g., where the norms of the rationality community diverge from those of EA. My experience is they won’t have the patience to read many blog posts about how the rationality community, as separate from EA, practices and evaluates/assesses x-risk reduction efforts differently than EA does, and why the rationality community’s approaches are potentially better. I expect many in EA will prefer a framework that is, however unfortunately, more reductive than applying conceptual tools like factorization and ‘caching out’ to parse more nuanced frameworks for evaluating x-risk reduction efforts.
So, it’s less about what I, Evan Gaensbauer, care about and more about what hundreds, if not thousands, of others in EA and beyond care about in terms of evaluating/assessing funding/granting orgs in x-risk reduction. That will go more smoothly, both for the funding/granting orgs in question and for x-risk reducers in the EA community, if it’s known whether those orgs fit into the heuristic framework of “identifying (or not) as an EA-aligned organization.” Ergo, it may be in the interest of those funding/granting orgs to clarify their relationship to EA as a movement/community, even if there are trade-offs, real or perceived, before I publish this series of comparative analyses. I imagine that includes BERI.
nods I think I understand your motivation better. I’ll leave a different top-level answer.
Technical Aside: Upvoted for being a thoughtful albeit challenging response that impelled me to clarify why I’m asking this as part of a framework for a broader project of analysis I’m currently pursuing.
Pardon me for being so challenging; you know I’m always happy to talk with you and answer your questions, Evan :) I’m just a bit irritated, and let that out here.
I do think that “identity” and “brand” mustn’t become decoupled from what actually gets done: if you want to talk meaningfully about ‘EA’ and what’s true about it, it shouldn’t all be level 3/4 simulacra.
Identity without substance or action is meaningless, and it’s sort of not something you get to decide for yourself. If you decide to identify as ‘an EA’ but this causes no changes in your career or your donations, has the average EA donation suddenly gone down? Has EA actually grown? It’s good to be clear on the object level and on whether the proxy actually measures anything, and I’m not sure I should call that person ‘an EA’ despite their speech acts to the contrary.
(Will go and read your longer comment now.)