Why do you care? Can’t this be cached out into what you can actually expect of the org?
If I come to them with an x-risk project to support via their university help, will they seriously consider supporting it? Probably yes.
If I come to them with a global poverty project to support via their university help, will they seriously consider supporting it? Probably no.
Do their hires primarily come from people who have followed places like the EA Forum and attended EA Global conferences in past years and worked on other non-profit projects with people who’ve done the same? I think so.
When they used to run their grants programme, did they fund non-x-risk things? Largely not. I mean, of the less obvious ones, they funded LessWrong, CFAR, 80k, REACH, and Leverage, which are each indirect to varying degrees, but I expect they funded them all for the effects they thought those projects would have on x-risk.
Summary:
I’m working on a global comparative analysis of funding/granting orgs not only in EA, but also in those movements/communities that overlap with EA, including x-risk.
Many in EA may evaluate/assess the relative effectiveness of the orgs in question according to the standard normative framework(s) of EA, as opposed to the lens(es)/framework(s) through which such orgs evaluate/assess themselves, or would prefer to be evaluated/assessed by other principals and agencies.
I expect that the EA community will want to know to what extent various orgs are amenable to changing their practices or self-evaluation/self-assessment to fit the standard normative framework(s) of EA, even where those frameworks are more reductive than the ones other communities, such as the rationality community, employ to evaluate the effectiveness of funding allocation in x-risk.
Ergo, it may be in the self-interest of any funding/granting org in the x-risk space to precisely clarify their relationship to the EA community/movement, perhaps as operationalized through the heuristic framework of “(self-)identification as an EA-aligned organization.” I assume that includes BERI.
I care because I’m working on a comparative analysis of funds and grants among EA-aligned organizations.
For the sake of completeness, this will extend to funding and grantmaking organizations that are part of other movements that overlap with, or are constituent movements of, effective altruism. This includes existential risk reduction.
Most of this series of analyses will be a review, as opposed to an evaluation or assessment. I believe it’s better to leave most of those normative judgements out of the analysis and leave them to the community. I’m not confident it’s feasible to produce such a comparative analysis competently without at least a minimum of normative comparison. Yet, more importantly, the information could, and likely would, be used by various communities/movements with a stake in x-risk reduction (e.g., EA, rationality, Long-Term World Improvement, transhumanism, etc.) to make normative judgements far beyond what is contained in my own analysis.
I will include in a discussion section a variety of standards by which each of those communities might evaluate or assess BERI in relation to other funds and grants focused on x-risk reduction, most of which are run by EA-aligned organizations that form the structural core not only of EA but also of x-risk reduction. Hundreds, if not thousands, of individuals, including donors, vocal supporters, and managers of the funds and grants run by EA-aligned organizations, will be inclined to evaluate/assess each of these funds/grants through an EA lens. Some of the funding/granting orgs in x-risk reduction may diverge in opinion about what is best in the practice and evaluation/assessment of funding allocation in x-risk reduction.
Out of respect for those funding/granting orgs in x-risk reduction that do diverge in opinion from EA’s standards, I would like to know that, so I can include those details in the discussion section. This is important because it will inform how the EA community engages with those orgs after my comparative analysis is complete. One shouldn’t realistically expect many in the EA community to evaluate/assess such orgs with anything but EA’s own normative framework, even where, e.g., the norms of the rationality community diverge from those of EA. My experience is they won’t have the patience to read many blog posts about how the rationality community, as separate from EA, practices and evaluates/assesses x-risk reduction efforts differently than EA does, and why the rationality community’s approaches are potentially superior. I expect many in EA will prefer a framework that is, however unfortunately, more reductive than applying conceptual tools like factorization and ‘caching out’ to parse more nuanced frameworks for evaluating x-risk reduction efforts.
So, it’s less about what I, Evan Gaensbauer, care about and more about what hundreds, if not thousands, of others in EA and beyond care about when evaluating/assessing funding/granting orgs in x-risk reduction. Things will go more smoothly, both for those orgs and for x-risk reducers in the EA community, if it’s known whether the orgs in question fit the heuristic framework of “identifying (or not) as an EA-aligned organization.” Ergo, it may be in the interest of those orgs to clarify their relationship to EA as a movement/community before I publish this series of comparative analyses, even if there are trade-offs, real or perceived. I imagine that includes BERI.
*nods* I think I understand your motivation better. I’ll leave a different top-level answer.
Technical Aside: Upvoted for being a thoughtful albeit challenging response that impelled me to clarify why I’m asking this as part of a framework for a broader project of analysis I’m currently pursuing.
Pardon me for being so challenging; you know I’m always happy to talk with you and answer your questions, Evan :) I’m just a bit irritated, and let that out here.
I do think that “identity” and “brand” mustn’t become decoupled from what actually gets done—if you want to talk meaningfully about ‘EA’ and what’s true about it, it shouldn’t all be level 3/4 simulacra.
Identity without substance or action is meaningless, and sort of not something you get to decide for yourself. If you decide to identify as ‘an EA’ and this causes no changes in your career or your donations, has the average EA donation suddenly gone down? Has EA actually grown? It’s good to be clear on the object level and whether the proxy actually measures anything, and I’m not sure I should call that person an ‘EA’ despite their speech acts to the contrary.
(Will go and read your longer comment now.)