I can’t answer your question properly, in part because I am not BERI. I’ll just share some of my thoughts that seem relevant to this question:
I expect everything BERI supports and funds to always be justified in terms of x-risk. It will try to support all the parts of EA that are focused on x-risk, and not the rest. For example, their grant to EA Sweden is described as “Effective Altruism Sweden will support Markus Stoor’s project to coordinate two follow-up lunch-to-lunch meetings in Sweden for x-risk-focused individuals.”
I think it would be correct to classify it entirely as an x-risk org and not as an EA org. I don’t think it does any EA-style analysis of what it should work on that is not captured under x-risk analysis, and I think that people working on things like, say, fighting factory farming should never expect support from BERI (via the direct work BERI does).
I think it will have indirect effects on other EA work. For example, BERI supports FHI, and this gives FHI a lot more freedom to take actions in the world; FHI in turn provides some support to other areas of EA (e.g. Owen Cotton-Barratt advises the CEA, and that probably trades off against his management time on the RSP programme). I expect BERI does not count this in its calculations on whether to help out with some work, but I’m not confident.
I would call it an x-risk org and not an EA-aligned org in its work, though I expect its staff all care about EA more broadly.
I think it would be correct to classify it entirely as an x-risk org and not as an EA org. I don’t think it does any EA-style analysis of what it should work on that is not captured under x-risk analysis, and I think that people working on things like, say, fighting factory farming should never expect support from BERI (via the direct work BERI does).
I think it’s worth noting that an org can be an EA org even if it focuses exclusively on one cause area, such as x-risk reduction. What seems to matter is (1) that such a focus was chosen because interventions in that area are believed to be the most impactful, and (2) that this belief was reached from (a) welfarist premises and (b) rigorous reasoning of the sort one generally associates with EA.
What seems to matter is (1) that such a focus was chosen because interventions in that area are believed to be the most impactful, and (2) that this belief was reached from (a) welfarist premises and (b) rigorous reasoning of the sort one generally associates with EA.
This seems like a thin concept of EA. I know there are organizations that choose to pursue interventions based on their being in an area they believe to be (among) the most impactful, and based on welfarist premises and rigorous reasoning. Yet they don’t identify as EA organizations. That would be because they disagree with the consensus in EA about what constitutes ‘the most impactful,’ ‘the greatest welfare,’ and/or ‘rigorous reasoning.’ So, the consensus position(s) in EA on how to interpret all those notions could be thought of as the thick concept of EA.
Also, this seems to be a prescriptive definition of “EA organizations,” rather than a descriptive one. That is, all the features you mentioned seem necessary to define EA-aligned organizations as they exist, but I’m not convinced they’re sufficient to capture all the characteristics of the typical EA-aligned organization. If they were sufficient, any NPO that could identify as an EA-aligned organization would do so. Yet there are some that don’t. An example of a typical feature of EA-aligned NPOs that is superficial but describes them in practice is that they receive most of their funding from sources also aligned with EA (e.g., the Open Philanthropy Project, the EA Funds, EA-aligned donors, etc.).
That would be because they disagree with the consensus in EA about what constitutes ‘the most impactful,’ ‘the greatest welfare,’ and/or ‘rigorous reasoning.’
I said that the belief must be reached from welfarist premises and rigorous reasoning, not from what the organization believes are welfarist premises and rigorous reasoning.
If they were sufficient, any NPO that could identify as an EA-aligned organization would do so.
I’m not sure what you mean by this. And it seems clear to me that lots of nonprofit orgs would not qualify as EA orgs given my proposed criterion (note the clarification above).
Fair.