Creating some kind of volunteer organization like that is an end goal I have in mind, and I’ve started talking to other people about this project. I’ve long volunteered with, and been friends with people at, a local EA organization, Rethink Charity, which runs the Local Effective Altruism Network (LEAN). LEAN does exactly that for EA: advising newly started groups, producing materials the groups can use, and innovating ways to help groups get organized. So, as part of a volunteer organization, I could get their advice on how to optimize it for the rationality community.
What would be wrong with this?
Conceivably, a community other than the rationality community steering the trajectory of AI alignment as a field might increase existential risk directly, if it were abysmal at the task, or counterfactually increase x-risk relative to what the rationality community would achieve. By ‘rationality community’, I also mean organizations that were started from within the rationality community or have significantly benefited from it, such as CFAR, MIRI, BERI and FLI. So my statement rests on two assumptions:
1. AI alignment is a crucial component of x-risk reduction, which is in turn a worthwhile endeavour.
2. The rationality community, including the listed organizations, forms the coalition with the best track record of advancing AI alignment with epistemic hygiene relative to any other, and so, on priors, a loss of the rationality community’s relative influence on AI alignment to other agencies would reduce x-risk less than maintaining that influence would.
If someone doesn’t share those assumptions, my statement doesn’t apply.