Thanks for writing this post, this is a worry that I have as well.
I also believe that more could be done to build the global rationality community. I mean, I’m certainly keen to see the progress with LW2.0 and the new community section, but if we really want rationality to grow as a movement, we at least need some kind of volunteer organisation responsible for bringing this about. I think the community would be much more likely to grow if there were a group doing things like advising newly started groups, producing materials that groups could use, or creating better material for beginners.
“While this worst-case scenario could apply to any large-scale rationalist project, with regards to AI alignment, if the locus of control for the field falls out of the hands of the rationality community, someone else might notice and decide to pick up that slack. This could be a sufficiently bad outcome rationalists everywhere should pay more attention to decreasing the chances of it happening.”—what would be wrong with this?
Creating some kind of volunteer organization like that is an end-goal I have in mind, and I’ve started talking to other people about the project. I’ve volunteered for, and been friends for a long time with, a local EA organization, Rethink Charity, which runs the Local Effective Altruism Network (LEAN). LEAN does exactly that for EA: advising newly started groups, producing materials the groups can use, and innovating ways to help groups get organized. So as part of a volunteer organization I could get advice from them on how to optimize the same approach for the rationality community.
“What would be wrong with this?”
Conceivably, a community other than the rationality community steering the trajectory of AI alignment as a field might increase existential risk directly if it were abysmal at the job, or counterfactually increase x-risk relative to what the rationality community would have achieved. By ‘rationality community’, I also mean organizations that were started from within the rationality community or have significantly benefited from it, such as CFAR, MIRI, BERI and FLI. So my statement is based on two assumptions:
1. AI alignment is a crucial component of x-risk reduction, which is in turn a worthwhile endeavour.
2. The rationality community, including the listed organizations, forms a coalition with the best track record of advancing AI alignment with epistemic hygiene relative to any other, so on priors, the rationality community losing relative influence over AI alignment to other agencies would result in x-risk being reduced less than it otherwise would be.
If someone doesn’t share those assumptions, my statement doesn’t apply.