Seeing my statements reflected back is helpful, thank you.
I think Effective Altruism is upper case and has been for a long time, in part because it aggressively recruited people who wanted to follow[1]. In my ideal world it both has better leadership and needs less of it, because members are less dependent.
I think rationality does a decent job here. There are strong leaders of individual fiefdoms, and networks of respect and trust, but it’s much more federated.
Which is noble and should be respected; the world needs more followers than leaders. But if you actively recruit them, you need to take responsibility for providing leadership.
[I have only read Elizabeth’s comment that I’m responding to here (so far); apologies if it would have been less confusing for me to read the entire thread before responding.]
I have always capitalized both EA and Rationality, and have never thought about it before. The first justification for capitalizing R that comes to mind is all the intentionality/intelligence that I perceive was invested into the proto-“AI Safety” community under EY’s (and others’) leadership. Isn’t it fair to describe the “Rationalist/Rationality” community as the branch of AI Safety/X-risk that is downstream of MIRI, LW, the Sequences, HPMOR, etc?