I am guilty of being a zero-to-one, rather than one-to-many, type of person. It seems far easier and more interesting to me to create new forms of progress of any sort than to convince people to adopt better ideas.
I guess the project of convincing people seems hard? Like, if I come up with something awesome that’s new, it seems easier to get it into people’s hands than to take an existing thing people have already rejected and tell them, “hey, this is actually cool, let’s look again.”
All that said, I do find this idea-space intriguing, partly thanks to this post: it makes me want to think of ways of doing more one-to-many work. I’ve recently been drawn to living in DC, and I think the DC effective altruism folks are much more on the one-to-many side of things.
I don’t blame anyone for being more personally interested in advancing the moral frontier than in distributing moral best practices. And we need both types of work. I’m just curious why the latter doesn’t figure more prominently in EA cause prioritization.
It may be the same kind of bias that disproportionately incentivizes publishing shiny new research papers, proposing new hypotheses, etc., over trying to replicate what has already been published.