This is worth thinking about in the future, thanks. I think right now, it’s good to take advantage of MIRI’s matched giving opportunities when they arise, and I’d expect either organization to announce if they were under a particular crunch or aiming to hit a particular target.
.impact is a volunteer task force of effective altruists who take on projects not tied to any single organization. In particular, .impact works on implementing open-source software resources useful to effective altruists. Well, that's what it's trying to specialize in; decentralized coordination of remote volunteers is very difficult.
Anyway, on the effective altruism forum, I was involved in a discussion about building an interactive visual map that tracks the status of projects and funding for effective altruist organizations. Anybody trying to reduce existential risk would fall under effective altruism, so ostensibly they'd be included on such a map too. This would solve most of the problem I posed above.
I’ll update Less Wrong in the future if I get wind of any progress on such a project. Anyone: send me a private message if you want more information.