Have you considered CEA? Not a perfect fit, but they’re remote-first, and I personally think they help with alignment research indirectly by building the EA community and by improving lesswrong.com as well (the EA Forum uses the same codebase). It’s really important, I think, for these places to (1) be inviting, (2) promote good, nuanced (non-toxic) discussions, and (3) connect people to relevant orgs and people, including AI Safety orgs.
Again, I’m not sure this is what you’re looking for, but it resonates with me personally.