(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil’s resources to improve the governance of AI, with a focus on avoiding catastrophic outcomes. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher.
Not to be confused with the user formerly known as trevor1.
Agreed—I think people should apply a pretty strong penalty when evaluating a potential donation that creates or worsens these dynamics. There are still some donation opportunities that have the “major donors won’t [fully] fund it” and “I’m advantaged to evaluate it as an AIS professional” features without the “I’m personal friends with the recipient” weirdness, though—e.g., alignment approaches or policy research/advocacy directions you find promising that Open Phil isn’t currently funding and that would be executed thousands of miles away.