1. GiveWell focuses on developing-world interventions, not AI alignment or any of Open Phil's other focus areas, which means it isn't responsible for anything to do with OpenAI.
2. It's unclear from what you wrote what role, if any, Open Phil plays in GiveWell's annual recommendations to Good Ventures. If it were clear that Open Phil acts as an intermediary there somehow, then treating all three projects under one umbrella as a single project with no independence between them might make sense. You didn't establish that, so it doesn't make sense.
3. Good Ventures signs off on all the decisions GiveWell and Open Phil make, and so it should be held responsible for the decisions of both. Yet you know there are people who work for GiveWell and Open Phil who make decisions that are complete before Good Ventures signs off on them. Or I assume you do, since you worked for GiveWell. If you somehow know it's all top-down in both directions, that Good Ventures tells Open Phil and GiveWell each what it wants from them and they just deliver the package, then say so.
Yes, they share the same physical office. Yes, Good Ventures pays for it. Shall I point to a mistake made by one of MIRI, CFAR, or LW, but not more than one, and then link that mistake, whenever it was made and however tenuously, to all of those organizations?
Should I do the same to any two or more other AI alignment/x-risk organizations you favour that share offices or budgets in some way?
Shall I point out to the x-risk reduction, long-term world improvement, EA, and rationality communities that Michael Arc/Vassar and some of his friends formed a "Vassar crowd", a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, and Alyssa Vance, among others? Should I hold you or Michael Arc individually responsible for the things you've each done since then that have given you mixed reputations, or should I castigate all of Michael's friends in the bunch too, along with as much of the rationality community as I feel like? After all, you're all friends, and you decided to make the effort together, even though each of you made your own individual contributions.
I won't do those things. Yet that is what it would look like for me to behave as you are behaving. I'll ask you one more question about what you might do: when can I expect you to publicly condemn FHI, on the grounds that it's justified because FHI is right next door to CEA, yet Nick Bostrom lacks the decency to go over there and demand that CEA stop posting misleading stats, lest FHI break with the EA community forevermore?