Yet you treated it as though GiveWell, Open Phil, and Good Ventures all play the same kind of role in both cases, when not all three organizations are even involved in both cases.
They share a physical office! Good Ventures pays for it! I’m not going to bother addressing comments this long in depth when they’re full of basic errors like this.
For the record, this will no longer be true starting in, I think, about a month, since GiveWell is moving to Oakland and Open Phil is staying in SF.

Otherwise, here is what I was trying to say:
1. GiveWell focuses on developing-world interventions, and not AI alignment or any of Open Phil's other focus areas, which means it isn't responsible for anything to do with OpenAI.
2. It’s unclear from what you write what role, if any, Open Phil plays in the relationship between GiveWell and Good Ventures, i.e., in GiveWell’s annual recommendations to Good Ventures. If it were clear that Open Phil is somehow an intermediary in that relationship, then treating all three organizations as one project under one umbrella, with no independence between any of them, might make sense. You didn’t establish that, so it doesn’t make sense.
3. Good Ventures signs off on all the decisions GiveWell and Open Phil make, and it should be held responsible for the decisions of both. Yet you know that there are people who work for GiveWell and Open Phil who make decisions that are completed before Good Ventures signs off on them. Or I assume you know that, since you worked for GiveWell. If you somehow know it’s all top-down in both directions, that Good Ventures tells Open Phil and GiveWell each what it wants from them, and Open Phil and GiveWell just deliver the package, then say so.
Yes, they do share the same physical office. Yes, Good Ventures pays for it. Shall I point to mistakes made by one of MIRI, CFAR, or LW, but not more than one, and then link that mistake, whenever it was made and however tenuously, to all of those organizations?
Should I do the same to any two or more other AI alignment/x-risk organizations you favour that share offices or budgets in some way?
Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a “Vassar crowd,” a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, and Alyssa Vance, among others? Should I hold only you or Michael Arc individually responsible for the things you’ve each done since then that have given you mixed reputations, or should I castigate all of you and Michael’s friends in the bunch too, along with as much of the rationality community as I feel like? After all, you’re all friends, and you decided to make the effort together, even though you each made your own individual contributions.
I won’t do those things. Yet doing them is what it would take for me to behave as you are behaving. I’ll ask you one more question about what you might do: when can I expect you to publicly condemn FHI, on the grounds that doing so is justified because FHI is right next door to CEA, yet Nick Bostrom lacks the decency to go over there and demand that CEA stop posting misleading stats, lest FHI break with the EA community forevermore?
I’m not going to bother addressing comments this long in depth when they’re full of basic errors like this.
While there is what you see as at least one error in my post, there are many items I see as errors in your post that I will bring to everyone’s attention. My post will be revised, edited, and polished so that it no longer contains the errors you see in it, or at least so that what I am and am not saying is no longer ambiguous. It will be a top-level article on both the EA Forum and LW. A large part of it is going to be that you are, at best, using extremely sloppy arguments, and, at worst, making blatant attempts to use misleading info to convince others to do what you want, just as you accuse Good Ventures, Open Phil, and GiveWell of doing. One theme will be that you’re still in the x-risk space, employed in AI alignment, and willing to do this to your former employers, who are also involved in the x-risk/AI alignment space. So, while you may not want to bother addressing these points, I imagine you will have to eventually for the sake of your reputation.