> And secondly, EA simply has the right values.
I think this is false: EA is too heterogeneous to count as having any single set of values.
I think Open Phil's not even trying to bridge the gap between:
> (a) the reasons we believe what we believe and (b) the reasons we’re able to share publicly and relatively efficiently.
is deeply problematic.
The reasons given in the post you link to are, to my mind, not convincing at all. We are talking about directing large sums of money to AI research, money that could have done a lot of good if directed differently. The objection is that giving the justification for the decision would simply take too long, and that any objections from non-AI-specialists would not be worth listening to.
But given that the sums are large, spending time explaining the decision is crucial: if the reasoning does not support the conclusion, it is imperative that this be discovered. And limiting input to AI experts introduces what I would have thought is a totally unacceptable selection effect: these people are bound to be much more likely than average to believe that directing money to AI research is very valuable.