I think it’s unique to MIRI in this sense: it makes sense to expect MIRI to explain how its research will accomplish its mission of making machine intelligence benefit humanity, whereas it doesn’t make sense to expect global health charities to explain why improving global health makes the far future go better.
Right. Very few charities are even claiming to be good for the far future. So there’s an asymmetry between MIRI and other charities w.r.t. responsibility to explain plausible effects on the far future. But among parties (including MIRI) who care principally about the far future and are trying to do something about it, there seems to be no such asymmetry — except for other reasons, e.g. asymmetry in resource use.
Yes.