most of the issues that MIRI is currently working on are prerequisites for any sort of AI, not just friendly AI
This seems quite likely (or at least the weaker claim, that these results are either necessary for any AI or useless for any AI, seems very likely).
Point of order: Let A = “these results are necessary for any AI” and B = “they are useless for any AI”. It sounds like you’re weakening from A to (A or B) because you feel the probability of B is large, so the probability of A isn’t all that large in absolute terms. But if much of the probability mass of the weaker claim (A or B) comes from B, then it seems more pragmatically useful to talk about (i) the probability of B and (ii) the probability of A given (not B), rather than the probability of (A or B), since qualitative statements about (i) and (ii) are what’s most relevant for policy. In particular, even knowing that “the probability of (A or B) is very high” and “the probability of A is not that high”—or even “is low”—doesn’t tell us whether P(A|not B) is high or low.
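To make that last point concrete, here is a minimal numeric sketch (the specific probabilities are illustrative assumptions, not anyone’s actual estimates). Since A and B cannot both hold, P(A or B) = P(A) + P(B), and both assignments below have P(A or B) very high and P(A) low, yet P(A|not B) is high in one case and low in the other:

```python
# Illustrative only: A = "results necessary for any AI",
# B = "results useless for any AI". A and B are mutually exclusive,
# so P(A or B) = P(A) + P(B) and P(A | not B) = P(A) / (1 - P(B)).

def report(p_a: float, p_b: float) -> None:
    p_a_or_b = p_a + p_b               # disjoint events, so probabilities add
    p_a_given_not_b = p_a / (1 - p_b)  # condition on B being false
    print(f"P(A)={p_a:.2f}  P(B)={p_b:.2f}  "
          f"P(A or B)={p_a_or_b:.2f}  P(A|not B)={p_a_given_not_b:.2f}")

report(p_a=0.09, p_b=0.90)  # P(A or B)=0.99, but P(A|not B)=0.90 (high)
report(p_a=0.01, p_b=0.90)  # P(A or B)=0.91, but P(A|not B)=0.10 (low)
```

In both cases P(A or B) exceeds 0.9 and P(A) is below 0.1, so statements at that level of aggregation can’t distinguish the two worlds, even though they call for very different policies.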