I mostly agree, but: You can affect “problem difficulty” by selecting harder or easier problems. It would still be right not to be discouraged about MIRI’s prospects if (1) the hard problems they’re attacking are hard problems that absolutely need to be solved or (2) the hardness of the problems they’re attacking is a necessary consequence of the hardness (or something) of other problems that absolutely need to be solved. But it might turn out, e.g., that the best road to safe AI takes an entirely different path from the ones MIRI is exploring, in which case it would be a reasonable criticism to say that they’re expending a lot of effort attacking intractably hard problems rather than addressing the tractable problems that would actually help.
MIRI would say they don’t have the luxury of choosing easier problems. They think they are saving the world from an imminent crisis.
They might well believe that, but others (e.g., Clarity) might not be persuaded.
We’ll see :)