What are the strategic reasons for prioritizing work on intermediate difficulty problems and “easy safety techniques” at this time?

Doesn’t this part of the comment answer your question?
We can very easily “grab probability mass” in relatively optimistic worlds. From our perspective of assigning non-trivial probability mass to the optimistic worlds, there’s enormous opportunity to do work that, say, one might think moves us from a 20% chance of things going well to a 30% chance of things going well. This makes it the most efficient option on the present margin.
It sounds like they think it’s easier to make progress on research that will help in scenarios where alignment turns out not to be that hard, so they’re focusing there because it seems to have the highest expected value (EV).
Seems reasonable to me. (Though a full EV analysis would also have to account for how neglected different kinds of research are, among other factors.)
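To make the marginal-EV comparison concrete, here is a minimal sketch in Python. All the numbers (the probabilities of being in each world, and the before/after chances of a good outcome) are hypothetical placeholders chosen to illustrate the shape of the argument, not figures from the comment:

```python
# Toy marginal-EV comparison across alignment-difficulty scenarios.
# Every number below is a hypothetical illustration, not a claim
# from the original comment.

scenarios = {
    # name: (P(we are in this world),
    #        P(good outcome) now,
    #        P(good outcome) after one unit of work)
    "easy":         (0.4, 0.20, 0.30),   # probability mass is cheap to "grab"
    "intermediate": (0.4, 0.10, 0.13),
    "hard":         (0.2, 0.02, 0.025),  # progress is much harder to buy
}

def marginal_ev(p_world, p_before, p_after):
    """Expected gain in P(things go well) from one unit of work,
    weighted by the probability that we are in that world."""
    return p_world * (p_after - p_before)

for name, (p_world, before, after) in scenarios.items():
    print(f"{name:>12}: marginal EV = {marginal_ev(p_world, before, after):.3f}")
```

Under these made-up numbers the easy world dominates on the margin, which is the comment’s claim; folding in neglectedness and tractability adjustments, as noted above, could shrink or reverse that gap.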