I don’t see how that would help at all, and pure busywork is silly when you have lots of things to do that are positive-EV but probably low-impact.
MIRI “doesn’t know what to do” in the sense that we don’t see a strategy with macroscopic probability of saving the world, and the most promising ones with microscopic probability are very diverse and tend to violate or sidestep our current models in various ways, such that it’s hard to pick actions that help much with those scenarios as a class.
That’s different from MIRI “not knowing what to do” in the sense of having no ideas for local actions that are worth trying on EV grounds. (Though a lot of these look like encouraging non-MIRI people to try lots of things and build skills and models in ways that might change the strategic situation down the road.)
(Also, I’m mainly trying to describe Nate and Eliezer’s views here. Other MIRI researchers are more optimistic about some of the technical work we’re doing, AFAIK.)