immediate downsides to what MIRI is currently doing, namely shortening AI timelines
To be clear, based on what I’ve seen you write elsewhere, you think they are shortening AI timelines because the mathematical work on reflection and decision theory would be useful for AIs in general, and is not specific to the problem of friendliness. Is that right?
This isn’t obvious to me. In particular, the reflection work seems much more relevant to creating stable goal structures than to engineering intelligence / optimization power.