The only two scenarios that seem to have a chance of avoiding Hansonian outcomes in the very long term, IMO, are the “unlimited economic growth is possible” scenario (which requires us to be wrong about physics, AFAICT) and a very powerful, far-thinking singleton. The selection pressure for expansion is an incredibly strong optimization process, and it could only be kept in check by a stronger optimization process with the goal of keeping it in check. But that’s not a priori impossible, nor is it an especially unlikely subgoal for (say) an FAI.