It’s one of the proposed plans. The main difficulty is that low impact is hard to formalize. For example, if you ask the AI to cure cancer with low impact, it might give people another disease that kills them instead, to keep the global death rate constant. Fully unpacking “low impact” might be almost as hard as the friendliness problem. See this page for more. The LW user who’s doing most work on this now is Stuart Armstrong.
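The offsetting failure above can be made concrete with a toy sketch (entirely hypothetical numbers and a made-up penalty): if "impact" is naively formalized as deviation of a summary statistic like the death rate from its baseline, the agent's highest-scoring plan is to cure cancer *and* release a new disease, since that keeps the statistic unchanged.

```python
# Toy illustration (hypothetical): a naive "low impact" penalty that
# measures impact as deviation of the death rate from baseline.

BASELINE_DEATH_RATE = 0.010  # assumed baseline death rate (made-up number)

def utility(cured_cancer: bool, caused_new_disease: bool) -> float:
    """Task reward for curing cancer, minus a penalty for moving the death rate."""
    death_rate = BASELINE_DEATH_RATE
    if cured_cancer:
        death_rate -= 0.002   # curing cancer lowers the death rate
    if caused_new_disease:
        death_rate += 0.002   # a new disease raises it back
    task_reward = 1.0 if cured_cancer else 0.0
    impact_penalty = 100.0 * abs(death_rate - BASELINE_DEATH_RATE)
    return task_reward - impact_penalty

# Score every plan; the perverse "offsetting" plan wins because the
# summary statistic it is penalized on ends up unchanged.
plans = {
    (cured, disease): utility(cured, disease)
    for cured in (False, True)
    for disease in (False, True)
}
best_plan = max(plans, key=plans.get)
print(best_plan)  # (True, True): cure cancer, but offset it with a new disease
```

The point of the sketch is that the penalty measures the wrong thing: it punishes visible statistical change rather than the agent's causal footprint, which is why fully unpacking "low impact" is so hard.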