The problem statement should look like “how do we stop unfriendly AIs”, not “how do we make friendly AIs”.
If the universe is capable of running super-intelligent beings, then eventually either there will be one, or civilization will collapse. Maintaining the current state where there are no minds more intelligent than base humans seems very unlikely to be stable in the long run.
Given that, it seems the problem should be framed as "how do we end up with a super-intelligent being (or beings) that will go on to rearrange the universe the way we prefer?" That is not very different from "how do we make friendly AIs", provided we count things like recursively self-improved uploads as AIs.