How can you get a superintelligent AI aligned with human values? There are two pathways that I often hear discussed. The first sees a general alignment problem—how to get a powerful AI to safely do anything—which, once we’ve solved, we can point towards human values. The second perspective is that we can only get alignment by targeting human values—these values must be aimed at, from the start of the process.
Some people have argued that the best path to AI alignment is to use human uploads as the AI. This seems like a radical example of the second approach you describe here.