The way I imagined it, people inside the VR wouldn’t be able to change the AI’s values. Population ethics seems like a problem that people can solve by themselves, negotiating with each other under the VR’s rules, without help from AI.
CEV requires extracting all human preferences, extrapolating them, determining coherence, and finding a general way to map them to physics. (We'd need to either do it ourselves or teach the AI how to do it; the difference doesn't matter to the argument.) The approach in my post skips most of these tasks by letting humans describe a nice normal world directly, and requires mapping only one thing (consciousness) to physics. Though I agree with you that the loss of potential utility is huge, the idea is intended as a kind of lower bound.