All formulations of human value are massively underspecified.
I agree that expecting humans to know what would be good for humans in general is a mistake. The problem is that we also can't get an honest report of what people think would be good for them personally, because lying is too useful and humans value things hypocritically.