A solution to the problems of ethics, including the repugnant conclusion, preference aggregation, etc.
Isn’t this one of the problems you can let the FAI solve?
Actually, the repugnant conclusion yes; preference aggregation no, because you have to specify how to aggregate individual humans’ preferences in order to define the FAI’s goal in the first place.
And what if preferences cannot be measured by a common “ruler”? What then?
I agree that preference aggregation is hard. Wei Dai and Nick Bostrom have both made proposals based on agents negotiating under some deadline or constraint.
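To make the negotiation-under-a-deadline idea concrete, here is a minimal toy sketch. It is not Wei Dai’s or Bostrom’s actual proposal; the `Agent` class, the concession rule, and the alternating-offers protocol are all my own illustrative assumptions.

```python
# Toy sketch: deadline-constrained negotiation over a finite set of outcomes.
# Illustrative only; the concession rule below is an assumed stand-in, not
# any specific published proposal.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Agent:
    name: str
    utility: Callable[[str], float]  # private; never compared across agents

    def accept(self, offer: str, round_: int, deadline: int,
               fallback: str) -> bool:
        # Assumed concession rule: as the deadline nears, accept any offer
        # that beats the fallback by a shrinking margin, measured in this
        # agent's own utility units. Only the agent's own utilities are
        # consulted, so no common "ruler" across agents is needed.
        slack = (deadline - round_) / deadline
        return self.utility(offer) >= self.utility(fallback) + slack


def negotiate(agents: Sequence[Agent], outcomes: Sequence[str],
              fallback: str, deadline: int) -> str:
    """Alternating offers: each round one agent proposes its favourite
    outcome; if every agent accepts, that is the agreement. If the
    deadline passes with no agreement, everyone gets the fallback."""
    for round_ in range(deadline):
        proposer = agents[round_ % len(agents)]
        offer = max(outcomes, key=proposer.utility)
        if all(a.accept(offer, round_, deadline, fallback) for a in agents):
            return offer
    return fallback


if __name__ == "__main__":
    a = Agent("A", {"x": 3.0, "y": 1.0, "z": 0.0}.__getitem__)
    b = Agent("B", {"x": 0.5, "y": 2.0, "z": 0.0}.__getitem__)
    # A would most prefer "x", B would most prefer "y"; under the deadline
    # pressure they settle on "y" rather than falling back to "z".
    print(negotiate([a, b], ["x", "y", "z"], fallback="z", deadline=10))
```

Note how this partially answers the “common ruler” question above: each agent only issues accept/reject verdicts based on its own utility function, so utilities are never compared across agents. The deadline plus a disagreement fallback is what forces concessions, rather than any interpersonal scale.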