“Arguably, you can’t fully align with inconsistent preferences”
My intuitions tend to agree, but I’m also inclined to ask “why not?” E.g., even if my preferences are absurdly cyclical, if we get AGI to imitate me perfectly (or me + faster thinking + more information), in what sense of the word is it “unaligned” with me? More generally, what is it about these other coherence conditions that prevents meaningful “alignment”? (Maybe this opens a big discursive can of worms, but I actually haven’t seen it discussed at a serious level, so I’m quite happy to just read references.)
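To make “absurdly cyclical” concrete, here is a minimal sketch (the options and relation are hypothetical): a strict preference cycle A ≻ B ≻ C ≻ A, and a check showing the relation contains a cycle, so no real-valued utility function can represent it (that would require u(A) > u(B) > u(C) > u(A)).

```python
# Hypothetical cyclic preference relation: (x, y) means x is strictly
# preferred to y.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def has_cycle(prefs):
    """Detect a cycle by following strict-preference edges."""
    graph = {}
    for x, y in prefs:
        graph.setdefault(x, []).append(y)

    def visit(node, seen):
        if node in seen:
            return True
        return any(visit(nxt, seen | {node}) for nxt in graph.get(node, []))

    return any(visit(start, frozenset()) for start in graph)

print(has_cycle(prefers))  # True: no consistent utility assignment exists
```

An imitation-based AGI could still reproduce my choices under such a relation; the cycle only blocks summarizing them as maximizing a single utility function.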
I’ve been thinking about whether you can have AGI that only aims for Pareto improvements, or a weaker formulation of that, in order to align with inconsistent values among groups of people. This is strongly based on Eric Drexler’s thoughts on what he has called “pareto-topia”. (I haven’t gotten anywhere thinking about this because I’m spending my time on other things.)
Yeah, I think something like this is pretty important. Another reason is that humans inherently don’t like to be told, top-down, that X is the optimal solution. A utilitarian AI might redistribute property forcefully, whereas a Pareto-improving AI would seek to compensate people.
An even more stringent requirement which seems potentially sensible: only Pareto improvements which all parties both understand and endorse. (I.e., there should be something like consent.) This seems very sensible with small numbers of people, but unfortunately, it seems infeasible for large numbers of people (given the way all actions have side effects for many, many people).
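The constraint discussed above can be sketched in a few lines. This is a hypothetical toy model, not anyone’s proposed implementation: utilities are given as dicts per person, a Pareto improvement makes no one worse off and someone better off, and the stricter “understood + endorsed” requirement additionally demands consent from every affected party.

```python
def is_pareto_improvement(current, proposed):
    """current/proposed: dicts mapping each person to their utility."""
    no_one_worse = all(proposed[p] >= current[p] for p in current)
    someone_better = any(proposed[p] > current[p] for p in current)
    return no_one_worse and someone_better

def acceptable(current, proposed, endorsements):
    """Stricter test: Pareto improvement plus consent from everyone.

    endorsements: set of people who understand and endorse the change.
    """
    return (is_pareto_improvement(current, proposed)
            and endorsements >= current.keys())

status_quo = {"alice": 5, "bob": 3}
plan = {"alice": 6, "bob": 3}  # alice gains, bob unaffected

print(is_pareto_improvement(status_quo, plan))  # True
print(acceptable(status_quo, plan, {"alice"}))  # False: bob hasn't consented
```

The infeasibility worry shows up directly: as the population grows, `endorsements >= current.keys()` requires consent from everyone touched by an action’s side effects, which is almost everyone.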
See my other reply about pseudo-Pareto improvements—but I think the “understood + endorsed” idea is really important, and worth further thought.