TheOtherDave, I don’t really want to argue about whether talking about “right” adds value. I suspect it might (i.e., I’m not so confident as you that it doesn’t), but mainly I was trying to argue with Eliezer on his own terms. I do want to correct this:
A CEV-implementing FAI, supposing such a thing is possible, will do what we collectively want done, whatever that turns out to be.
CEV will not do “what we collectively want done”; it will do what’s “right” according to Eliezer’s meta-ethics, namely whatever is coherent among the volitions it extrapolates from humanity, which, as others and I have argued, might turn out to be “nothing”. If you’re proposing that we build an AI that does do “what we collectively want done”, you’d have to define what that means first.
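(To make the worry concrete, here is a toy sketch in Python of “do whatever is coherent among the extrapolated volitions”, reading “coherent” as unanimity. The `extrapolate_volition` function and the example data are purely hypothetical stand-ins, not anything from the CEV proposal itself; the point is only that the common core can be empty.)

```python
# Toy illustration only: "coherence" read as unanimity across
# extrapolated volitions. extrapolate_volition is a hypothetical
# stand-in; CEV itself specifies no such concrete function.

def extrapolate_volition(current_preferences: set[str]) -> set[str]:
    # Placeholder "extrapolation": here it just returns the stated
    # preferences unchanged. A real extrapolation step is the hard,
    # unspecified part of CEV.
    return current_preferences

def coherent_core(population: list[set[str]]) -> set[str]:
    """Return the outcomes every extrapolated volition endorses."""
    extrapolated = [extrapolate_volition(p) for p in population]
    return set.intersection(*extrapolated) if extrapolated else set()

# With even mildly divergent volitions, the coherent core is empty --
# the "might turn out to be nothing" case.
population = [{"A", "B"}, {"B", "C"}, {"C", "A"}]
print(coherent_core(population))  # set() -- nothing is coherent
```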
I don’t really want to argue about whether talking about “right” adds value.
OK. The question I started out with, way at the top of the chain, was precisely about why having a referent for “right” was important, so I will drop that question and everything that descends from it.
As for your correction, I actually don’t understand the distinction you’re drawing, but in any case I agree with you that it might turn out that human volition lacks a coherent core of any significance.
To me, “what we collectively want done” means somehow aggregating (for example, through voting or bargaining) our current preferences. It lacks the elements of extrapolation and coherence that are central to CEV.
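(For contrast, a minimal sketch of the “aggregate current preferences” reading, using a Borda count as the stand-in voting rule. The rule and the ballots are my illustrative choices, not something the comment above commits to; note there is no extrapolation or coherence step, it just tallies what people say they want now.)

```python
from collections import defaultdict

# Toy "what we collectively want done" via simple preference
# aggregation (Borda count). The voting rule and the ballots are
# illustrative choices only.

def borda_winner(ballots: list[list[str]]) -> str:
    """Each ballot ranks options best-first; higher rank earns more points."""
    scores: dict[str, int] = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, option in enumerate(ballot):
            scores[option] += n - rank  # best-ranked option gets n points
    return max(scores, key=scores.get)

# Current, unextrapolated preferences -- no coherence requirement:
ballots = [["A", "B", "C"], ["B", "C", "A"], ["B", "A", "C"]]
print(borda_winner(ballots))  # "B"
```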
Gotcha… that makes sense. Thanks for clarifying.

What is the source of criteria such as voting or bargaining that you suggest? Why poll everyone rather than, say, every prime-indexed citizen? In the end it’s always your judgment about what the right thing to do is.