In that case, not really… what I was actually curious about is why “right” having a referent is important.
I can make the practical case: If “right” refers to nothing, and we design an FAI to do what is right, then it will do nothing. We want the FAI to do something instead of nothing, so “right” having a referent is important.
Or the philosophical case: If “right” refers to nothing, then “it’s right for me to save that child” would be equivalent to the null sentence. From introspection I think I must mean something non-empty when I say something like that.
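To make the practical case concrete, here is a toy sketch (Python, with invented names like choose_action and is_right; this is an illustration, not anyone’s actual design): an agent that acts only when some available action satisfies its “right” predicate. If that predicate picks out nothing, the agent does nothing.

```python
# Hypothetical sketch, not anyone's actual design: an agent whose goal is
# "do whatever is right." If the predicate picking out "right" actions has
# an empty extension, the agent never finds anything to do.

from typing import Callable, Iterable, Optional


def choose_action(actions: Iterable[str],
                  is_right: Callable[[str], bool]) -> Optional[str]:
    """Return the first available action the predicate endorses, else None."""
    for action in actions:
        if is_right(action):
            return action
    return None  # "right" picked out nothing, so the agent does nothing


# A predicate that refers to nothing: no action is ever endorsed.
print(choose_action(["save_child", "walk_away"], lambda a: False))
# -> None

# A predicate with a non-empty referent: the agent acts.
print(choose_action(["save_child", "walk_away"], lambda a: a == "save_child"))
# -> save_child
```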
Do either of these answer your question?
Congratulations, you just solved the Fermi paradox.
(sigh) Sure, agreed… if our intention is to build an FAI to do what is right, it’s important that “what is right” mean something. And I could ask why we should build an FAI that way, and you could tell me that that’s what it means to be Friendly, and on and on.
I’m not trying to be pedantic here, but this does seem sort of pointlessly circular… a discussion about words rather than things.
When a Jewish theist says “God has commanded me to save that child,” they may be entirely sincere, but that doesn’t in and of itself constitute evidence that “God” has a referent, let alone that the referent of “God” (supposing it exists) actually so commanded them.
When you say “It’s right for me to save that child,” the situation may be different, but the mere fact that you can utter that sentence with sincerity doesn’t constitute evidence of difference.
If we really want to save children, I would say we should talk about how most effectively to save children, and design our systems to save children, and that talking about whether God commanded us to save children or whether it’s right to save children adds nothing of value to the process.
More generally, if we actually knew everything we wanted, as individuals and as groups, then we could talk about how most effectively to achieve that and design our FAIs to achieve that, and discussions about whether it’s right would seem as extraneous as discussions about whether it’s God-willed.
The problem is that we don’t know what we want. So we attach labels to that-thing-we-don’t-understand, and over time those labels acquire all kinds of connotations that make discussion difficult. The analogy to theism applies here as well.
At some point, it becomes useful to discard those labels.
A CEV-implementing FAI, supposing such a thing is possible, will do what we collectively want done, whatever that turns out to be. An FAI implementing some other strategy will do something else. Whether those things are right is just as useless to talk about as whether they are God’s will; those terms add nothing to the conversation.
TheOtherDave, I don’t really want to argue about whether talking about “right” adds value. I suspect it might (i.e., I’m not so confident as you that it doesn’t), but mainly I was trying to argue with Eliezer on his own terms. I do want to correct this:

A CEV-implementing FAI, supposing such a thing is possible, will do what we collectively want done, whatever that turns out to be.

CEV will not do “what we collectively want done”; it will do what’s “right” according to Eliezer’s meta-ethics, which is whatever is coherent amongst the volitions it extrapolates from humanity, and which, as others and I have argued, might turn out to be “nothing”. If you’re proposing that we build an AI that does do “what we collectively want done”, you’d have to define what that means first.
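As a toy illustration of that distinction (every name here is invented for illustration, and the “extrapolate” step is a stand-in, not Eliezer’s actual construction): extrapolate each person’s volition somehow, then keep only what is shared across all of the extrapolations; that shared core is what CEV acts on, and it can perfectly well be empty.

```python
# Toy illustration only; extrapolate() is a placeholder, not the real CEV construction.

def extrapolate(volition):
    """Stand-in for 'what this person would want if they knew more, thought faster,
    were more the person they wished to be'. Here it returns the volition unchanged."""
    return volition


def coherent_extrapolated_volition(volitions):
    """Keep only what is coherent (here: literally shared) across everyone's
    extrapolated volition."""
    extrapolated = [extrapolate(v) for v in volitions]
    return set.intersection(*extrapolated) if extrapolated else set()


people = [{"save_children", "explore_space"},
          {"save_children", "stay_home"},
          {"maximize_paperclips"}]
print(coherent_extrapolated_volition(people))  # -> set(): the coherent core is empty
```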
I don’t really want to argue about whether talking about “right” adds value.

OK. The question I started out with, way at the top of the chain, was precisely about why having a referent for “right” was important, so I will drop that question and everything that descends from it.
As for your correction, I actually don’t understand the distinction you’re drawing, but in any case I agree with you that it might turn out that human volition lacks a coherent core of any significance.
To me, “what we collectively want done” means somehow aggregating (for example, through voting or bargaining) our current preferences. It lacks the elements of extrapolation and coherence that are central to CEV.
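For instance, a minimal sketch of that kind of aggregation (plurality voting over stated preference orderings; the function name and ballots are made up), with no extrapolation and no coherence step at all:

```python
# Made-up minimal sketch of "aggregating our current preferences" by plain
# plurality voting, with no extrapolation and no coherence requirement.

from collections import Counter


def plurality_vote(ballots):
    """Each voter's top choice gets one vote; the most-voted option wins."""
    tally = Counter(ballot[0] for ballot in ballots if ballot)
    winner, _ = tally.most_common(1)[0]
    return winner


ballots = [["save_children", "explore_space"],
           ["save_children", "stay_home"],
           ["maximize_paperclips", "save_children"]]
print(plurality_vote(ballots))  # -> save_children
```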
Gotcha… that makes sense. Thanks for clarifying.
What is the source of the criteria, such as voting or bargaining, that you suggest? Why poll everyone, rather than, say, every prime-indexed citizen? It’s always your judgment about what the right thing to do is.