I don’t necessarily have a problem with using the word “good” so long as everyone understands it isn’t something out there in the world that we’ve discovered—that it’s a creation of our minds, words and behavior—like cake. This is a problem because most of the world doesn’t think that. A lot of times it doesn’t seem like Less Wrong thinks that (but I’m beginning to think that is just non-standard terminology).
Yeah, a lot of the Metaethics Sequence seems to be trying to get to this point.
For my part, it seems easier to just stop using words like “good” if we believe they are likely to be misunderstood, rather than devoting a lot of energy to convincing everyone that they should mean something different by the word (or that the word really means something different from what they think it means, or whatever).
I’m content to say that we value what we currently value, because we currently value it, and asking whether that’s good or not is asking an empty question.
Of course, I do understand the rhetorical value of getting to claim that our AI does good, rather than “merely” claiming that it implements what we currently value.
I’m content to say that we value what we currently value, because we currently value it, and asking whether that’s good or not is asking an empty question.
I am content to say the question is not empty, and if your assumptions lead you to suppose it is, then your assumptions need to be questioned.
I am content to say the question is not empty, and if your assumptions lead you to suppose it is, then your assumptions need to be questioned.
You seem to believe that I have arrived at my current position primarily via unquestioned assumptions.
What makes you conclude that?