This is an excellent description of the argument.
Here is my question: Why bother with the middleman? No one can actually define good, and everyone is constantly checking with ‘human values’ to see what it says! Assuming the universe runs on math and humans share attitudes about some things, there is obviously some platonic entity which precisely describes human values (assuming there isn’t too much contradiction) and can be called “good”. But it doesn’t seem especially parsimonious to reify that concept. Why add it to our ontology?
It’s just semantics, in a sense; but there is a reason we don’t multiply entities unnecessarily.
Well, if you valued cake, you’d want a way to talk about cake and efficiently distinguish cakes from non-cakes, and, especially with regard to planning, to distinguish plans that lead to cake from plans that do not. When you talk about cake, there isn’t really any reification of “the platonic form of cake” going on; “cake” is just a convenient word for a certain kind of confection.
The motivation for humans having a word for goodness is the same.
I don’t necessarily have a problem with using the word “good” so long as everyone understands it isn’t something out there in the world that we’ve discovered—that it’s a creation of our minds, words and behavior—like cake. This is a problem because most of the world doesn’t think that. A lot of times it doesn’t seem like Less Wrong thinks that (but I’m beginning to think that is just non-standard terminology).
Yeah, a lot of the Metaethics Sequence seems to be trying to get to this point.
For my part, it seems easier to just stop using words like “good” if we believe they are likely to be misunderstood, rather than devoting a lot of energy to convincing everyone that they should mean something different by the word (or that the word really means something different from what they think it means, or whatever).
I’m content to say that we value what we currently value, because we currently value it, and asking whether that’s good or not is asking an empty question.
Of course, I do understand the rhetorical value of getting to claim that our AI does good, rather than “merely” claiming that it implements what we currently value.
I’m content to say that we value what we currently value, because we currently value it, and asking whether that’s good or not is asking an empty question.
I am content to say the question is not empty, and if your assumptions lead you to suppose it is, then your assumptions need to be questioned.
You seem to believe that I have arrived at my current position primarily via unquestioned assumptions.
What makes you conclude that?