But why do you care about the concept of a bachelor? What makes you pick it out of the space of ideas and concepts as worthy of discussion and consideration?
Well, “bachelor” was just an example of a word whose meaning you don’t know but want to know. The important thing here is that it has a meaning, not how useful the concept is.
But I think you actually want to talk about the meaning of terms like “good”. Apparently you now concede that they are meaningful (are associated with anticipated experiences) and instead claim that the concept of “good” is useless. That is surprising. There is arguably nothing more important than ethics, than the world being in a good state or on a good trajectory. So it is obvious that the term “good” is useful, especially because it is exactly what an aligned superintelligence should be targeted at. After all, it is no accident that EY came up with extrapolated volition as an ethical theory for solving the problem of what a superintelligence should be aligned to. An ASI should do good things and shouldn’t do bad things, and the problem is making the ASI care about being good rather than about something else, like making paperclips.
Regarding the SEP quote: It doesn’t argue that moral internalism is part of moral realism, which is what you were originally objecting to. But we need not even use the term “moral realism”; we only need the claim that statements about what is good or bad have non-trivial truth values, i.e. aren’t purely subjective, mere expressions of applause, meaningless, or the like. This is a semantic question about what terms like “good” mean.
For moral realism to be true in the sense which most people mean when they talk about it, “good” would have to have an observer-independent meaning. That is, it would have to not only be the case that you personally feel that it means some particular thing, but also that people who feel it to mean some other thing are objectively mistaken, for reasons that exist outside of your personal judgement of what is or isn’t good.
(Also, throughout this discussion and the previous one you’ve misunderstood what it means for beliefs to pay rent in anticipated experiences. For a belief to pay rent, it should not only predict some set of sensory experiences but predict a different set of sensory experiences than would a model not including it. Let me bring in the opening paragraphs of the post:
Thus begins the ancient parable:
If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”
If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.
Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?
Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.
If you call increasing-welfare “good” and I call honoring-ancestors “good”, our models do not make different predictions about what will happen, only about which things should be assigned the label “good”. That is what it means for a belief to not pay rent.)
For moral realism to be true in the sense which most people mean when they talk about it, “good” would have to have an observer-independent meaning. That is, it would have to not only be the case that you personally feel that it means some particular thing, but also that people who feel it to mean some other thing are objectively mistaken, for reasons that exist outside of your personal judgement of what is or isn’t good.
That would only be a case of ambiguity (one word used with two different meanings). If by “good” you mean what people usually mean by “chair”, that doesn’t imply anti-realism, just likely misunderstandings.
Assume you are a realist about rocks, but call them trees. That wouldn’t be a contradiction. Realism has nothing to do with “observer-independent meaning”.
For a belief to pay rent, it should not only predict some set of sensory experiences but predict a different set of sensory experiences than would a model not including it.
This doesn’t make sense. A model doesn’t have beliefs, and if there is no belief, there is nothing for it (the belief) to predict. Instead, for a belief to “pay rent” it is necessary and sufficient that believing it yields different predictions than believing its negation.
If you call increasing-welfare “good” and I call honoring-ancestors “good”, our models do not make different predictions about what will happen, only about which things should be assigned the label “good”. That is what it means for a belief to not pay rent.
Compare:
If you call a boulder a “tree” and I call a plant with a woody trunk a “tree”, our models do not make different predictions about what will happen, only about which things should be assigned the label “tree”. That is what it means for a belief to not pay rent.
Of course our beliefs pay rent here; they just pay different rent. If we both express our beliefs with “There is a tree behind the house”, then we simply have two different beliefs, because we expect different experiences. That has nothing to do with anti-realism about trees.