I will try, one more[1] time, and I will keep this brief.
I think I have identified the confusion here. Assume you don’t know what “bachelor” means, and you ask me what evidence I associate with that term. And I reply: if I believe something is a bachelor, I anticipate evidence that confirms that it is an unmarried man. Now you could reply that this is simply saying “‘bachelor’ fulfills the conditions of membership”. But no, I have given you a non-trivial definition of the term, and if you already knew what “unmarried” and “man” meant (what evidence to expect if those terms apply), you now also know what to anticipate for “bachelor”, i.e. what the term “bachelor” means. Giving a definition for X is not the same as merely saying “X fulfills the conditions of membership”.
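(As a concrete illustration of that last point, here is a minimal sketch in Python; the Person fields and the predicates is_man and is_married are made-up stand-ins for terms whose associated evidence is assumed to be already known, not anything from the discussion itself.)

```python
from dataclasses import dataclass

# Made-up stand-in for the evidence one could observe about a person.
@dataclass
class Person:
    adult: bool
    male: bool
    has_spouse: bool

# Terms whose meaning (anticipated evidence) is assumed to be known already.
def is_man(x: Person) -> bool:
    return x.adult and x.male

def is_married(x: Person) -> bool:
    return x.has_spouse

# A non-trivial definition: once you know what evidence to anticipate for
# "man" and "married", this single line tells you what evidence to anticipate
# for "bachelor", unlike the empty statement that bachelors merely
# "fulfill the conditions of membership".
def is_bachelor(x: Person) -> bool:
    return is_man(x) and not is_married(x)

print(is_bachelor(Person(adult=True, male=True, has_spouse=False)))  # True
```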
It is not just my interpretation, that is how the term “moral realism” is commonly defined in philosophy, e.g. in the SEP.
The SEP entry for “moral realism” is, unfortunately, not sufficient to resolve issues regarding what it means or how useful a concept it is. I would point you to the very introduction of the SEP entry on moral anti-realism:
It might be expected that it would suffice for the entry for “moral anti-realism” to contain only some links to other entries in this encyclopedia. It could contain a link to “moral realism” and stipulate the negation of the view described there. Alternatively, it could have links to the entries “anti-realism” and “morality” and could stipulate the conjunction of the materials contained therein. The fact that neither of these approaches would be adequate—and, more strikingly, that following the two procedures would yield substantively non-equivalent results—reveals the contentious and unsettled nature of the topic.
“Anti-realism,” “non-realism,” and “irrealism” may for most purposes be treated as synonymous. Occasionally, distinctions have been suggested for local pedagogic reasons (see, e.g., Wright 1988; Dreier 2004), but no such distinction has generally taken hold. (“Quasi-realism” denotes something very different, to be described below.) All three terms are to be defined in opposition to realism, but since there is no consensus on how “realism” is to be understood, “anti-realism” fares no better. Crispin Wright (1992: 1) comments that “if there ever was a consensus of understanding about ‘realism’, as a philosophical term of art, it has undoubtedly been fragmented by the pressures exerted by the various debates—so much so that a philosopher who asserts that she is a realist about theoretical science, for example, or ethics, has probably, for most philosophical audiences, accomplished little more than to clear her throat.”
because of reasons that go beyond knowing how to answer the question “is he a bachelor?” or “does he have the properties tautologically contained within the status of bachelors?”
But why do you care about the concept of a bachelor? What makes you pick it out of the space of ideas and concepts as worthy of discussion and consideration? In my conception, it is the fact that you believe it carves reality at the joints by allowing you to have relevant and useful anticipated experiences about the world outside of what is contained inside the very definition or meaning of the word. If we did not know, due to personal experience, that it was useful to know whether someone was a bachelor[2], we would not talk about it; it would be just as arbitrary and useless a subset of idea-space as the category of “bleggs” that is generated for no coherent reason whatsoever, or the random category of “r398t”s that I just made up and which contains only apples, weasels, and Ron Weasley.
Well, “bachelor” was just an example of a word whose meaning you don’t know but want to know. The important thing here is that it has a meaning, not how useful the concept is.
But I think you actually want to talk about the meaning of terms like “good”. Apparently you now concede that they are meaningful (are associated with anticipated experiences) and instead claim that the concept of “good” is useless. That is surprising. There is arguably nothing more important than ethics, than the world being in a good state or on a good trajectory. So it is obvious that the term “good” is useful, especially because it is exactly what an aligned superintelligence should be targeted at. After all, it is not an accident that EY came up with extrapolated volition as an ethical theory for solving the problem of what a superintelligence should be aligned to. An ASI shouldn’t do bad things and should do good things, and the problem is making the ASI care about being good rather than about something else, like making paperclips.
Regarding the SEP quote: It doesn’t argue that moral internalism is part of moral realism, which was what you originally were objecting to. But we need not even use the term “moral realism”, we only need the claim that statements on what is good or bad have non-trivial truth values, i.e. aren’t purely subjective, or mere expressions of applause, or meaningless, or the like. This is a semantic question about what terms like “good” mean.
For moral realism to be true in the sense which most people mean when they talk about it, “good” would have to have an observer-independent meaning. That is, it would have to not only be the case that you personally feel that it means some particular thing, but also that people who feel it to mean some other thing are objectively mistaken, for reasons that exist outside of your personal judgement of what is or isn’t good.
(Also, throughout this discussion and the previous one you’ve misunderstood what it means for beliefs to pay rent in anticipated experiences. For a belief to pay rent, it should not only predict some set of sensory experiences but predict a different set of sensory experiences than would a model not including it. Let me bring in the opening paragraphs of the post:
Thus begins the ancient parable:
If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”
If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.
Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?
Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.
If you call increasing-welfare “good” and I call honoring-ancestors “good”, our models do not make different predictions about what will happen, only about which things should be assigned the label “good”. That is what it means for a belief to not pay rent.)
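(A rough sketch of this claim, with entirely made-up events and labels: the two agents below share one model of what will happen and differ only in which outcomes they attach the label “good” to.)

```python
# Made-up shared model: both agents expect exactly the same things to happen.
shared_predictions = {
    "donation made": "average welfare rises",
    "rite performed": "the ancestral shrine is maintained",
}

def welfare_agent_labels_good(outcome: str) -> bool:
    return "welfare" in outcome            # "good" = increasing welfare

def ancestor_agent_labels_good(outcome: str) -> bool:
    return "shrine" in outcome             # "good" = honoring ancestors

# The disagreement is only about labels, never about anticipated experiences.
for event, outcome in shared_predictions.items():
    print(event, "->", outcome,
          "| welfare-agent calls it good:", welfare_agent_labels_good(outcome),
          "| ancestor-agent calls it good:", ancestor_agent_labels_good(outcome))
```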
For moral realism to be true in the sense which most people mean when they talk about it, “good” would have to have an observer-independent meaning. That is, it would have to not only be the case that you personally feel that it means some particular thing, but also that people who feel it to mean some other thing are objectively mistaken, for reasons that exist outside of your personal judgement of what is or isn’t good.
That would only be a case of ambiguity (one word used with two different meanings). If by “good” you mean what people usually mean by “chair”, this doesn’t imply anti-realism, just likely misunderstandings.
Assume you are a realist about rocks, but call them trees. That wouldn’t be a contradiction. Realism has nothing to do with “observer-independent meaning”.
For a belief to pay rent, it should not only predict some set of sensory experiences but predict a different set of sensory experiences than would a model not including it.
This doesn’t make sense. A model doesn’t have beliefs, and if the belief is absent, there is nothing (no belief) left to make predictions. Instead, for a belief to “pay rent” it is necessary and sufficient that believing it leads to different predictions than believing its negation would.
If you call increasing-welfare “good” and I call honoring-ancestors “good”, our models do not make different predictions about what will happen, only about which things should be assigned the label “good”. That is what it means for a belief to not pay rent.
Compare:
If you call a boulder a “tree” and I call a plant with a woody trunk a “tree”, our models do not make different predictions about what will happen, only about which things should be assigned the label “tree”. That is what it means for a belief to not pay rent.
Of course our beliefs pay rent here; they just pay different rent. If we both express our beliefs with “There is a tree behind the house”, then we simply have two different beliefs, because we anticipate different experiences. That has nothing to do with anti-realism about trees.
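(Again a made-up sketch, to illustrate: if the word “tree” is attached to two different concepts, the same sentence expresses two different beliefs, each anticipating its own observations, so both pay rent.)

```python
# Made-up anticipated observations for the two beliefs that the sentence
# "there is a tree behind the house" would express under the two usages.
def anticipated_observations(belief: str) -> set:
    if belief == "a boulder is behind the house":
        return {"grey mass on camera", "hard surface when touched", "no leaves"}
    if belief == "a woody-trunked plant is behind the house":
        return {"brown trunk on camera", "leaves or needles", "growth rings if cut"}
    return set()

your_belief = "a boulder is behind the house"            # calling boulders "trees"
my_belief = "a woody-trunked plant is behind the house"  # the ordinary usage

# Same sentence, two different beliefs, two different sets of anticipated
# experiences: both beliefs pay rent, just different rent.
print(anticipated_observations(your_belief) != anticipated_observations(my_belief))  # True
```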
[1] and possibly final