I don’t know how to explain “actual meaning”, but it seems intuitively obvious to me that the actual meaning of “murder is wrong” is not “murder is forbidden by Yahweh”, even if the speaker of the sentence believes that murder is wrong because murder is forbidden by Yahweh. Do you disagree with this?
First I specified that one way we can talk about morality is to stipulate what we mean by terms like ‘morally good’, so as to resolve debates about morality in the same way that we resolve a hypothetical debate about ‘sound’ by stipulating our definitions of ‘sound.’
But the way we actually resolved the debate about ‘sound’ is by reaching the understanding that there are two distinct concepts (acoustic vibrations and auditory experience) that are related in a certain way and also happen to share the same signifier. If, prior to reaching this understanding, you ask people to stipulate a definition for ‘sound’ when they use it, they will give you confused answers. I think saying “let’s resolve confusions in metaethics by asking people to stipulate definitions for ‘morally good’”, before we reach a similar level of understanding regarding morality, is likewise to put the cart before the horse.
I don’t know how to explain “actual meaning”, but it seems intuitively obvious to me that the actual meaning of “murder is wrong” is not “murder is forbidden by Yahweh”, even if the speaker of the sentence believes that murder is wrong because murder is forbidden by Yahweh.
That doesn’t seem intuitively obvious to me, which illustrates one reason why I prefer to taboo terms rather than bash my intuitions against the intuitions of others in an endless game of intuitionist conceptual analysis. :)
Perhaps the most common ‘foundational’ family of theories of meaning in linguistics and philosophy of language belongs to the mentalist program, according to which semantic content is determined by the mental contents of the speaker, not by an abstract analysis of symbol forms taken out of context from their speaker. One straightforward application of a mentalist approach to meaning would conclude that if the speaker was assuming (or mentally representing) a judgment of moral wrongness in the sense of forbidden-by-God, then the meaning of the speaker’s sentence refers in part to the demands of an imagined deity.
But the way we actually resolved the debate about ‘sound’ is by reaching the understanding that there are two distinct concepts (acoustic vibrations and auditory experience) that are related in a certain way and also happen to share the same signifier. If, prior to reaching this understanding, you ask people to stipulate a definition for ‘sound’ when they use it, they will give you confused answers. I think saying “let’s resolve confusions in metaethics by asking people to stipulate definitions for ‘morally good’”, before we reach a similar level of understanding regarding morality, is likewise to put the cart before the horse.
But “reaching this understanding” with regard to morality was precisely the goal of ‘Conceptual Analysis and Moral Theory’ and ‘Pluralistic Moral Reductionism.’ I repeatedly made the point that people regularly use a narrow family of signifiers (‘morally good’, ‘morally right’, etc.) to call out a wide range of distinct concepts (divine attitudes, consequentialist predictions, deontological judgments, etc.), and that this leads to exactly the kind of confusion encountered by two people who are both using the signifier ‘sound’ to call upon two distinct concepts (acoustic vibrations and auditory experience).
I repeatedly made the point that people regularly use a narrow family of signifiers (‘morally good’, ‘morally right’, etc.) to call out a wide range of distinct concepts (divine attitudes, consequentialist predictions, deontological judgments, etc.), and that this leads to exactly the kind of confusion encountered by two people who are both using the signifier ‘sound’ to call upon two distinct concepts (acoustic vibrations and auditory experience).
With regard to “sound”, the two concepts are complementary, and people can easily agree that “sound” sometimes refers to one or the other or often both of these concepts. The same is not true in the “morality” case. The concepts you list seem mutually exclusive, and most people have a strong intuition that “morality” can correctly refer to at most one of them. For example, a consequentialist will argue that a deontologist is wrong when he asserts that “morality” means “adhering to rules X, Y, Z”. Similarly, a divine command theorist will not answer “well, that’s true” if an egoist says “murdering Bob (in a way that serves my interests) is right, and I stipulate ‘right’ to mean ‘serving my interests’”.
It appears to me that the confusion here is not caused mainly by linguistic ambiguity, i.e., people using the same word to refer to different things, which can be easily cleared up once pointed out. I see the situation as being closer to the following: in many cases, people are using “morality” to refer to the same concept, and are disagreeing over the nature of that concept. Some people think it’s equivalent to or closely related to the concept of divine attitudes, and others think it has more to do with the well-being of conscious creatures, etc.
I see the situation as being closer to the following: in many cases, people are using “morality” to refer to the same concept, and are disagreeing over the nature of that concept.
When many people agree that murder is wrong but disagree on the reasons why, you can argue that they’re referring to the same concept of morality but confused about its nature. But what about less clear-cut statements, like “women should be able to vote”? Many people in the past would’ve disagreed with that. Would you say they’re referring to a different concept of morality?
I’m not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties. Do you know how to elucidate that?
I tried to explain some of the causes of persistent moral debate (as opposed to, e.g., the debate over ‘sound’) in this way:
The problem may be worse for moral terms than for (say) art terms. Moral terms have more powerful connotations than art terms, and are thus a greater attractor for sneaking in connotations. Moral terms are used to persuade. “It’s just wrong!” the moralist cries, “I don’t care what definition you’re using right now. It’s just wrong: don’t do it.”
Moral discourse is rife with motivated cognition. This is part of why, I suspect, people resist dissolving moral debates even while they have no trouble dissolving the ‘tree falling in a forest’ debate.
I’m not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties. Do you know how to elucidate that?
Let me try an analogy. Consider someone who believes in the phlogiston theory of fire, and another person who believes in the oxidation theory. They are having a substantive disagreement about the nature of fire, and not merely causing unnecessary confusion by using the same word “fire” to refer to different things. And if the phlogiston theorist were to say “by ‘fire’ I mean the release of phlogiston” then that would just be wrong, and would be adding to the confusion instead of helping to resolve it.
I think the situation with “morality” is closer to this than to the “sound” example.
(ETA: I could also try to define “same concept” more directly, for example as occupying roughly the same position in the graph of relationships between one’s concepts, or playing approximately the same role in one’s cognitive algorithms, but I’d rather not take an exact position on what “same concept” means if I can avoid it, since I have mostly just an intuitive understanding of it.)
This is the exact debate currently being hashed out by Richard Joyce and Stephen Finlay (whom I interviewed here). A while back I wrote an article that can serve as a good entry point into the debate, here. A response from Joyce is here and here. Finlay replies again here.
I tend to side with Finlay, though I suspect not for all the same reasons. Recently, Joyce has admitted that both languages can work, but he’ll (personally) talk the language of error theory rather than the language of moral naturalism.
I’m having trouble understanding how the debate between Joyce and Finlay, over Error Theory, is the same as ours. (Did you perhaps reply to the wrong comment?)
Sorry, let me make it clearer. The core of their debate concerns whether certain features are ‘essential’ to the concept of morality, and thus concerns whether people share the same concept of morality, what it would mean to say that people share the concept of morality, and what the implications of that are. Phlogiston is even one of the primary examples used throughout the debate. (Also, witches!)
I’m still not getting it. From what I can tell, both Joyce and Finlay implicitly assume that most people are referring to the same concept by “morality”. They do use phlogiston as an example, but seemingly in a very different way from me, to illustrate different points. Also, two of the papers you link to by Joyce don’t cite Finlay at all and I think may not even be part of the debate. Actually the last paper you link to by Joyce (which doesn’t cite Finlay) does seem relevant to our discussion. For example this paragraph:
We gave the name “Earth” to the thing we live upon and at one time reckoned it flat (or at least a good many people reckoned it flat); but the discovery that the thing we live upon is a big ball was not taken to be the discovery that we do not live upon Earth. It was once widely thought that gorillas are aggressive brutes, but the discovery that they’re in fact gentle social creatures was not taken to be the discovery that gorillas do not exist.
I will read that paper over more carefully, and in the meantime, please let me know if you still think the other papers are also relevant, and point to specific passages if so.
This article by Joyce doesn’t cite Finlay, but its central topic is ‘concessive strategies’ for responding to Mackie, and Finlay is a leading figure in concessive strategies for responding to Mackie. Joyce also doesn’t cite Finlay here, but it discusses how two people who accept that Mackie’s suspect properties fail to refer might nevertheless speak two different languages about whether moral properties exist (as Joyce and Finlay do).
One way of expressing the central debate between them is to say that they are arguing over whether certain features (like moral ‘absolutism’ or ‘objectivity’) are ‘essential’ to moral concepts. (Without the assumption of absolutism, is X a ‘moral’ concept?) Another way to put it is that they are arguing over the boundaries of moral concepts: whether people can be said to share the ‘same’ concept of morality but disagree on some of its features, or whether this disagreement means they have ‘different’ concepts of morality.
But really, I’m just trying to get clear on what you might mean by saying that people have the ‘same’ concept of morality while disagreeing on fundamental features, and what you think the implications are. I’m sorry my pointers to the literature weren’t too helpful.
But really, I’m just trying to get clear on what you might mean by saying that people have the ‘same’ concept of morality while disagreeing on fundamental features, and what you think the implications are.
Unfortunately I’m not sure how to explain it better than I already did. But I did notice that Richard Chappell made a similar point (while criticizing Eliezer):
His view implies that many normative disagreements are simply terminological; different people mean different things by the term ‘ought’, so they’re simply talking past each other. This is a popular stance to take, especially among non-philosophers, but it is terribly superficial. See my ‘Is Normativity Just Semantics?’ for more detail.
Chappell’s discussion makes more and more sense to me lately. Many previously central reasons for disagreement turn out to be my misunderstanding, but I haven’t re-read enough to form a new opinion yet.
Sure, except he doesn’t make any arguments for his position. He just says:
Normative disputes, e.g. between theories of wellbeing, are surely more substantive than is allowed for by this account.
I don’t think normative debates are always “merely verbal”. I just think they are very often ‘merely verbal’, and that there are multiple concepts of normativity in use. Chappell and I, for example, seem to have different intuitions (see comments) about what normativity amounts to.
Let’s say a deontologist and a consequentialist are on the board of SIAI, and they are debating which kind of seed AI the Institute should build.
D: We should build a deontic AI.
C: We should build a consequentialist AI.
Surely their disagreement is substantive. But if by “we should do X” the deontologist just means “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z”, and the consequentialist just means “X maximizes expected utility under utility function Y according to decision theory Z”, then they are talking past each other and their disagreement is “merely verbal”. Yet these are the kinds of meanings you seem to think their normative language does have. Don’t you think there’s something wrong about that?
(ETA: To any bystanders still following this argument, I feel like I’m starting to repeat myself without making much progress in resolving this disagreement. Any suggestion what to do?)
I completely agree with what you are saying. Disagreement requires shared meaning. Cons. and Deont. are rival theories, not alternative meanings.
(ETA: To any bystanders still following this argument, I feel like I’m starting to repeat myself without making much progress in resolving this disagreement. Any suggestion what to do?)
Good question. There’s a lot of momentum behind the “meaning theory”.
If the deontologist and the consequentialist have previously stipulated different definitions for ‘should’ as used in sentences D and C, then they aren’t necessarily disagreeing with each other by having one state D and the other state C.
But perhaps we aren’t considering propositions D and C using meaning-stipulated. Perhaps we decide to consider propositions D and C using meaning-cognitive-algorithm. And perhaps a completed cognitive neuroscience would show us that they both mean the same thing by ‘should’ in the meaning-cognitive-algorithm sense. And in that case they would be having a substantive disagreement, when using meaning-cognitive-algorithm to determine the truth conditions of D and C.
Thus:
meaning-stipulated of D is X, meaning-stipulated of C is Y, but X and Y need not be mutually exclusive.
meaning-cognitive-algorithm of D is A, meaning-cognitive-algorithm of C is B, and in my story above A and B are mutually exclusive.
Since people have different ideas about what ‘meaning’ is, I’m skipping past that worry by tabooing ‘meaning.’
[Damn I wish LW would let me use underscores or subscripts instead of hyphens!]
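(A toy sketch of the first point above, with stand-in predicates and made-up inputs of my own invention, just to make the non-exclusivity concrete: under meaning-stipulated, D and C are claims about two different formal systems, so nothing stops both from being true at once.)

```python
# Hypothetical stand-ins (not anyone's actual theory) for the two
# stipulated meanings of "we should build X".

def obligatory_by_deontic_logic(action, axiomatic_imperatives):
    # Crude stand-in for "X is obligatory given imperatives Y and Z".
    return action in axiomatic_imperatives

def maximizes_expected_utility(action, utilities):
    # Crude stand-in for "X maximizes expected utility under utility
    # function Y according to decision theory Z".
    return action == max(utilities, key=utilities.get)

# meaning-stipulated of D and meaning-stipulated of C, evaluated on
# made-up inputs: both come out True, so stating D and stating C is
# not (yet) a disagreement.
D = obligatory_by_deontic_logic("deontic AI", {"deontic AI"})
C = maximizes_expected_utility("consequentialist AI",
                               {"deontic AI": 1.0, "consequentialist AI": 2.0})
print(D, C)  # True True
```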
Suppose the deontologist and the consequentialist have previously stipulated different definitions for ‘should’ as used in sentences D and C, but if you ask them they also say that they are disagreeing with each other in a substantive way. They must be wrong about either what their sentences mean, or about whether their disagreement is substantive, right? (*) I think it’s more likely that they’re wrong about what their sentences mean, because meanings of normative sentences are confusing and lack of substantive disagreement in this particular scenario seems very unlikely.
(*) If we replace “mean” in this sentence by “mean_stipulated”, then it no longer makes sense, since clearly it’s possible that their sentences mean_stipulated D and C, and that their disagreement is substantive. Actually now that I think about it, I’m not sure that “mean” can ever be correctly taboo’ed into “mean_stipulated”. For example, suppose Bob says “By ‘sound’ I mean acoustic waves. Sorry, I misspoke, actually by ‘sound’ I mean auditory experiences. [some time later] To recall, by ‘sound’ I mean auditory experiences.” The first “mean” does not mean “mean_stipulated” since Bob hadn’t stipulated any meanings yet when he said that. The second “mean” does not mean “mean_stipulated” since otherwise that sentence would just be stating a plain falsehood. The third “mean” must mean the same thing as the second “mean”, so it’s also not “mean_stipulated”.
To continue along this line, suppose Alice inserts after the first sentence, “Bob, that sounds wrong. I think by ‘sound’ you mean auditory experiences.” Obviously not “mean_stipulated” here. Alternatively, suppose Bob only says the first sentence, and nobody bothers to correct him because they’ve all heard the lecture several times and know that Bob means auditory experiences by ‘sound’, and think that everyone else knows. Except that Carol is new and doesn’t know, and writes “In this lecture, ‘sound’ means acoustic waves” in her notebook. Later on Alice tells Carol what everyone else knows, and Carol corrects the sentence. If “mean” means “mean_stipulated” in that sentence, then it would be true and there would be no need to correct it.
Since people have different ideas about what ‘meaning’ is, I’m skipping past that worry by tabooing ‘meaning.’
Taboo seems to be a tool that needs to be wielded very carefully, and wanting to “skip past that worry” is probably not the right frame of mind for wielding it. One can easily taboo a word in a wrong way, and end up adding to confusion, for example by giving the appearance that there is no disagreement when there actually is.
I’m not sure that “mean” can ever be correctly taboo’ed into “mean_stipulated”.
It seems a desperate move to say that stipulative meaning just isn’t a kind of meaning wielded by humans. I use it all the time, it’s used in law, it’s used in other fields, it’s taught in textbooks… If you think stipulative meaning just isn’t a legitimate kind of meaning commonly used by humans, I don’t know what to say.
One can easily taboo a word in a wrong way, and end up adding to confusion, for example by giving the appearance that there is no disagreement when there actually is.
I agree, but ‘tabooing’ ‘meaning’ to mean (in some cases) ‘stipulated meaning’ shouldn’t be objectionable because, as I said above, it’s a very commonly used kind of ‘meaning.’ We can also taboo ‘meaning’ to refer to other types of meaning.
And like I said, there often is substantive disagreement. I was just trying to say that sometimes there isn’t substantive disagreement, and we can figure out whether or not we’re having a substantive disagreement by playing a little Taboo (and by checking anticipations). This is precisely the kind of use for which playing Taboo was originally proposed:
the principle [of Tabooing] applies much more broadly:
Albert: “A tree falling in a deserted forest makes a sound.”
Barry: “A tree falling in a deserted forest does not make a sound.”
Clearly, since one says “sound” and one says “~sound”, we must have a contradiction, right? But suppose that they both dereference their pointers before speaking:
Albert: “A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations].”
Barry: “A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences].”
Now there is no longer an apparent collision—all they had to do was prohibit themselves from using the word sound. If “acoustic vibrations” came into dispute, we would just play Taboo again and say “pressure waves in a material medium”; if necessary we would play Taboo again on the word “wave” and replace it with the wave equation. (Play Taboo on “auditory experience” and you get “That form of sensory processing, within the human brain, which takes as input a linear time series of frequency mixes.”)
And like I said, there often is substantive disagreement. I was just trying to say that sometimes there isn’t substantive disagreement, and we can figure out whether or not we’re having a substantive disagreement by playing a little Taboo (and by checking anticipations).
To come back to this point, what if we can’t translate a disagreement into a disagreement over anticipations (which is the case in many debates over rationality and morality), and the participants don’t know how to correctly Taboo (i.e., they don’t know how to capture the meanings of certain key words), but there still seems to be a substantive disagreement, or the participants themselves claim they have one?
Earlier, in another context, I suggested that we extend Eliezer’s “make beliefs pay rent in anticipated experiences” into “make beliefs pay rent in decision making”. Perhaps we can apply that here as well, and say that a substantive disagreement is one that implies a difference in what to do, in at least one possible circumstance. What do you think?
But I missed your point in the previous response. The idea of disagreement about decisions, in the same sense as usual disagreement about anticipation caused by errors/uncertainty, is interesting. This is not bargaining about the outcome, for the object under consideration is the agents’ belief, not the fact the belief is about. The agents could work on a correct belief about a fact even in the absence of reliable access to the fact itself, reaching agreement.
Perhaps we can apply that here as well, and say that a substantive disagreement is one that implies a difference in what to do, in at least one possible circumstance.
It seems that “what to do” has to refer to properties of a fixed fact, so disagreement is bargaining over what actually gets determined, and so probably doesn’t even involve different anticipations.
Both your suggestions sound plausible. I’ll have to think about it more when I have time to work more on this problem, probably in the context of a planned LW post on Chalmers’s Verbal Disputes paper. Right now I have to get back to some other projects.
I was just trying to say that sometimes there isn’t substantive disagreement, and we can figure out whether or not we’re having a substantive disagreement by playing a little Taboo (and by checking anticipations).
But that assumes that two sides of the disagreement are both Taboo’ing correctly. How can you tell? (You do agree that Taboo is hard and people can easily get it wrong, yes?)
ETA: Do you want to try to hash this out via online chat? I added you to my Google Chat contacts a few days ago, but it’s still showing “awaiting authorization”.
But that assumes that two sides of the disagreement are both Taboo’ing correctly.
Not sure what ‘correctly’ means, here. I’d feel safer saying they were both Tabooing ‘acceptably’. In the above example, Albert and Barry were both Tabooing ‘acceptably.’ It would have been strange and unhelpful if one of them had Tabooed ‘sound’ to mean ‘rodents on the moon’. But Tabooing ‘sound’ to talk about auditory experiences or acoustic vibrations is fine, because those are two commonly used meanings for ‘sound’. Likewise, ‘stipulated meaning’ and ‘intuitive meaning’ and a few other things are commonly used meanings of ‘meaning.’
If you’re saying that there’s “only one correct meaning for ‘meaning’” or “only one correct meaning for ‘ought’”, then I’m not sure what to make of that, since humans employ the word-tool ‘meaning’ and the word-tool ‘ought’ in a variety of ways. If whatever you’re saying predicts otherwise, then what you’re saying is empirically incorrect. But that’s so obvious that I keep assuming you must be saying something else.
Just because there’s a word “art” doesn’t mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.
Wondering how to define a word means you’re looking at the problem the wrong way—searching for the mysterious essence of what is, in fact, a communication signal.
Another point. Switching back to a particular ‘conventional’ meaning that doesn’t match the stipulative meaning you just gave a word is one of the ways words can be wrong (#4).
And frankly, I’m worried that we are falling prey to the 14th way words can be wrong:
You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what’s left to ask by arguing, “Is it a blegg?” But if your brain’s categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there’s a leftover question.
And, the 17th way words can be wrong:
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we’re hard to stop; if we have no common language, we’ll draw pictures in sand. When you each understand what is in the other’s mind, you are done.
Now, I suspect you may be trying to say that I’m committing mistake #20:
You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.
But I’ve pointed out that, for example, stipulative meaning is a very common usage of ‘meaning’...
That’s a great example. I’ll reproduce it here for readability of this thread:
Consider a hypothetical debate between two decision theorists who happen to be Taboo fans:
A: It’s rational to two-box in Newcomb’s problem.
B: No, one-boxing is rational.
A: Let’s taboo “rational” and replace it with math instead. What I meant was that two-boxing is what CDT recommends.
B: Oh, what I meant was that one-boxing is what EDT recommends.
A: Great, it looks like we don’t disagree after all!
What did these two Taboo’ers do wrong, exactly?
I’d rather not talk about ‘wrong’; that makes things messier. But let me offer a few comments on what happened:
If this conversation occurred at a decision theory meetup known to have an even mix of CDTers and EDTers, then it was perhaps inefficient (for communication) for either of them to use ‘rational’ to mean either CDT-rational or EDT-rational. That strategy was only going to cause confusion until Tabooing occurred.
If this conversation occurred at a decision theory meetup for CDTers, then person A might be forgiven for assuming the other person would think of ‘rational’ in terms of ‘CDT-rational’. But then person A used Tabooing to discover that an EDTer had snuck into the party, and they don’t disagree about the solutions to Newcomb’s problem recommended by EDT and CDT.
In either case, once they’ve had the conversation quoted above, they are correct that they don’t disagree about the solutions to Newcomb’s problem recommended by EDT and CDT. Instead, their disagreement lies elsewhere. They still disagree about what action has the highest expected value when an agent is faced with Newcomb’s dilemma. Now that they’ve cleared up their momentary confusion about ‘rational’, they can move on to discuss the point at which they really do disagree. Tabooing for the win.
They still disagree about what action has the highest expected value when an agent is faced with Newcomb’s dilemma.
An action does not naturally “have” an expected value, it is assigned an expected value by a combination of decision theory, prior, and utility function, so we can’t describe their disagreement as “about what action has the highest expected value”. It seems that we can only describe their disagreement as about “what is rational” or “what is the correct decision theory” because we don’t know how to Taboo “rational” or “correct” in a way that preserves the substantive nature of their disagreement. (BTW, I guess we could define “have” to mean “assigned by the correct decision theory/prior/utility function” but that doesn’t help.)
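(To make that dependence concrete, here is a minimal sketch of my own, with made-up payoffs, predictor accuracy, and function names: the same two actions get assigned different expected values, and hence different recommendations, depending on whether the calculation is EDT-style or CDT-style.)

```python
# Illustrative numbers only; nothing here is from the thread.
BOX_B = 1_000_000          # opaque box, filled iff one-boxing was predicted
BOX_A = 1_000              # transparent box, always contains $1,000
PREDICTOR_ACCURACY = 0.99

def edt_expected_value(action):
    # EDT conditions on the action: taking it is evidence about the prediction.
    p_one_box_predicted = PREDICTOR_ACCURACY if action == "one-box" else 1 - PREDICTOR_ACCURACY
    payoff_if_predicted = BOX_B if action == "one-box" else BOX_B + BOX_A
    payoff_if_not_predicted = 0 if action == "one-box" else BOX_A
    return (p_one_box_predicted * payoff_if_predicted
            + (1 - p_one_box_predicted) * payoff_if_not_predicted)

def cdt_expected_value(action, p_box_b_full=0.5):
    # CDT holds the already-fixed contents of box B constant; the choice
    # cannot causally change them (p_box_b_full is whatever prior you like).
    base = p_box_b_full * BOX_B
    return base if action == "one-box" else base + BOX_A

for name, ev in [("EDT", edt_expected_value), ("CDT", cdt_expected_value)]:
    values = {a: round(ev(a)) for a in ("one-box", "two-box")}
    print(name, values, "-> recommends", max(values, key=values.get))
# EDT assigns the higher value to one-boxing; CDT to two-boxing.
```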
Now that they’ve cleared up their momentary confusion about ‘rational’, they can move on to discuss the point at which they really do disagree.
But how do they (or you) know that they actually do disagree? According to their Taboo transcript, they do not disagree. It seems that there must be an alternative way to detect substantive disagreement, other than by asking people to Taboo?
ETA:
Tabooing for the win.
If people actually disagree, but through the process of Tabooing conclude that they do not disagree (like in the above example), that should count as a loss for Tabooing, right? In the case of “morality”, why do you trust the process of Tabooing so much that you do not give this possibility much credence?
An action does not naturally “have” an expected value, it is assigned an expected value by a combination of decision theory, prior, and utility function, so we can’t describe their disagreement as “about what action has the highest expected value”.
Fair enough. Let me try again: “They still disagree about what action is most likely to fulfill the agent’s desires when the agent is faced with Newcomb’s dilemma.” Or something like that.
But how do they (or you) know that they actually do disagree? According to their Taboo transcript, they do not disagree.
According to their Taboo transcript, they don’t disagree over the solutions of Newcomb’s problem recommended by EDT and CDT. But they might still disagree about whether EDT or CDT is most likely to fulfill the agent’s desires when faced with Newcomb’s problem.
It seems that there must be an alternative way to detect substantive disagreement, other than by asking people to Taboo?
Yes. Ask about anticipations.
If people actually disagree, but through the process of Tabooing conclude that they do not disagree (like in the above example), that should count as a loss for Tabooing, right?
That didn’t happen in this example. They do not, in fact, disagree over the solutions to Newcomb’s problem recommended by EDT and CDT. If they disagree, it’s about something else, like who is the tallest living person on Earth or which action is most likely to fulfill an agent’s desires when faced with Newcomb’s dilemma.
Of course Tabooing can go wrong, but it’s a useful tool. So is testing for differences of anticipation, though that can also go wrong.
In the case of “morality”, why do you trust the process of Tabooing so much that you do not give this possibility much credence?
No, I think it’s quite plausible that Tabooing can be done wrong when talking about morality. In fact, it may be more likely to go wrong there than anywhere else. But it’s also better to Taboo than to simply not use such a test for surface-level confusion. It’s also another option to not Taboo and instead propose that we try to decode the cognitive algorithms involved in order to get a clearer picture of our intuitive notion of moral terms than we can get using introspection and intuition.
“They still disagree about what action is most likely to fulfill the agent’s desires when the agent is faced with Newcomb’s dilemma.”
This introduces even more assumptions into the picture. Why is fulfillment of desires, or specifically the agent’s desires, relevant? Why is “most likely” in there? You are trying to make things precise at the expense of accuracy; that’s the big Taboo failure mode: increasingly obscure lost purposes.
I’m just providing an example. It’s not my story. I invite you or Wei Dai to say what it is the two speakers disagree about even after they agree about the conclusions of CDT and EDT for Newcomb’s problem. If all you can say is that they disagree about what they ‘should’ do, or what it would be ‘rational’ to do, then we’ll have to talk about things at that level of understanding, but that will be tricky.
If all you can say is that they disagree about what they ‘should’ do, or what it would be ‘rational’ to do, then we’ll have to talk about things at that level of understanding, but that will be tricky.
What other levels of understanding do we have? The question needs to be addressed on its own terms. Very tricky. There are ways of making this better; platonism extended to everything seems to help a lot, for example. Toy models of epistemic and decision-theoretic primitives also clarify things, training intuition.
We’re making progress on what it means for brains to value things, for example. Or we can talk in an ends-relational sense, and specify ends. Or we can keep things even more vague but then we can’t say much at all about ‘ought’ or ‘rational’.
The problem is that it doesn’t look any better than figuring out what CDT or EDT recommend. What the brain recommends is not automatically relevant to the question of what should be done.
The problem is that it doesn’t look any better than figuring out what CDT or EDT recommend. What the brain recommends is not automatically relevant to the question of what should be done.
If by ‘should’ in this sense you mean the ‘intended’ meaning of ‘should’ that we don’t have access to, then I agree.
Note: Wei Dai and I chatted for a while, and this resulted in three new clarifying paragraphs at the end of the is-ought section of my post ‘Pluralistic Moral Reductionism’.
Even given your disclaimer, I suspect we still disagree on the merits of Taboo as it applies to metaethics. Have you tried having others who are metaethically confused play Taboo in real life, and if so, did it help?
People like Eliezer and Drescher, von Neumann and Savage, have been able to make clear progress in understanding the nature of rationality, and the methods they used did not involve much (if any) neuroscience. On “morality” we don’t have such past successes to guide us, but your focus on neuroscience still seems misguided according to my intuitions.
Have you tried having others who are metaethically confused play Taboo in real life, and if so, did it help?
Yes. The most common result is that people come to realize they don’t know what they mean by ‘morally good’, unless they are theists.
People like Eliezer and Drescher, von Neumann and Savage, have been able to make clear progress in understanding the nature of rationality, and the methods they used did not involve much (if any) neuroscience. On “morality” we don’t have such past successes to guide us, but your focus on neuroscience still seems misguided according to my intuitions.
If it looks like I’m focusing on neuroscience, I think that’s an accident of looking at work I’ve produced in a 4-month period rather than over a longer period (that hasn’t occurred yet). I don’t think neuroscience is as central to metaethics or rationality as my recent output might suggest. Humans with meat-brains are strange agents who will make up a tiny minority of rational and moral agents in the history of intelligent agents in our light-cone (unless we bring an end to intelligent agents in our light-cone).
Yes. The most common result is that people come to realize they don’t know what they mean by ‘morally good’, unless they are theists.
Huh, I think that would have been good to mention in one of your posts. (Unless you did and I failed to notice it.)
It occurs to me that with a bit of tweaking to Austere Metaethics (which I’ll call Interim Metaethics), we can help everyone realize that they don’t know what they mean by “morally good”.
For example:
Deontologist: Should we build a deontic seed AI?
Interim Metaethicist: What do you mean by “should X”?
Deontologist: “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z.”
Interim Metaethicist: Are you sure? If that’s really what you mean, then when a consequentialist says “should X” he probably means “X maximizes expected utility according to decision theory Y and utility function Z”. In which case the two of you do not actually disagree. But you do disagree with him, right?
Deontologist: Good point. I guess I don’t really mean that by “should”. I’m confused.
(Doesn’t that seem like an improvement over Austere Metaethics?)
I guess one difference between us is that I don’t see anything particularly ‘wrong’ with using stipulative definitions as long as you’re aware that they don’t match the intended meaning (that we don’t have access to yet), whereas you like to characterize stipulative definitions as ‘wrong’ when they don’t match the intended meaning.
But perhaps I should add one post before my empathic metaethics post which stresses that the stipulative definitions of ‘austere metaethics’ don’t match the intended meaning—and we can make this point by using all the standard thought experiments that deontologists and utilitarians and virtue ethicists and contractarian theorists use against each other.
After the above conversation, wouldn’t the deontologist want to figure out what he actually means by “should” and what its properties are? Why would he want to continue to use the stipulated definition that he knows he doesn’t actually mean? I mean I can imagine something like:
Deontologist: I guess I don’t really mean that by “should”, but I need to publish a few more papers for tenure, so please just help me figure out whether we should build a deontic seed AI under that stipulated definition of “should”, so I can finish my paper and submit it to the Journal of Machine Deontology.
But even in this case it would make more sense for him to avoid “stipulative definition” and instead say
Deontologist: Ok, by “should” I actually mean a concept that I can’t define at this point. But I guess it has something to do with deontic logic, and it would be useful to explore the properties of deontic logic in more detail. So, can you please help me figure out whether building a deontic seed AI is obligatory (by deontic logic) if we assume axiomatic imperatives Y and Z?
This way, he clarifies to himself and others that “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z” is not what he means by “should X”, but is instead a guess about the nature of morality (a concept that we can’t yet precisely define).
Perhaps you’d answer that a stipulated meaning is just that, a guess about the nature of something. But as you know, words have connotations, and I think the connotation of “guess” is more appropriate here than “meaning”.
Perhaps you’d answer that a stipulated meaning is just that, a guess about the nature of something. But as you know, words have connotations, and I think the connotation of “guess” is more appropriate here than “meaning”.
The problem is that we have to act in the world now. We can’t wait around for metaethics and decision theory to be solved. Thus, science books have glossaries in the back full of highly useful operationalized and stipulated definitions for hundreds of terms, whether or not they match the intended meanings (that we don’t have access to) of those terms for person A, or the intended meanings of those terms for person B, or the intended meanings of those terms for person C.
I think this glossary business is a familiar enough practice that calling that thing a glossary of ‘meanings’ instead of a glossary of ‘guesses at meanings’ is fine. Maybe ‘meaning’ doesn’t have the connotations for me that it has for you.
Science needs doing, laws need to be written and enforced, narrow AIs need to be programmed, best practices in medicine need to be written, agents need to act… all before metaethics and decision theory are solved. In a great many cases, we need to have meaning_stipulated before we can figure out meaning_intended.
Sigh… Maybe I should just put a sticky note on my monitor that says
REMEMBER: You probably don’t actually disagree with Luke, because whenever he says “X means Z by Y”, he might just mean “X stipulated Y to mean Z”, which in turn is just another way of saying “X guesses that the nature of Y is Z”.
We humans have different intuitions about the meanings of terms and the nature of meaning itself, and thus we’re all speaking slightly different languages. We always need to translate between our languages, which is where Taboo and testing for anticipations come in handy.
I’m using the concept of meaning from linguistics, which seems fair to me. In linguistics, stipulated meaning is most definitely a kind of meaning (and not merely a kind of guessing at meaning), for it is often “what is expressed by the writer or speaker, and what is conveyed to the reader or listener, provided that they talk about the same thing.”
Whatever the case, this language looks confusing/misleading enough to avoid. It conflates the actual search for intended meaning with all those irrelevant stipulations, and assigns misleading connotations to the words referring to these things. In Eliezer’s sequences, the term was “fake utility function”. The presence of “fake” in the term is important, it reminds of incorrectness of the view.
So far, you’ve managed to confuse me and Wei with this terminology alone, probably many others as well.
So far, you’ve managed to confuse me and Wei with this terminology alone, probably many others as well.
Perhaps, though I’ve gotten comments from others that it was highly clarifying for them. Maybe they’re more used to the meaning of ‘meaning’ from linguistics.
But one must not fall into the trap of thinking that a definition you’ve stipulated (aloud or in your head) for ‘ought’ must match up to your intuitive concept of ‘ought’. In fact, I suspect it never does, which is why the conceptual analysis of ‘ought’ language can go in circles for thousands of years, and why any stipulated meaning of ‘ought’ is a fake utility function. To see clearly to our intuitive concept of ought, we’ll have to try empathic metaethics (see below).
It’s not clear from this paragraph whether “intuitive concept” refers to the oafish tools in the human brain (which have the same problems as stipulated definitions, including irrelevance) or the intended meaning that those tools seek. Conceptual analysis, as I understand it, is concerned with analysis of the imperfect intuitive tools, so it’s also unclear in what capacity you mention conceptual analysis here.
(I do think this and other changes will probably make new readers less confused.)
Roger has an intuitive concept of ‘morally good’, the intended meaning of which he doesn’t fully have access to (but it could be discovered by something like CEV). Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
The conceptual analyst comes along and says: “Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to a machine that gave each of them maximal, beyond-orgasmic pleasure for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?”
ROGER: Huh. I guess that’s not quite what I mean by ‘morally good’. I think what I mean by ‘morally good’ is ‘that which produces the greatest subjective satisfaction of wants in the greatest number’.
CONCEPTUAL ANALYST: Okay, then. Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to ‘The Matrix’ and made them believe and feel that all their wants were being satisfied, for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?
ROGER: No, I guess that’s not what I mean, either. What I really mean is...
And around and around we go, for centuries.
The problem with trying to access our intended meaning for ‘morally good’ by this intuitive process is that it brings into play, as you say, all the ‘oafish tools’ in the human brain. And philosophers have historically not paid much attention to the science of how intuitions work.
Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
Does that mean the intuition says the same thing as “pleasure-maximization”, or that the intended meaning can be captured as “pleasure-maximization”? Even if the intuition is saying exactly “pleasure-maximization”, it’s not necessarily the intended meaning, and so it’s unclear why one would try to replicate the intuitive tool rather than search for a characterization of the intended meaning that is better than the intuitive tool. This is the distinction I was complaining about.
(This is an isolated point unrelated to the rest of your comment.)
Understood. I think I’m trying to figure out if there’s a better way to talk about this ‘intended meaning’ (that we don’t yet have access to) than to say ‘intended meaning’ or ‘intuitive meaning’. But maybe I’ll just have to say ‘intended meaning (that we don’t yet have access to)’.
New paragraph version:
But one must not fall into the trap of thinking that a definition you’ve stipulated (aloud or in your head) for ‘ought’ must match up to your intended meaning of ‘ought’ (to which you don’t have introspective access). In fact, I suspect it never does, which is why the conceptual analysis of ‘ought’ language can go in circles for centuries, and why any stipulated meaning of ‘ought’ is a fake utility function. To see clearly to our intuitive concept of ought, we’ll have to try empathic metaethics (see below).
I’ve been very clear many times that ‘austere metaethics’ is for clearing up certain types of confusions, but won’t do anything to solve FAI, which is why we need ‘empathic metaethics’.
I was discussing that particular comment, not rehashing the intention behind ‘austere metaethics’.
More specifically, you made a statement “We can’t wait around for metaethics and decision theory to be solved.” It’s not clear to me what purpose is being served by what alternative action to “waiting around for metaethics to be solved”. It looks like you were responding to Wei’s invitation to justify the use of word “meaning” instead of “guess”, but it’s not clear how your response relates to that question.
Like I said over here, I’m using the concept of ‘meaning’ from linguistics. I’m hoping that fewer people are confused by my use of ‘meaning’ as employed in the field that studies meaning than if I had used ‘meaning’ in a more narrow and less standard way, like Wei Dai’s. Perhaps I’m wrong about that, but I’m not sure.
My comment above about how “we have to act in the world now” gives one reason why, I suspect, the linguist’s sense of ‘meaning’ includes stipulated meaning, and why stipulated meaning is so common.
In any case, I think you and Wei Dai have helped me think about how to be more clear to more people by adding such clarifications as this.
In those paragraphs, you add intuition as an alternative to stipulated meaning. But this is not what we are talking about; we are talking about some unknown, but normative, meaning that can’t be presently stipulated, and is referred to partly through intuition in a way that is more accurate than any currently available stipulation. What intuition tells us is as irrelevant as what the various stipulations tell us; what matters is the thing that the imperfect intuition refers to. This idea doesn’t require a notion of automated stipulation (“empathic” discussion).
“some unknown, but normative meaning that can’t be presently stipulated” is what I meant by “intuitive meaning” in this case.
automated stipulation (“empathic” discussion)
I’ve never thought of ‘empathic’ discussion as ‘automated stipulation’. What do you mean by that?
Even our stipulated definitions are only promissory notes for meaning. Luckily, stipulated definitions can be quite useful for achieving our goals. Figuring out what we ‘really want’, or what we ‘rationally ought to do’ when faced with Newcomb’s problem, would also be useful. Such terms carry even vaguer promissory notes for meaning than stipulated definitions, and yet they are worth pursuing.
Treat intuition as just another stipulated definition, that happens to be expressed as a pattern of mind activity, as opposed to a sequence of words. The intuition itself doesn’t define the thing it refers to, it can be slightly wrong, or very wrong. The same goes for words. Both intuition and various words we might find are tools for referring to some abstract structure (intended meaning), that is not accurately captured by any of these tools. The purpose of intuition, and of words, is in capturing this structure accurately, accessing its properties. We can develop better understanding by inventing new words, training new intuitions, etc.
None of these tools hold a privileged position with respect to the target structure, some of them just happen to more carefully refer to it. At the beginning of any investigation, we would typically only have intuitions, which specify the problem that needs solving. They are inaccurate fuzzy lumps of confusion, too. At the same time, any early attempt at finding better tools will be unsuccessful, explicit definitions will fail to capture the intended meaning, even as intuition doesn’t capture it precisely. Attempts at guiding intuition to better precision can likewise make it a less accurate tool for accessing the original meaning. On the other hand, when the topic is well-understood, we might find an explicit definition that is much better than the original intuition. We might train new intuitions that reflect the new explicit definition, and are much better tools than the original intuition.
And as far as I can tell, you don’t agree. You express agreement too much (your stipulated-meaning thought experiments are one example); this is one of the problems. But I’d probably need a significantly more clear presentation of what feels wrong to make progress on our disagreement.
Instead, their disagreement lies elsewhere. They still disagree about what action has the highest expected value when an agent is faced with Newcomb’s dilemma.
I agree with Wei. There is no reason to talk about “highest expected value” specifically, that would be merely a less clear option on the same list as CDT and EDT recommendations. We need to find the correct decision instead, expected value or not.
Playing Eliezer-post-ping-pong, you are almost demanding “But what do you mean by truth?”. When an idea is unclear, there will be ways of stipulating a precise but even less accurate definition. Thus, you move away from the truth, even as you increase the clarity of discussion and defensibility of the arguments.
you are almost demanding “But what do you mean by truth?”. When an idea is unclear, there will be ways of stipulating a precise but even less accurate definition. Thus, you move away from the truth, even as you increase the clarity of discussion and defensibility of the arguments.
No, I agree there are important things to investigate for which we don’t have clear definitions. That’s why I keep talking about ‘empathic metaethics.’
Also, by ‘less accurate definition’ do you just mean that a stipulated definition can differ from the intuitive definition that we don’t have access to? Well of course. But why privilege the intuitive definition by saying a stipulated definition is ‘less accurate’ than it is? I suspect that intuitive definitions are often much less successful at capturing an empirical cluster than some stipulated definitions. Example: ‘planet’.
Also, by ‘less accurate definition’ do you just mean that a stipulated definition can differ from the intuitive definition that we don’t have access to?
Not “just”. Not every change is an improvement, but every improvement is a change. There can be better definitions of whatever the intuitions are talking about, and they will differ from the intuitive definitions. But when the purpose of discussion is referred by an unclear intuition with no other easy ways to reach it, stipulating a different definition would normally be a change that is not an improvement.
I suspect that intuitive definitions are often much less successful at capturing an empirical cluster than some stipulated definitions.
It’s not easy to find a more successful definition of the same thing. You can’t always just say “taboo” and pick the best thought that decades of careful research failed to rule out. Sometimes the intuitive definition is still better, or, more to the point, the precise explicit definition still misses the point.
An analogy for “sharing common understanding of morality”. In the sound example, even though the arguers talk about different situations in a confusingly ambiguous way, they share a common understanding of what facts hold in reality. If they were additionally ignorant about reality in different ways (even though there would still be the same truth about reality, they just wouldn’t have reliable access to it), that would bring the situation closer to what we have with morality.
If, prior to reaching this understanding, you ask people to stipulate a definition for ‘sound’ when they use it, they will give you confused answers.
Even by getting such confused answers out in the open, we might get them to break out of complacency and recognize the presence of confusion. (Fat chance, of course.)
I don’t know how to explain “actual meaning”, but it seems intuitively obvious to me that the actual meaning of “murder is wrong” is not “murder is forbidden by Yahweh”, even if the speaker of the sentence believes that murder is wrong because murder is forbidden by Yahweh. Do you disagree with this?
But the way we actually resolved the debate about ‘sound’ is by reaching the understanding that there are two distinct concepts (acoustic vibrations and auditory experience) that are related in a certain way and also happen to share the same signifier. If, prior to reaching this understanding, you ask people to stipulate a definition for ‘sound’ when they use it, they will give you confused answers. I think saying “let’s resolve confusions in metaethics by asking people to stipulating definitions for ‘morally good’”, before we reach a similar level of understanding regarding morality, is to likewise put the cart before the horse.
That doesn’t seem intuitively obvious to me, which illustrates one reason why I prefer to taboo terms rather than bash my intuitions against the intuitions of others in an endless game of intuitionist conceptual analysis. :)
Perhaps the most common ‘foundational’ family of theories of meaning in linguistics and philosophy of language belong to the mentalist program, according to which semantic content is determined by the mental contents of the speaker, not by an abstract analysis of symbol forms taken out of context from their speaker. One straightforward application of a mentalist approach to meaning would conclude that if the speaker was assuming (or mentally representing) a judgment of moral wrongness in the sense of forbidden-by-God, then the meaning of the speaker’s sentence refers in part to the demands of an imagined deity.
But “reaching this understanding” with regard to morality was precisely the goal of ‘Conceptual Analysis and Moral Theory’ and ‘Pluralistic Moral Reductionism.’ I repeatedly made the point that people regularly use a narrow family of signifiers (‘morally good’, ‘morally right’, etc.) to call out a wide range of distinct concepts (divine attitudes, consequentialist predictions, deontological judgments, etc.), and that this leads to exactly the kind of confusion encountered by two people who are both using the signifier ‘sound’ to call upon two distinct concepts (acoustic vibrations and auditory experience).
With regard to “sound”, the two concepts are complementary, and people can easily agree that “sound” sometimes refers to one or the other or often both of these concepts. The same is not true in the “morality” case. The concepts you list seem mutually exclusive, and most people have a strong intuition that “morality” can correctly refer to at most one of them. For example a consequentialist will argue that a deontologist is wrong when he asserts that “morality” means “adhering to rules X, Y, Z”. Similarly a divine command theorist will not answer “well, that’s true” if an egoist says “murdering Bob (in a way that serves my interests) is right, and I stipulate ‘right’ to mean ‘serving my interests’”.
It appears to me confusion here is not being caused mainly by linguistic ambiguity, i.e., people using the same word to refer to different things, which can be easily cleared up once pointed out. I see the situation as being closer to the following: in many cases, people are using “morality” to refer to the same concept, and are disagreeing over the nature of that concept. Some people think it’s equivalent to or closely related to the concept of divine attitudes, and others think it has more to do with well-being of conscious creatures, etc.
When many people agree that murder is wrong but disagree on the reasons why, you can argue that they’re referring to the same concept of morality but confused about its nature. But what about less clear-cut statements, like “women should be able to vote”? Many people in the past would’ve disagreed with that. Would you say they’re referring to a different concept of morality?
I’m not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties. Do you know how to elucidate that?
I tried to explain some of the causes of persistent moral debate (as opposed to, e.g., debate about ‘sound’) in this way:
Let me try an analogy. Consider someone who believes in the phlogiston theory of fire, and another person who believes in the oxidation theory. They are having a substantive disagreement about the nature of fire, and not merely causing unnecessary confusion by using the same word “fire” to refer to different things. And if the phlogiston theorist were to say “by ‘fire’ I mean the release of phlogiston” then that would just be wrong, and would be adding to the confusion instead of helping to resolve it.
I think the situation with “morality” is closer to this than to the “sound” example.
(ETA: I could also try to define “same concept” more directly, for example as occupying roughly the same position in the graph of relationships between one’s concepts, or playing approximately the same role in one’s cognitive algorithms, but I’d rather not take an exact position on what “same concept” means if I can avoid it, since I have mostly just an intuitive understanding of it.)
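One very rough way to make the first of those suggestions concrete might look like the sketch below (the concept graphs, the neighbour sets, and the 0.5 threshold are all invented for illustration; nothing here is meant as a serious account of concept identity): compare two concepts by how strongly their neighbourhoods in each person’s graph of concepts overlap.

```python
# Hypothetical sketch: treat each person's concepts as nodes in a graph, and
# call two concepts "roughly the same" when their neighbourhoods overlap
# strongly enough. All data below is made up purely for illustration.

alice_graph = {
    "morality": {"praise", "blame", "well-being", "obligation"},
    "etiquette": {"praise", "custom"},
}
bob_graph = {
    "morality": {"praise", "blame", "divine attitudes", "obligation"},
}

def same_position(concept_a, graph_a, concept_b, graph_b, threshold=0.5):
    """Jaccard overlap of the two concepts' neighbourhoods, against a threshold."""
    a, b = graph_a[concept_a], graph_b[concept_b]
    return len(a & b) / len(a | b) >= threshold

print(same_position("morality", alice_graph, "morality", bob_graph))   # True  (overlap 3/5)
print(same_position("etiquette", alice_graph, "morality", bob_graph))  # False (overlap 1/5)
```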
This is the exact debate currently being hashed out by Richard Joyce and Stephen Finlay (whom I interviewed here). A while back I wrote an article that can serve as a good entry point into the debate, here. A response from Joyce is here and here. Finlay replies again here.
I tend to side with Finlay, though I suspect not for all the same reasons. Recently, Joyce has admitted that both languages can work, but he’ll (personally) talk the language of error theory rather than the language of moral naturalism.
I’m having trouble understanding how the debate between Joyce and Finlay, over Error Theory, is the same as ours. (Did you perhaps reply to the wrong comment?)
Sorry, let me make it clearer...
The core of their debate concerns whether certain features are ‘essential’ to the concept of morality, and thus concerns whether people share the same concept of morality, and what it would mean to say that people share the concept of morality, and what the implications of that are. Phlogiston is even one of the primary examples used throughout the debate. (Also, witches!)
I’m still not getting it. From what I can tell, both Joyce and Finlay implicitly assume that most people are referring to the same concept by “morality”. They do use phlogiston as an example, but seemingly in a very different way from me, to illustrate different points. Also, two of the papers you link to by Joyce don’t cite Finlay at all, and I think they may not even be part of the debate. Actually, the last paper you link to by Joyce (which doesn’t cite Finlay) does seem relevant to our discussion. For example, this paragraph:
I will read that paper over more carefully, and in the meantime, please let me know if you still think the other papers are also relevant, and point to specific passages if so.
This article by Joyce doesn’t cite Finlay, but its central topic is ‘concessive strategies’ for responding to Mackie, and Finlay is a leading figure in such strategies. Joyce also doesn’t cite Finlay here, but that paper discusses how two people who accept that Mackie’s suspect properties fail to refer might nevertheless speak two different languages about whether moral properties exist (as Joyce and Finlay do).
One way of expressing the central debate between them is to say that they are arguing over whether certain features (like moral ‘absolutism’ or ‘objectivity’) are ‘essential’ to moral concepts. (Without the assumption of absolutism, is X a ‘moral’ concept?) Another way to put it is that they are arguing over the boundaries of moral concepts: whether people can be said to share the ‘same’ concept of morality but disagree on some of its features, or whether this disagreement means they have ‘different’ concepts of morality.
But really, I’m just trying to get clear on what you might mean by saying that people have the ‘same’ concept of morality while disagreeing on fundamental features, and what you think the implications are. I’m sorry my pointers to the literature weren’t too helpful.
Unfortunately I’m not sure how to explain it better than I already did. But I did notice that Richard Chappell made a similar point (while criticizing Eliezer):
Does his version make any more sense?
Chappell’s discussion makes more and more sense to me lately. Many previously central reasons for disagreement turn out to be my misunderstanding, but I haven’t re-read enough to form a new opinion yet.
Sure, except he doesn’t make any arguments for his position. He just says:
I don’t think normative debates are always “merely verbal”. I just think they are very often ‘merely verbal’, and that there are multiple concepts of normativity in use. Chappell and I, for example, seem to have different intuitions (see comments) about what normativity amounts to.
Let’s say a deontologist and a consequentialist are on the board of SIAI, and they are debating which kind of seed AI the Institute should build.
D: We should build a deontic AI.
C: We should build a consequentialist AI.
Surely their disagreement is substantive. But if by “we should do X” the deontologist just means “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z”, and the consequentialist just means “X maximizes expected utility under utility function Y according to decision theory Z”, then they are talking past each other and their disagreement is “merely verbal”. Yet these are the kinds of meanings you seem to think their normative language has. Don’t you think there’s something wrong about that?
(ETA: To any bystanders still following this argument, I feel like I’m starting to repeat myself without making much progress in resolving this disagreement. Any suggestion what to do?)
I completely agree with what you are saying. Disagreement requires shared meaning. Cons. and Deont. are rival theories, not alternative meanings.
Good question. There’s a lot of momentum behind the “meaning theory”.
If the deontologist and the consequentialist have previously stipulated different definitions for ‘should’ as used in sentences D and C, then they aren’t necessarily disagreeing with each other by having one state D and the other state C.
But perhaps we aren’t considering propositions D and C using meaning-stipulated. Perhaps we decide to consider propositions D and C using meaning-cognitive-algorithm. And perhaps a completed cognitive neuroscience would show us that they both mean the same thing by ‘should’ in the meaning-cognitive-algorithm sense. And in that case they would be having a substantive disagreement, when using meaning-cognitive-algorithm to determine the truth conditions of D and C.
Thus:
meaning-stipulated of D is X, meaning-stipulated of C is Y, but X and Y need not be mutually exclusive.
meaning-cognitive-algorithm of D is A, meaning-cognitive-algorithm of C is B, and in my story above A and B are mutually exclusive.
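To make the contrast concrete, here is a toy sketch (the predicates and action names are invented for illustration, not anyone’s actual theory): read with the stipulated meanings, D and C are claims about two different predicates and can both hold at once, so there is no contradiction; read with a single shared ‘should’, the two recommendations are incompatible and at most one speaker can be right.

```python
# Toy illustration of meaning-stipulated vs. a shared meaning.
# The predicates below are stand-ins, not real deontic logic or decision theory.

def obligatory_by_deontic_logic(action):
    # Stand-in for "obligatory (by deontic logic) given axiomatic imperatives Y and Z".
    return action == "build deontic AI"

def maximizes_expected_utility(action):
    # Stand-in for "maximizes EU under utility function Y and decision theory Z".
    return action == "build consequentialist AI"

# meaning-stipulated: D and C are about different predicates, so both can be true.
D = obligatory_by_deontic_logic("build deontic AI")
C = maximizes_expected_utility("build consequentialist AI")
print(D and C)  # True -> no contradiction, hence no substantive disagreement

# meaning-cognitive-algorithm (as in the story above): both speakers' "should"
# picks out one and the same predicate, so their recommendations conflict.
recommendation_d, recommendation_c = "build deontic AI", "build consequentialist AI"
print(recommendation_d != recommendation_c)  # True -> a substantive disagreement
```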
Since people have different ideas about what ‘meaning’ is, I’m skipping past that worry by tabooing ‘meaning.’
[Damn I wish LW would let me use underscores or subscripts instead of hyphens!]
You_can_do_that: just use a backslash ‘\’ to escape the underscores (‘\_’), although people quoting your text would need to repeat the trick.
Thanks!
Suppose the deontologist and the consequentialist have previously stipulated different definitions for ‘should’ as used in sentences D and C, but if you ask them they also say that they are disagreeing with each other in a substantive way. They must be wrong about either what their sentences mean, or about whether their disagreement is substantive, right? (*) I think it’s more likely that they’re wrong about what their sentences mean, because meanings of normative sentences are confusing and lack of substantive disagreement in this particular scenario seems very unlikely.
(*) If we replace “mean” in this sentence by “mean_stipulated”, then it no longer makes sense, since clearly it’s possible that their sentences mean_stipulated D and C, and that their disagreement is substantive. Actually now that I think about it, I’m not sure that “mean” can ever be correctly taboo’ed into “mean_stipulated”. For example, suppose Bob says “By ‘sound’ I mean acoustic waves. Sorry, I misspoke, actually by ‘sound’ I mean auditory experiences. [some time later] To recall, by ‘sound’ I mean auditory experiences.” The first “mean” does not mean “mean_stipulated” since Bob hadn’t stipulated any meanings yet when he said that. The second “mean” does not mean “mean_stipulated” since otherwise that sentence would just be stating a plain falsehood. The third “mean” must mean the same thing as the second “mean”, so it’s also not “mean_stipulated”.
To continue along this line, suppose Alice inserts after the first sentence, “Bob, that sounds wrong. I think by ‘sound’ you mean auditory experiences.” Obviously not “mean_stipulated” here. Alternatively, suppose Bob only says the first sentence, and nobody bothers to correct him because they’ve all heard the lecture several times and know that Bob means auditory experiences by ‘sound’, and think that everyone else knows. Except that Carol is new and doesn’t know, and writes in her notebook: “In this lecture, ‘sound’ means acoustic waves.” Later on, Alice tells Carol what everyone else knows, and Carol corrects the sentence. If “mean” meant “mean_stipulated” in that sentence, then it would be true and there would be no need to correct it.
Taboo seems to be a tool that needs to be wielded very carefully, and wanting to “skip past that worry” is probably not the right frame of mind for wielding it. One can easily taboo a word in the wrong way and end up adding to confusion, for example by giving the appearance that there is no disagreement when there actually is.
It seems a desperate move to say that stipulative meaning just isn’t a kind of meaning wielded by humans. I use it all the time, it’s used in law, it’s used in other fields, it’s taught in textbooks… If you think stipulative meaning just isn’t a legitimate kind of meaning commonly used by humans, I don’t know what to say.
I agree, but ‘tabooing’ ‘meaning’ to mean (in some cases) ‘stipulated meaning’ shouldn’t be objectionable because, as I said above, it’s a very commonly used kind of ‘meaning.’ We can also taboo ‘meaning’ to refer to other types of meaning.
And like I said, there often is substantive disagreement. I was just trying to say that sometimes there isn’t substantive disagreement, and we can figure out whether or not we’re having a substantive disagreement by playing a little Taboo (and by checking anticipations). This is precisely the kind of use for which playing Taboo was originally proposed:
To come back to this point: what if we can’t translate a disagreement into a disagreement over anticipations (which is the case in many debates over rationality and morality), and the participants don’t know how to Taboo correctly (i.e., they don’t know how to capture the meanings of certain key words), but there still seems to be a substantive disagreement, or the participants themselves claim they have one?
Earlier, in another context, I suggested that we extend Eliezer’s “make beliefs pay rent in anticipated experiences” into “make beliefs pay rent in decision making”. Perhaps we can apply that here as well, and say that a substantive disagreement is one that implies a difference in what to do, in at least one possible circumstance. What do you think?
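A minimal sketch of that proposed test (the circumstances and policies below are made up purely for illustration): count a disagreement as substantive just in case the two positions recommend different actions in at least one possible circumstance.

```python
# Hedged illustration of "beliefs pay rent in decision making" as a test for
# substantive disagreement. Everything here is invented for the example.

circumstances = ["which seed AI to build", "ordinary Tuesday", "trolley case"]

def deontologist_policy(c):
    return "deontic AI" if c == "which seed AI to build" else "act as usual"

def consequentialist_policy(c):
    return "consequentialist AI" if c == "which seed AI to build" else "act as usual"

def substantive_disagreement(policy_1, policy_2, cases):
    """True iff the two policies diverge on at least one possible circumstance."""
    return any(policy_1(c) != policy_2(c) for c in cases)

print(substantive_disagreement(deontologist_policy, consequentialist_policy, circumstances))  # True
```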
But I missed your point in the previous response. The idea of disagreement about decisions, in the same sense as ordinary disagreement about anticipations caused by errors/uncertainty, is interesting. This is not bargaining over an outcome, for the object under consideration is the agents’ belief, not the fact the belief is about. The agents could work toward a correct belief about a fact, and reach agreement, even in the absence of reliable access to the fact itself.
It seems that “what to do” has to refer to properties of a fixed fact, so disagreement is bargaining over what actually gets determined, and so probably doesn’t even involve different anticipations.
Wei Dai & Vladimir Nesov,
Both your suggestions sound plausible. I’ll have to think about it more when I have time to work more on this problem, probably in the context of a planned LW post on Chalmers’ Verbal Disputes paper. Right now I have to get back to some other projects.
Also perhaps of interest is Schroeder’s paper, A Recipe for Concept Similarity.
But that assumes that two sides of the disagreement are both Taboo’ing correctly. How can you tell? (You do agree that Taboo is hard and people can easily get it wrong, yes?)
ETA: Do you want to try to hash this out via online chat? I added you to my Google Chat contacts a few days ago, but it’s still showing “awaiting authorization”.
Not sure what ‘correctly’ means, here. I’d feel safer saying they were both Tabooing ‘acceptably’. In the above example, Albert and Barry were both Tabooing ‘acceptably.’ It would have been strange and unhelpful if one of them had Tabooed ‘sound’ to mean ‘rodents on the moon’. But Tabooing ‘sounds’ to talk about auditory experiences or acoustic vibrations is fine, because those are two commonly used meanings for ‘sound’. Likewise, ‘stipulated meaning’ and ‘intuitive meaning’ and a few other things are commonly used meanings of ‘meaning.’
If you’re saying that there’s “only one correct meaning for ‘meaning’” or “only one correct meaning for ‘ought’”, then I’m not sure what to make of that, since humans employ the word-tool ‘meaning’ and the word-tool ‘ought’ in a variety of ways. If whatever you’re saying predicts otherwise, then what you’re saying is empirically incorrect. But that’s so obvious that I keep assuming you must be saying something else.
Also relevant:
Another point. Switching back to a particular ‘conventional’ meaning that doesn’t match the stipulative meaning you just gave a word is one of the ways words can be wrong (#4).
And frankly, I’m worried that we are falling prey to the 14th way words can be wrong:
And, the 17th way words can be wrong:
Now, I suspect you may be trying to say that I’m committing mistake #20:
But I’ve pointed out that, for example, stipulative meaning is a very common usage of ‘meaning’...
Could you please take a look at this example, and tell me whether you think they are Tabooing “acceptably”?
That’s a great example. I’ll reproduce it here for readability of this thread:
I’d rather not talk about ‘wrong’; that makes things messier. But let me offer a few comments on what happened:
If this conversation occurred at a decision theory meetup known to have an even mix of CDTers and EDTers, then it was perhaps inefficient (for communication) for either of them to use ‘rational’ to mean either CDT-rational or EDT-rational. That strategy was only going to cause confusion until Tabooing occurred.
If this conversation occurred at a decision theory meetup for CDTers, then person A might be forgiven for assuming the other person would think of ‘rational’ in terms of ‘CDT-rational’. But then person A used Tabooing to discover that an EDTer had snuck into the party, and that the two of them don’t disagree about the solutions to Newcomb’s problem recommended by EDT and CDT.
In either case, once they’ve had the conversation quoted above, they are correct that they don’t disagree about the solutions to Newcomb’s problem recommended by EDT and CDT. Instead, their disagreement lies elsewhere. They still disagree about what action has the highest expected value when an agent is faced with Newcomb’s dilemma. Now that they’ve cleared up their momentary confusion about ‘rational’, they can move on to discuss the point at which they really do disagree. Tabooing for the win.
An action does not naturally “have” an expected value, it is assigned an expected value by a combination of decision theory, prior, and utility function, so we can’t describe their disagreement as “about what action has the highest expected value”. It seems that we can only describe their disagreement as about “what is rational” or “what is the correct decision theory” because we don’t know how to Taboo “rational” or “correct” in a way that preserves the substantive nature of their disagreement. (BTW, I guess we could define “have” to mean “assigned by the correct decision theory/prior/utility function” but that doesn’t help.)
But how do they (or you) know that they actually do disagree? According to their Taboo transcript, they do not disagree. It seems that there must be an alternative way to detect substantive disagreement, other than by asking people to Taboo?
ETA:
If people actually disagree, but through the process of Tabooing conclude that they do not disagree (like in the above example), that should count as a loss for Tabooing, right? In the case of “morality”, why do you trust the process of Tabooing so much that you do not give this possibility much credence?
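As an aside on the point above that an action is assigned an expected value by a decision theory, prior, and utility function rather than simply having one: here is a minimal sketch in which the very same two acts in Newcomb’s problem receive different expected values, and a different ranking, under an EDT-style and a CDT-style calculation (the payoffs, the 0.99 predictor accuracy, and the 0.5 prior are all assumptions made up for the example).

```python
# Toy Newcomb's problem: "expected value" depends on the decision theory used.
SMALL, BIG = 1_000, 1_000_000
ACCURACY = 0.99      # assumed P(predictor guessed my act correctly)
PRIOR_FULL = 0.5     # CDT's unconditional credence that the opaque box is full

def ev_edt(act):
    # EDT-style: condition the box's contents on the act itself.
    p_full = ACCURACY if act == "one-box" else 1 - ACCURACY
    return p_full * BIG + (SMALL if act == "two-box" else 0)

def ev_cdt(act):
    # CDT-style: the act cannot causally affect the already-fixed contents.
    return PRIOR_FULL * BIG + (SMALL if act == "two-box" else 0)

for name, ev in [("EDT", ev_edt), ("CDT", ev_cdt)]:
    values = {act: ev(act) for act in ("one-box", "two-box")}
    print(name, values, "->", max(values, key=values.get))
# EDT ranks one-boxing higher; CDT ranks two-boxing higher, whatever PRIOR_FULL is.
```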
Fair enough. Let me try again: “They still disagree about what action is most likely to fulfill the agent’s desires when the agent is faced with Newcomb’s dilemma.” Or something like that.
According to their Taboo transcript, they don’t disagree over the solutions of Newcomb’s problem recommended by EDT and CDT. But they might still disagree about whether EDT or CDT is most likely to fulfill the agent’s desires when faced with Newcomb’s problem.
Yes. Ask about anticipations.
That didn’t happen in this example. They do not, in fact, disagree over the solutions to Newcomb’s problem recommended by EDT and CDT. If they disagree, it’s about something else, like who is the tallest living person on Earth or which action is most likely to fulfill an agent’s desires when faced with Newcomb’s dilemma.
Of course Tabooing can go wrong, but it’s a useful tool. So is testing for differences of anticipation, though that can also go wrong.
No, I think it’s quite plausible that Tabooing can be done wrong when talking about morality. In fact, it may be more likely to go wrong there than anywhere else. But it’s also better to Taboo than to simply not use such a test for surface-level confusion. It’s also another option to not Taboo and instead propose that we try to decode the cognitive algorithms involved in order to get a clearer picture of our intuitive notion of moral terms than we can get using introspection and intuition.
This introduces even more assumptions into the picture. Why is fulfillment of desires, or specifically the agent’s desires, relevant? Why is “most likely” in there? You are trying to make things precise at the expense of accuracy; that’s the big Taboo failure mode: increasingly obscure lost purposes.
I’m just providing an example. It’s not my story. I invite you or Wei Dai to say what it is the two speakers disagree about even after they agree about the conclusions of CDT and EDT for Newcomb’s problem. If all you can say is that they disagree about what they ‘should’ do, or what it would be ‘rational’ to do, then we’ll have to talk about things at that level of understanding, but that will be tricky.
What other levels of understanding do we have? The question needs to be addressed on its own terms. Very tricky. There are ways of making this better: platonism extended to everything seems to help a lot, for example. Toy models of epistemic and decision-theoretic primitives also clarify things by training intuition.
We’re making progress on what it means for brains to value things, for example. Or we can talk in an ends-relational sense, and specify ends. Or we can keep things even more vague but then we can’t say much at all about ‘ought’ or ‘rational’.
The problem is that it doesn’t look any better than figuring out what CDT or EDT recommend. What the brain recommends is not automatically relevant to the question of what should be done.
If by ‘should’ in this sense you mean the ‘intended’ meaning of ‘should’ that we don’t have access to, then I agree.
Note: Wei Dai and I chatted for a while, and this resulted in three new clarifying paragraphs at the end of the is-ought section of my post ‘Pluralistic Moral Reductionism.’
Some remaining issues:
Even given your disclaimer, I suspect we still disagree on the merits of Taboo as it applies to metaethics. Have you tried having others who are metaethically confused play Taboo in real life, and if so, did it help?
People like Eliezer and Drescher, von Neumann and Savage, have been able to make clear progress in understanding the nature of rationality, and the methods they used did not involve much (if any) neuroscience. On “morality” we don’t have such past successes to guide us, but your focus on neuroscience still seems misguided according to my intuitions.
Yes. The most common result is that people come to realize they don’t know what they mean by ‘morally good’, unless they are theists.
If it looks like I’m focusing on neuroscience, I think that’s an accident of looking at work I’ve produced in a 4-month period rather than over a longer period (that hasn’t occurred yet). I don’t think neuroscience is as central to metaethics or rationality as my recent output might suggest. Humans with meat-brains are strange agents who will make up a tiny minority of rational and moral agents in the history of intelligent agents in our light-cone (unless we bring an end to intelligent agents in our light-cone).
Huh, I think that would have been good to mention in one of your posts. (Unless you did and I failed to notice it.)
It occurs to me that with a bit of tweaking to Austere Metaethics (which I’ll call Interim Metaethics), we can help everyone realize that they don’t know what they mean by “morally good”.
For example:
Deontologist: Should we build a deontic seed AI?
Interim Metaethicist: What do you mean by “should X”?
Deontologist: “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z.”
Interim Metaethicist: Are you sure? If that’s really what you mean, then when a consequentialist says “should X” he probably means “X maximizes expected utility according to decision theory Y and utility function Z”. In which case the two of you do not actually disagree. But you do disagree with him, right?
Deontologist: Good point. I guess I don’t really mean that by “should”. I’m confused.
(Doesn’t that seem like an improvement over Austere Metaethics?)
I guess one difference between us is that I don’t see anything particularly ‘wrong’ with using stipulative definitions as long as you’re aware that they don’t match the intended meaning (that we don’t have access to yet), whereas you like to characterize stipulative definitions as ‘wrong’ when they don’t match the intended meaning.
But perhaps I should add one post before my empathic metaethics post which stresses that the stipulative definitions of ‘austere metaethics’ don’t match the intended meaning—and we can make this point by using all the standard thought experiments that deontologists and utilitarians and virtue ethicists and contractarian theorists use against each other.
After the above conversation, wouldn’t the deontologist want to figure out what he actually means by “should” and what its properties are? Why would he want to continue to use the stipulated definition that he knows he doesn’t actually mean? I mean I can imagine something like:
Deontologist: I guess I don’t really mean that by “should”, but I need to publish a few more papers for tenure, so please just help me figure out whether we should build a deontic seed AI under that stipulated definition of “should”, so I can finish my paper and submit it to the Journal of Machine Deontology.
But even in this case it would make more sense for him to avoid “stipulative definition” and instead say:
Deontologist: Ok, by “should” I actually mean a concept that I can’t define at this point. But I guess it has something to do with deontic logic, and it would be useful to explore the properties of deontic logic in more detail. So, can you please help me figure out whether building a deontic seed AI is obligatory (by deontic logic) if we assume axiomatic imperatives Y and Z?
This way, he clarifies to himself and others that “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z” is not what he means by “should X”, but is instead a guess about the nature of morality (a concept that we can’t yet precisely define).
Perhaps you’d answer that a stipulated meaning is just that, a guess about the nature of something. But as you know, words have connotations, and I think the connotation of “guess” is more appropriate here than “meaning”.
The problem is that we have to act in the world now. We can’t wait around for metaethics and decision theory to be solved. Thus, science books have glossaries in the back full of highly useful operationalized and stipulated definitions for hundreds of terms, whether or not they match the intended meanings (that we don’t have access to) of those terms for person A, or the intended meanings of those terms for person B, or the intended meanings of those terms for person C.
I think this glossary business is a familiar enough practice that calling that thing a glossary of ‘meanings’ instead of a glossary of ‘guesses at meanings’ is fine. Maybe ‘meaning’ doesn’t have the connotations for me that it has for you.
Science needs doing, laws need to be written and enforced, narrow AIs need to be programmed, best practices in medicine need to be written, agents need to act… all before metaethics and decision theory are solved. In a great many cases, we need to have meaning_stipulated before we can figure out meaning_intended.
Sigh… Maybe I should just put a sticky note on my monitor that says
REMEMBER: You probably don’t actually disagree with Luke, because whenever he says “X means Z by Y”, he might just mean “X stipulated Y to mean Z”, which in turn is just another way of saying “X guesses that the nature of Y is Z”.
That might work.
We humans have different intuitions about the meanings of terms and the nature of meaning itself, and thus we’re all speaking slightly different languages. We always need to translate between our languages, which is where Taboo and testing for anticipations come in handy.
I’m using the concept of meaning from linguistics, which seems fair to me. In linguistics, stipulated meaning is most definitely a kind of meaning (and not merely a kind of guessing at meaning), for it is often “what is expressed by the writer or speaker, and what is conveyed to the reader or listener, provided that they talk about the same thing.”
Whatever the case, this language looks confusing/misleading enough to avoid. It conflates the actual search for intended meaning with all those irrelevant stipulations, and assigns misleading connotations to the words referring to these things. In Eliezer’s sequences, the term was “fake utility function”. The presence of “fake” in the term is important: it reminds us that the view is incorrect.
So far, you’ve managed to confuse me and Wei with this terminology alone, probably many others as well.
Perhaps, though I’ve gotten comments from others that it was highly clarifying for them. Maybe they’re more used to the meaning of ‘meaning’ from linguistics.
Does this new paragraph at the end of this section in PMR help?
It’s not clear from this paragraph whether “intuitive concept” refers to the oafish tools in the human brain (which have the same problems as stipulated definitions, including irrelevance) or the intended meaning that those tools seek. Conceptual analysis, as I understand it, is concerned with analysis of the imperfect intuitive tools, so it’s also unclear in what capacity you mention conceptual analysis here.
(I do think this and other changes will probably make new readers less confused.)
Here’s the way I’m thinking about it.
Roger has an intuitive concept of ‘morally good’, the intended meaning of which he doesn’t fully have access to (but it could be discovered by something like CEV). Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
The conceptual analyst comes along and says: “Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to a machine that gave each of them maximal, beyond-orgasmic pleasure for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?”
ROGER: Huh. I guess that’s not quite what I mean by ‘morally good’. I think what I mean by ‘morally good’ is ‘that which produces the greatest subjective satisfaction of wants in the greatest number’.
CONCEPTUAL ANALYST: Okay, then. Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to ‘The Matrix’ and made them believe and feel that all their wants were being satisfied, for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?
ROGER: No, I guess that’s not what I mean, either. What I really mean is...
And around and around we go, for centuries.
The problem with trying to access our intended meaning for ‘morally good’ by this intuitive process is that it brings into play, as you say, all the ‘oafish tools’ in the human brain. And philosophers have historically not paid much attention to the science of how intuitions work.
Does that make sense?
Do you mean that the intuition says the same thing as “pleasure-maximization”, or that the intended meaning can be captured as “pleasure-maximization”? Even if the intuition is saying exactly “pleasure-maximization”, that’s not necessarily the intended meaning, and so it’s unclear why one would try to replicate the intuitive tool rather than search for a characterization of the intended meaning that is better than the intuitive tool. This is the distinction I was complaining about.
(This is an isolated point unrelated to the rest of your comment.)
Understood. I think I’m trying to figure out if there’s a better way to talk about this ‘intended meaning’ (that we don’t yet have access to) than to say ‘intended meaning’ or ‘intuitive meaning’. But maybe I’ll just have to say ‘intended meaning (that we don’t yet have access to)’.
New paragraph version:
You think this applies to figuring out decision theory for FAI? If not, how is that relevant in this context?
Vladimir,
I’ve been very clear many times that ‘austere metaethics’ is for clearing up certain types of confusions, but won’t do anything to solve FAI, which is why we need ‘empathic metaethics’.
I was discussing that particular comment, not rehashing the intention behind ‘austere metaethics’.
More specifically, you made the statement “We can’t wait around for metaethics and decision theory to be solved.” It’s not clear to me what purpose is being served by what alternative action to “waiting around for metaethics to be solved”. It looks like you were responding to Wei’s invitation to justify the use of the word “meaning” instead of “guess”, but it’s not clear how your response relates to that question.
Like I said over here, I’m using the concept of ‘meaning’ from linguistics. I’m hoping that fewer people are confused by my use of ‘meaning’ as employed in the field that studies meaning than if I had used ‘meaning’ in a more narrow and less standard way, like Wei Dai’s. Perhaps I’m wrong about that, but I’m not sure.
My comment above about how “we have to act in the world now” gives one reason why, I suspect, the linguist’s sense of ‘meaning’ includes stipulated meaning, and why stipulated meaning is so common.
In any case, I think you and Wei Dai have helped me think about how to be more clear to more people by adding such clarifications as this.
(This is similar to my reaction expressed here.)
In those paragraphs, you add intuition as an alternative to stipulated meaning. But this is not what we are talking about; we are talking about some unknown but normative meaning that can’t presently be stipulated, and that is referred to partly through intuition in a way that is more accurate than any currently available stipulation. What the intuition tells us is as irrelevant as what the various stipulations tell us; what matters is the thing that the imperfect intuition refers to. This idea doesn’t require a notion of automated stipulation (“empathic” discussion).
“some unknown, but normative meaning that can’t be presently stipulated” is what I meant by “intuitive meaning” in this case.
I’ve never thought of ‘empathic’ discussion as ‘automated stipulation’. What do you mean by that?
Even our stipulated definitions are only promissory notes for meaning. Luckily, stipulated definitions can be quite useful for achieving our goals. Figuring out what we ‘really want’, or what we ‘rationally ought to do’ when faced with Newcomb’s problem, would also be useful. Such terms carry even vaguer promissory notes for meaning than stipulated definitions do, and yet they are worth pursuing.
My understanding of this topic is as follows.
Treat intuition as just another stipulated definition, that happens to be expressed as a pattern of mind activity, as opposed to a sequence of words. The intuition itself doesn’t define the thing it refers to, it can be slightly wrong, or very wrong. The same goes for words. Both intuition and various words we might find are tools for referring to some abstract structure (intended meaning), that is not accurately captured by any of these tools. The purpose of intuition, and of words, is in capturing this structure accurately, accessing its properties. We can develop better understanding by inventing new words, training new intuitions, etc.
None of these tools hold a privileged position with respect to the target structure, some of them just happen to more carefully refer to it. At the beginning of any investigation, we would typically only have intuitions, which specify the problem that needs solving. They are inaccurate fuzzy lumps of confusion, too. At the same time, any early attempt at finding better tools will be unsuccessful, explicit definitions will fail to capture the intended meaning, even as intuition doesn’t capture it precisely. Attempts at guiding intuition to better precision can likewise make it a less accurate tool for accessing the original meaning. On the other hand, when the topic is well-understood, we might find an explicit definition that is much better than the original intuition. We might train new intuitions that reflect the new explicit definition, and are much better tools than the original intuition.
As far as I can tell, I agree with all of this.
And as far as I can tell, you don’t agree. You express agreement too much; like your stipulated-meaning thought experiments, this is one of the problems. But I’d probably need a significantly clearer presentation of what feels wrong to make progress on our disagreement.
I look forward to it.
I’m not sure what you mean by “you agree too much”, though. Like I said, as far as I can tell I agree with everything in this comment of yours.
I agree with Wei. There is no reason to talk about “highest expected value” specifically, that would be merely a less clear option on the same list as CDT and EDT recommendations. We need to find the correct decision instead, expected value or not.
Playing Eliezer-post-ping-pong, you are almost demanding “But what do you mean by truth?”. When an idea is unclear, there will be ways of stipulating a precise but even less accurate definition. Thus, you move away from the truth, even as you increase the clarity of discussion and defensibility of the arguments.
I updated the bit about expected value here.
No, I agree there are important things to investigate for which we don’t have clear definitions. That’s why I keep talking about ‘empathic metaethics.’
Also, by ‘less accurate definition’ do you just mean that a stipulated definition can differ from the intuitive definition that we don’t have access to? Well of course. But why privilege the intuitive definition by saying a stipulated definition is ‘less accurate’ than it is? I suspect that intuitive definitions are often much less successful at capturing an empirical cluster than some stipulated definitions. Example: ‘planet’.
Not “just”. Not every change is an improvement, but every improvement is a change. There can be better definitions of whatever the intuitions are talking about, and they will differ from the intuitive definitions. But when the purpose of a discussion is referred to only by an unclear intuition, with no other easy way to reach it, stipulating a different definition would normally be a change that is not an improvement.
It’s not easy to find a more successful definition of the same thing. You can’t always just say “taboo” and pick the best thought that decades of careful research failed to rule out. Sometimes the intuitive definition is still better, or, more to the point, the precise explicit definition still misses the point.
(They perhaps shouldn’t have done that.)
An analogy for “sharing a common understanding of morality”: in the sound example, even though the arguers talk about different situations in a confusingly ambiguous way, they share a common understanding of what facts hold in reality. If they were additionally ignorant about reality in different ways (even though there would still be the same truth about reality, they just wouldn’t have reliable access to it), that would bring the situation closer to what we have with morality.
Can you elaborate this a bit more? I don’t follow.
Everyone understands “moral” to entail “should be praised/encouraged”, and everyone understands “immoral” to entail “should be blamed/discouraged”.
“Should”?
Of course “should”. It’s a definition, not a reduction.
Even by getting such confused answers out in the open, we might get them to break out of complacency and recognize the presence of confusion. (Fat chance, of course.)