I’ll first try to restate your position in order to check my understanding. Let me know if I don’t do it justice.
People use “should” in several different ways. Most of these ways can be “reducible to physics”, or in other words can be restated as talking about how our universe is, without losing any of the intended meaning. Some of these ways can’t be so reduced (they are talking about the world of “is not”) but those usages are simply meaningless and can be safely ignored.
I agree that many usages of “should” can be reduced to physics. (Or perhaps instead to mathematics.) But there may be other usages that can’t be so reduced, and which are not clearly safe to ignore. Originally I was planning to wait for you to list the usages of “should” that can be reduced, and then show that there are other usages that are not obviously talking about “the world of is” but are not clearly meaningless either. (Of course I hope that your reductions do cover all of the important/interesting usages, but I’m not expecting that to be the case.)
Since you ask for my criticism now, I’ll just give an example that seems to be one of the hardest to reduce: “Should I consider the lives of random strangers to have (terminal) value?”
(Eliezer’s proposal is that what I’m really asking when I ask that question is “Does my CEV think the lives of random strangers should have (terminal) value?” I’ve given various arguments why I find this solution unsatisfactory. One that is currently fresh on my mind is that “coherent extrapolation” is merely a practical way to find the answer to any given question, but should not be used as the definition of what the question means. For example I could use a variant of CEV (call it Coherent Extrapolated Pi Estimation) to answer “What is the trillionth digit of pi?” but that doesn’t imply that by “the trillionth digit of pi” I actually mean “the output of CEPE”.)
I’m not planning to list all the reductions of normative language. There are too many. People use normative language in too many ways.
Also, I should clarify that when I talk about reducing ought statements into physical statements, I’m including logic. On my view, logic is just a feature of the language we use to talk about physical facts. (More on that if needed.)
Most of these ways can be “reducible to physics”… without losing any of the intended meaning.
I’m not sure I would say “most.”
But there may be other usages that can’t be so reduced, and which are not clearly safe to ignore.
What do you mean by “safe to ignore”?
If you’re talking about something that doesn’t reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about? Fiction? Magic? Those are fine things to talk about, as long as we understand we’re talking about fiction or magic.
Should I consider the lives of random strangers to have (terminal) value?
What about this is hard to reduce? We can ask for what you mean by ‘should’ in this question, and reduce it if possible. Perhaps what you have in mind isn’t reducible (divine commands), but then your question is without an answer.
Or perhaps you’re asking the question in the sense of “Please fix my broken question for me. I don’t know what I mean by ‘should’. Would you please do a stack trace on the cognitive algorithms that generated that question, fix my question, and then answer it for me?” And in that case we’re doing empathic metaethics.
I’m still confused as to what your objection is. Will you clarify?
You said that you’re not interested in an “ought” sentence if it reduces to talking about the world of is not. I was trying to make the same point by “safe to ignore”.
If you’re talking about something that doesn’t reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about?
I don’t know, but I don’t think it’s a good idea to assume that only things that are reducible to physics and/or math are worth talking about. I mean it’s a good working assumption to guide your search for possible meanings of “should”, but why declare that you’re not “interested” in anything else? Couldn’t you make that decision on a case by case basis, just in case there is a meaning of “should” that talks about something else besides physics and/or math and its interestingness will be apparent once you see it?
Or perhaps you’re asking the question in the sense of “Please fix my broken question for me. I don’t know what I mean by ‘should’. Would you please do a stack trace on the cognitive algorithms that generated that question, fix my question, and then answer it for me?” And in that case we’re doing empathic metaethics.
Maybe I should have waited until you finish your sequence after all, because I don’t know what “doing empathic metaethics” actually entails at this point. How are you proposing to “fix my question”? It’s not as if there is a design spec buried somewhere in my brain, and you can check my actual code against the design spec to see where the bug is… Do you want to pick up this conversation after you explain it in more detail?
I don’t think it’s a good idea to assume that only things that are reducible to physics and/or math are worth talking about. I mean it’s a good working assumption to guide your search for possible meanings of “should”, but why declare that you’re not “interested” in anything else?
Maybe this is because I’m fairly confident of physicalism? Of course I’ll change my mind if presented with enough evidence, but I’m not anticipating such a surprise.
‘Interest’ wasn’t the best word for me to use. I’ll have to fix that. All I was trying to say is that if somebody uses ‘ought’ to refer to something that isn’t physical or logical, then this punts the discussion back to a debate over physicalism, which isn’t the topic of my already-too-long ‘Pluralistic Moral Reductionism’ post.
Surely, many people use ‘ought’ to refer to things non-reducible to physics or logic, and they may even be interesting (as in fiction), but in the search for true statements that use ‘ought’ language they are not ‘interesting’, unless physicalism is false (which is a different discussion, then).
Does that make sense? I’ll explain empathic metaethics in more detail later, but I hope we can get some clarity on this part right now.
Maybe this is because I’m fairly confident of physicalism? Of course I’ll change my mind if presented with enough evidence, but I’m not anticipating such a surprise.
First I would call myself a radical platonist instead of a physicalist. (If all universes that exist mathematically also exist physically, perhaps it could be said that there is no difference between platonism and physicalism, but I think most people who call themselves physicalists would deny that premise.) So I think it’s likely that everything “interesting” can be reduced to math, but given the history of philosophy I don’t think I should be very confident in that. See my recent How To Be More Confident… That You’re Wrong.
Right, I’m pretty partial to Tegmark, too. So what I call physicalism is compatible with Tegmark. But could you perhaps give an example of what it would mean to reduce normative language to a logical-mathematical function—even a silly one?
(It’s late and I’m thinking up this example on the spot, so let me know if it doesn’t make sense.)
Suppose I’m in a restaurant and I say to my dinner companion Bob, “I’m too tired to think tonight. You know me pretty well. What do you think I should order?” From the answer I get, I can infer (when I’m not so tired) a set of joint constraints on what Bob believes to be my preferences, what decision theory he applied on my behalf, and the outcome of his (possibly subconscious) computation. If there is little uncertainty about my preferences and the decision theory involved, then the information conveyed by “you should order X” in this context just reduces to a mathematical statement about (for example) what the arg max of a set of weighted averages is.
(I notice an interesting subtlety here. Even though what I infer from “you should order X” is (1) “according to Bob’s computation, the arg max of … is X”, what Bob means by “you should order X” must be (2) “the arg max of … is X”, because if he means (1), then “you should order X” would be true even if Bob made an error in his computation.)
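(To make this concrete, here is a minimal sketch in Python of the computation I have in mind. The dishes, attributes, weights, and scores are all made up for illustration; the point is only that, on this reading, “you should order X” bottoms out in an arg max over weighted averages.)

```python
# Illustrative only: made-up dishes, attribute weights, and scores.

# Bob's model of my preferences: how much I care about each attribute.
weights = {"taste": 0.5, "healthiness": 0.3, "price": 0.2}

# Bob's beliefs about how each dish scores on each attribute (0 to 1).
dishes = {
    "pasta":  {"taste": 0.8, "healthiness": 0.4, "price": 0.7},
    "salad":  {"taste": 0.5, "healthiness": 0.9, "price": 0.8},
    "burger": {"taste": 0.9, "healthiness": 0.2, "price": 0.6},
}

def weighted_average(scores, weights):
    return sum(weights[attr] * scores[attr] for attr in weights)

# On this reading, "you should order X" reduces to: X is the arg max below.
recommendation = max(dishes, key=lambda d: weighted_average(dishes[d], weights))
print(recommendation)  # "salad" under these made-up numbers
```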
Yeah, that’s definitely compatible with what I’m talking about when I talk about reducing normative language to natural language (that is, to math/logic + physics).
Do you think any disagreements or confusion remains in this thread?
Having thought more about these matters over the last couple of weeks, I’ve come to realize that my analysis in the grandparent comment is not very good, and also that I’m confused about the relationship between semantics (i.e., study of meaning) and reductionism.
First, I learned that it’s important (and I failed) to distinguish between (A) the meaning of a sentence (in some context), (B) the set of inferences that can be drawn from it, and (C) what information the speaker intends to convey.
For example, suppose Alice says to Bob, “It’s raining outside. You should wear your rainboots.” The information that Alice really wants to convey by “it’s raining outside” is that there are puddles on the ground. That, along with for example “it’s probably not sunny” and “I will get wet if I don’t use an umbrella”, belongs to the set of inferences that can be drawn from the sentence. But clearly the meaning of “it’s raining outside” is distinct from either of these. Similarly, the fact that Bob can infer that there are puddles on the ground from “you should wear your rainboots” does not show that “you should wear your rainboots” means “there are puddles on the ground”.
Nor does it seem to make sense to say that “you should wear your rainboots” reduces to “there are puddles on the ground” (why should it, when clearly “it’s raining outside” doesn’t reduce that way?), which, by analogy, calls into question my claim in the grandparent comment that “you should order X” reduces to “the arg max of … is X”.
But I’m confused about what reductionism even means in the context of semantics. The Eliezer post that you linked to from Pluralistic Moral Reductionism defined “reductionism” as:
Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.
But that appears to be a position about ontology, and it is not clear to me what implications it has for semantics, especially for the semantics of normative language. (I know you posted a reading list for reductionism, which I have not gone through except to skim the encyclopedia entry. Please let me know if the answer will be apparent once I do read them, or if there is a more specific reference you can point me to that will answer this immediate question.)
Excellent. We should totally be clarifying such things.
There are many things we might intend to communicate when we talk about the ‘meaning’ of a word or phrase or sentence. Let’s consider some possible concepts of ‘the meaning of a sentence’, in the context of declarative sentences only:
(1) The ‘meaning of a sentence’ is what the speaker intended to assert, that assertion being captured by truth conditions the speaker would endorse when asked for them.
(2) The ‘meaning of a sentence’ is what the sentence asserts if the assertion is captured by truth conditions that are fixed by the sentence’s syntax and the first definition of each word that is provided by the Oxford English Dictionary.
(3) The ‘meaning of a sentence’ is what the speaker intended to assert, that assertion being captured by truth conditions determined by a full analysis of the cognitive algorithms that produced the sentence (which are not accessible to the speaker).
There are several other possibilities, even just for declarative sentences.
I tried to make it clear that when doing austere metaethics, I was taking #1 to be the meaning of a declarative moral judgment (e.g. “Murder is wrong!”), at least when the speaker of such sentences intended them to be declarative (rather than intending them to be, say, merely emotive or in other ways ‘non-cognitive’).
The advantage of this is that we can actually answer (to some degree, in many cases) the question of what a moral judgment ‘means’ (in the austere metaethics sense), and thus evaluate whether it is true or untrue. After some questioning of the speaker, we might determine that meaning~1 of “Murder is wrong” in a particular case is actually “Murder is forbidden by Yahweh”, in which case we can evaluate the speaker’s sentence as untrue given its truth conditions (given its meaning~1).
But we may very well want to know instead what is ‘right’ or ‘wrong’ or ‘good’ or ‘bad’ when evaluating sentences that use those words using the third sense of ‘the meaning of a sentence’ listed above. Though my third sense of meaning above is left a bit vague for now, that’s roughly what I’ll be doing when I start talking about empathic metaethics.
Will Sawin has been talking about the ‘meaning’ of ‘ought’ sentences in a fourth sense of the word ‘meaning’ that is related to but not identical to meaning~3 I gave above. I might interpret Will as saying that:
The meaning~4 of ‘ought’ in a declarative ought-sentence is determined by the cognitive algorithms that process ‘ought’ reasoning in a distinctive cognitive module devoted to that task, which do not include normative primitives nor reference to physical phenomena but only relate normative concepts to each other.
I am not going to do a thousand years of conceptual analysis on the English word-tool ‘meaning.’ I’m not going to survey which definition of ‘meaning’ is consistent with the greatest number of our intuitions about its meaning given a certain set of hypothetical scenarios in which we might use the term. Instead, I’m going to taboo ‘meaning’ so that I can use the word along with others to transfer ideas from my head into the heads of others, and take ideas from their heads into mine. If there’s an objection to this, I’ll be tempted to invent a new word-tool that I can use in the circumstances where I currently want to use the word-tool ‘meaning’ to transfer ideas between brains.
In discussing austere metaethics, I’m considering the ‘meaning’ of declarative moral judgment sentences as meaning~1. In discussing empathic metaethics, I’m considering the ‘meaning’ of declarative moral judgment sentences as (something like) meaning~3. I’m also happy to have additional discussions about ‘ought’ when considering the meaning of ‘ought’ as meaning~4, though the empirical assumptions underlying meaning~4 might turn out to be false. We could discuss ‘meaning’ as meaning~2, too, but I’m personally not that interested to do so.
Before I talk about reductionism, does this comment about meaning make sense?
As I indicated in a recent comment, I don’t really see the point of austere metaethics. Meaning~1 just doesn’t seem that interesting, given that meaning~1 is not likely to be closely related to actual meaning, as in your example when someone thinks that by “Murder is wrong” they are asserting “Murder is forbidden by Yahweh”.
Empathic metaethics is much more interesting, of course, but I do not understand why you seem to assume that if we delve into the cognitive algorithms that produce a sentence like “murder is wrong” we will be able to obtain a list of truth conditions. For example if I examine the algorithms behind an Eliza bot that sometimes says “murder is wrong” I’m certainly not going to obtain a list of truth conditions. It seems clear that information/beliefs about math and physics definitely influence the production of normative sentences in humans, but it’s much less clear that those sentences can be said to assert facts about math and physics.
Instead, I’m going to taboo ‘meaning’ so that I can use the word along with others to transfer ideas from my head into the heads of others, and take ideas from their heads into mine.
Can you show me an example of such idea transfer? (Depending on what ideas you want to transfer, perhaps you do not need to “fully” solve metaethics, in which case our interests might diverge at some point.)
If there’s an objection to this, I’ll be tempted to invent a new word-tool that I can use in the circumstances where I currently want to use the word-tool ‘meaning’ to transfer ideas between brains.
This is probably a good idea. (Nesov previously made a general suggestion along those lines.)
I don’t really see the point of austere metaethics. Meaning~1 just doesn’t seem that interesting, given that meaning~1 is not likely to be closely related to actual meaning
What do you mean by ‘actual meaning’?
The point of pluralistic moral reductionism (austere metaethics) is to resolve lots of confused debates in metaethics that arise from doing metaethics (implicitly or explicitly) in the context of traditional conceptual analysis. It’s clearing away the dust and confusion from such debates so that we can move on to figure out what I think is more important: empathic metaethics.
I do not understand why you seem to assume that if we delve into the cognitive algorithms that produce a sentence like “murder is wrong” we will be able to obtain a list of truth conditions
I don’t assume this. Whether this can be done is an open research question.
Can you show me an example of such idea transfer?
My entire post ‘Pluralistic Moral Reductionism’ is an example of such idea transfer. First I specified that one way we can talk about morality is to stipulate what we mean by terms like ‘morally good’, so as to resolve debates about morality in the same way that we resolve a hypothetical debate about ‘sound’ by stipulating our definitions of ‘sound.’ Then I worked through the implications of that approach to metaethics, and suggested toward the end that it wasn’t the only approach to metaethics, and that we’ll explore empathic metaethics in a later post.
I don’t know how to explain “actual meaning”, but it seems intuitively obvious to me that the actual meaning of “murder is wrong” is not “murder is forbidden by Yahweh”, even if the speaker of the sentence believes that murder is wrong because murder is forbidden by Yahweh. Do you disagree with this?
First I specified that one way we can talk about morality is to stipulate what we mean by terms like ‘morally good’, so as to resolve debates about morality in the same way that we resolve a hypothetical debate about ‘sound’ by stipulating our definitions of ‘sound.’
But the way we actually resolved the debate about ‘sound’ is by reaching the understanding that there are two distinct concepts (acoustic vibrations and auditory experience) that are related in a certain way and also happen to share the same signifier. If, prior to reaching this understanding, you ask people to stipulate a definition for ‘sound’ when they use it, they will give you confused answers. I think saying “let’s resolve confusions in metaethics by asking people to stipulate definitions for ‘morally good’”, before we reach a similar level of understanding regarding morality, is to likewise put the cart before the horse.
I don’t know how to explain “actual meaning”, but it seems intuitively obvious to me that the actual meaning of “murder is wrong” is not “murder is forbidden by Yahweh”, even if the speaker of the sentence believes that murder is wrong because murder is forbidden by Yahweh.
That doesn’t seem intuitively obvious to me, which illustrates one reason why I prefer to taboo terms rather than bash my intuitions against the intuitions of others in an endless game of intuitionist conceptual analysis. :)
Perhaps the most common ‘foundational’ family of theories of meaning in linguistics and philosophy of language belong to the mentalist program, according to which semantic content is determined by the mental contents of the speaker, not by an abstract analysis of symbol forms taken out of context from their speaker. One straightforward application of a mentalist approach to meaning would conclude that if the speaker was assuming (or mentally representing) a judgment of moral wrongness in the sense of forbidden-by-God, then the meaning of the speaker’s sentence refers in part to the demands of an imagined deity.
But the way we actually resolved the debate about ‘sound’ is by reaching the understanding that there are two distinct concepts (acoustic vibrations and auditory experience) that are related in a certain way and also happen to share the same signifier. If, prior to reaching this understanding, you ask people to stipulate a definition for ‘sound’ when they use it, they will give you confused answers. I think saying “let’s resolve confusions in metaethics by asking people to stipulate definitions for ‘morally good’”, before we reach a similar level of understanding regarding morality, is to likewise put the cart before the horse.
But “reaching this understanding” with regard to morality was precisely the goal of ‘Conceptual Analysis and Moral Theory’ and ‘Pluralistic Moral Reductionism.’ I repeatedly made the point that people regularly use a narrow family of signifiers (‘morally good’, ‘morally right’, etc.) to call out a wide range of distinct concepts (divine attitudes, consequentialist predictions, deontological judgments, etc.), and that this leads to exactly the kind of confusion encountered by two people who are both using the signifier ‘sound’ to call upon two distinct concepts (acoustic vibrations and auditory experience).
I repeatedly made the point that people regularly use a narrow family of signifiers (‘morally good’, ‘morally right’, etc.) to call out a wide range of distinct concepts (divine attitudes, consequentialist predictions, deontological judgments, etc.), and that this leads to exactly the kind of confusion encountered by two people who are both using the signifier ‘sound’ to call upon two distinct concepts (acoustic vibrations and auditory experience).
With regard to “sound”, the two concepts are complementary, and people can easily agree that “sound” sometimes refers to one or the other or often both of these concepts. The same is not true in the “morality” case. The concepts you list seem mutually exclusive, and most people have a strong intuition that “morality” can correctly refer to at most one of them. For example a consequentialist will argue that a deontologist is wrong when he asserts that “morality” means “adhering to rules X, Y, Z”. Similarly a divine command theorist will not answer “well, that’s true” if an egoist says “murdering Bob (in a way that serves my interests) is right, and I stipulate ‘right’ to mean ‘serving my interests’”.
It appears to me that the confusion here is not being caused mainly by linguistic ambiguity, i.e., people using the same word to refer to different things, which can be easily cleared up once pointed out. I see the situation as being closer to the following: in many cases, people are using “morality” to refer to the same concept, and are disagreeing over the nature of that concept. Some people think it’s equivalent to or closely related to the concept of divine attitudes, and others think it has more to do with well-being of conscious creatures, etc.
I see the situation as being closer to the following: in many cases, people are using “morality” to refer to the same concept, and are disagreeing over the nature of that concept.
When many people agree that murder is wrong but disagree on the reasons why, you can argue that they’re referring to the same concept of morality but confused about its nature. But what about less clear-cut statements, like “women should be able to vote”? Many people in the past would’ve disagreed with that. Would you say they’re referring to a different concept of morality?
I’m not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties. Do you know how to elucidate that?
I tried to explain some of the cause of persistent moral debate (as opposed to e.g. sound debate) in this way:
The problem may be worse for moral terms than for (say) art terms. Moral terms have more powerful connotations than art terms, and are thus a greater attractor for sneaking in connotations. Moral terms are used to persuade. “It’s just wrong!” the moralist cries, “I don’t care what definition you’re using right now. It’s just wrong: don’t do it.”
Moral discourse is rife with motivated cognition. This is part of why, I suspect, people resist dissolving moral debates even while they have no trouble dissolving the ‘tree falling in a forest’ debate.
I’m not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties. Do you know how to elucidate that?
Let me try an analogy. Consider someone who believes in the phlogiston theory of fire, and another person who believes in the oxidation theory. They are having a substantive disagreement about the nature of fire, and not merely causing unnecessary confusion by using the same word “fire” to refer to different things. And if the phlogiston theorist were to say “by ‘fire’ I mean the release of phlogiston” then that would just be wrong, and would be adding to the confusion instead of helping to resolve it.
I think the situation with “morality” is closer to this than to the “sound” example.
(ETA: I could also try to define “same concept” more directly, for example as occupying roughly the same position in the graph of relationships between one’s concepts, or playing approximately the same role in one’s cognitive algorithms, but I’d rather not take an exact position on what “same concept” means if I can avoid it, since I have mostly just an intuitive understanding of it.)
This is the exact debate currently being hashed out by Richard Joyce and Stephen Finlay (whom I interviewed here). A while back I wrote an article that can serve as a good entry point into the debate, here. A response from Joyce is here and here. Finlay replies again here.
I tend to side with Finlay, though I suspect not for all the same reasons. Recently, Joyce has admitted that both languages can work, but he’ll (personally) talk the language of error theory rather than the language of moral naturalism.
I’m having trouble understanding how the debate between Joyce and Finlay, over Error Theory, is the same as ours. (Did you perhaps reply to the wrong comment?)
The core of their debate concerns whether certain features are ‘essential’ to the concept of morality, and thus concerns whether people share the same concept of morality, and what it would mean to say that people share the concept of morality, and what the implications of that are. Phlogiston is even one of the primary examples used throughout the debate. (Also, witches!)
I’m still not getting it. From what I can tell, both Joyce and Finlay implicitly assume that most people are referring to the same concept by “morality”. They do use phlogiston as an example, but seemingly in a very different way from me, to illustrate different points. Also, two of the papers you link to by Joyce don’t cite Finlay at all and I think may not even be part of the debate. Actually the last paper you link to by Joyce (which doesn’t cite Finlay) does seem relevant to our discussion. For example this paragraph:
We gave the name “Earth” to the thing we live upon and at one time reckoned it flat (or at least a good many people reckoned it flat); but the discovery that the thing we live upon is a big ball was not taken to be the discovery that we do not live upon Earth. It was once widely thought that gorillas are aggressive brutes, but the discovery that they’re in fact gentle social creatures was not taken to be the discovery that gorillas do not exist.
I will read that paper over more carefully, and in the mean time, please let me know if you still think the other papers are also relevant, and point to specific passages if yes.
This article by Joyce doesn’t cite Finlay, but its central topic is ‘concessive strategies’ for responding to Mackie, and Finlay is a leading figure in concessive strategies for responding to Mackie. Joyce also doesn’t cite Finlay here, but it discusses how two people who accept that Mackie’s suspect properties fail to refer might nevertheless speak two different languages about whether moral properties exist (as Joyce and Finlay do).
One way of expressing the central debate between them is to say that they are arguing over whether certain features (like moral ‘absolutism’ or ‘objectivity’) are ‘essential’ to moral concepts. (Without the assumption of absolutism, is X a ‘moral’ concept?) Another way to say that is to say that they are arguing over the boundaries of moral concepts; whether people can be said to share the ‘same’ concept of morality but disagree on some of its features, or whether this disagreement means they have ‘different’ concepts of morality.
But really, I’m just trying to get clear on what you might mean by saying that people have the ‘same’ concept of morality while disagreeing on fundamental features, and what you think the implications are. I’m sorry my pointers to the literature weren’t too helpful.
But really, I’m just trying to get clear on what you might mean by saying that people have the ‘same’ concept of morality while disagreeing on fundamental features, and what you think the implications are.
Unfortunately I’m not sure how to explain it better than I already did. But I did notice that Richard Chappell made a similar point (while criticizing Eliezer):
His view implies that many normative disagreements are simply terminological; different people mean different things by the term ‘ought’, so they’re simply talking past each other. This is a popular stance to take, especially among non-philosophers, but it is terribly superficial. See my ‘Is Normativity Just Semantics?’ for more detail.
Chappell’s discussion makes more and more sense to me lately. Many previously central reasons for disagreement turn out to be my misunderstanding, but I haven’t re-read enough to form a new opinion yet.
Sure, except he doesn’t make any arguments for his position. He just says:
Normative disputes, e.g. between theories of wellbeing, are surely more substantive than is allowed for by this account.
I don’t think normative debates are always “merely verbal”. I just think they are very often ‘merely verbal’, and that there are multiple concepts of normativity in use. Chappell and I, for example, seem to have different intuitions (see comments) about what normativity amounts to.
Let’s say a deontologist and a consequentialist are on the board of SIAI, and they are debating which kind of seed AI the Institute should build.
D: We should build a deontic AI. C: We should build a consequentialist AI.
Surely their disagreement is substantive. But if by “we should do X”, the deontologist just means “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z.” and the consequentialist just means “X maximizes expected utility under utility function Y according to decision theory Z” then they are talking past each other and their disagreement is “merely verbal”. Yet these are the kinds of meanings you seem to think their normative language does have. Don’t you think there’s something wrong about that?
(ETA: To any bystanders still following this argument, I feel like I’m starting to repeat myself without making much progress in resolving this disagreement. Any suggestion what to do?)
I completely agree with what you are saying. Disagreement requires shared meaning. Cons. and Deont. are rival theories, not alternative meanings.
(ETA: To any bystanders still following this argument, I feel like I’m starting to repeat myself without making much progress in resolving this disagreement. Any suggestion what to do?)
Good question. There’s a lot of momentum behind the “meaning theory”.
If the deontologist and the consequentialist have previously stipulated different definitions for ‘should’ as used in sentences D and C, then they aren’t necessarily disagreeing with each other by having one state D and the other state C.
But perhaps we aren’t considering propositions D and C using meaning_stipulated. Perhaps we decide to consider propositions D and C using meaning-cognitive-algorithm. And perhaps a completed cognitive neuroscience would show us that they both mean the same thing by ‘should’ in the meaning-cognitive-algorithm sense. And in that case they would be having a substantive disagreement, when using meaning-cognitive-algorithm to determine the truth conditions of D and C.
Thus:
meaning-stipulated of D is X, meaning-stipulated of C is Y, but X and Y need not be mutually exclusive.
meaning-cognitive-algorithm of D is A, meaning-cognitive-algorithm of C is B, and in my story above A and B are mutually exclusive.
Since people have different ideas about what ‘meaning’ is, I’m skipping past that worry by tabooing ‘meaning.’
[Damn I wish LW would let me use underscores or subscripts instead of hyphens!]
Suppose the deontologist and the consequentialist have previously stipulated different definitions for ‘should’ as used in sentences D and C, but if you ask them they also say that they are disagreeing with each other in a substantive way. They must be wrong about either what their sentences mean, or about whether their disagreement is substantive, right? (*) I think it’s more likely that they’re wrong about what their sentences mean, because meanings of normative sentences are confusing and lack of substantive disagreement in this particular scenario seems very unlikely.
(*) If we replace “mean” in this sentence by “mean_stipulated”, then it no longer makes sense, since clearly it’s possible that their sentences mean_stipulated D and C, and that their disagreement is substantive. Actually now that I think about it, I’m not sure that “mean” can ever be correctly taboo’ed into “mean_stipulated”. For example, suppose Bob says “By ‘sound’ I mean acoustic waves. Sorry, I misspoke, actually by ‘sound’ I mean auditory experiences. [some time later] To recall, by ‘sound’ I mean auditory experiences.” The first “mean” does not mean “mean_stipulated” since Bob hadn’t stipulated any meanings yet when he said that. The second “mean” does not mean “mean_stipulated” since otherwise that sentence would just be stating a plain falsehood. The third “mean” must mean the same thing as the second “mean”, so it’s also not “mean_stipulated”.
To continue along this line, suppose Alice inserts after the first sentence, “Bob, that sounds wrong. I think by ‘sound’ you mean auditory experiences.” Obviously not “mean_stipulated” here. Alternatively, suppose Bob only says the first sentence, and nobody bothers to correct him because they’ve all heard the lecture several times and know that Bob means auditory experiences by ‘sound’, and think that everyone else knows. Except that Carol is new and doesn’t know, and writes “In this lecture, ‘sound’ means acoustic waves.” in her notebook. Later on Alice tells Carol what everyone else knows, and Carol corrects the sentence. If “mean” means “mean_stipulated” in that sentence, then it would be true and there would be no need to correct it.
Since people have different ideas about what ‘meaning’ is, I’m skipping past that worry by tabooing ‘meaning.’
Taboo seems to be a tool that needs to be wielded very carefully, and wanting to “skip past that worry” is probably not the right frame of mind for wielding it. One can easily taboo a word in a wrong way, and end up adding to confusion, for example by giving the appearance that there is no disagreement when there actually is.
I’m not sure that “mean” can ever be correctly taboo’ed into “mean_stipulated”.
It seems a desperate move to say that stipulative meaning just isn’t a kind of meaning wielded by humans. I use it all the time, it’s used in law, it’s used in other fields, it’s taught in textbooks… If you think stipulative meaning just isn’t a legitimate kind of meaning commonly used by humans, I don’t know what to say.
One can easily taboo a word in a wrong way, and end up adding to confusion, for example by giving the appearance that there is no disagreement when there actually is.
I agree, but ‘tabooing’ ‘meaning’ to mean (in some cases) ‘stipulated meaning’ shouldn’t be objectionable because, as I said above, it’s a very commonly used kind of ‘meaning.’ We can also taboo ‘meaning’ to refer to other types of meaning.
And like I said, there often is substantive disagreement. I was just trying to say that sometimes there isn’t substantive disagreement, and we can figure out whether or not we’re having a substantive disagreement by playing a little Taboo (and by checking anticipations). This is precisely the kind of use for which playing Taboo was originally proposed:
the principle [of Tabooing] applies much more broadly:
Albert: “A tree falling in a deserted forest makes a sound.”
Barry: “A tree falling in a deserted forest does not make a sound.”
Clearly, since one says “sound” and one says “~sound”, we must have a contradiction, right? But suppose that they both dereference their pointers before speaking:
Albert: “A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations].”
Barry: “A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences].”
Now there is no longer an apparent collision—all they had to do was prohibit themselves from using the word sound. If “acoustic vibrations” came into dispute, we would just play Taboo again and say “pressure waves in a material medium”; if necessary we would play Taboo again on the word “wave” and replace it with the wave equation. (Play Taboo on “auditory experience” and you get “That form of sensory processing, within the human brain, which takes as input a linear time series of frequency mixes.”)
And like I said, there often is substantive disagreement. I was just trying to say that sometimes there isn’t substantive disagreement, and we can figure out whether or not we’re having a substantive disagreement by playing a little Taboo (and by checking anticipations).
To come back to this point, what if we can’t translate a disagreement into disagreement over anticipations (which is the case in many debates over rationality and morality), nor do the participants know how to correctly Taboo (i.e., they don’t know how to capture the meanings of certain key words), but there still seems to be substantive disagreement or the participants themselves claim they do have a substantive disagreement?
Earlier, in another context, I suggested that we extend Eliezer’s “make beliefs pay rent in anticipated experiences” into “make beliefs pay rent in decision making”. Perhaps we can apply that here as well, and say that a substantive disagreement is one that implies a difference in what to do, in at least one possible circumstance. What do you think?
But I missed your point in the previous response. The idea of disagreement about decisions, in the same sense as the usual disagreement about anticipation caused by errors/uncertainty, is interesting. This is not bargaining about outcome, for the object under consideration is the agents’ belief, not the fact the belief is about. The agents could work on a correct belief about a fact even in the absence of reliable access to the fact itself, reaching agreement.
Perhaps we can apply that here as well, and say that a substantive disagreement is one that implies a difference in what to do, in at least one possible circumstance.
It seems that “what to do” has to refer to properties of a fixed fact, so disagreement is bargaining over what actually gets determined, and so probably doesn’t even involve different anticipations.
Both your suggestions sound plausible. I’ll have to think about it more when I have time to work more on this problem, probably in the context of a planned LW post on Chalmers’s Verbal Disputes paper. Right now I have to get back to some other projects.
I was just trying to say that sometimes there isn’t substantive disagreement, and we can figure out whether or not we’re having a substantive disagreement by playing a little Taboo (and by checking anticipations).
But that assumes that two sides of the disagreement are both Taboo’ing correctly. How can you tell? (You do agree that Taboo is hard and people can easily get it wrong, yes?)
ETA: Do you want to try to hash this out via online chat? I added you to my Google Chat contacts a few days ago, but it’s still showing “awaiting authorization”.
But that assumes that two sides of the disagreement are both Taboo’ing correctly.
Not sure what ‘correctly’ means, here. I’d feel safer saying they were both Tabooing ‘acceptably’. In the above example, Albert and Barry were both Tabooing ‘acceptably.’ It would have been strange and unhelpful if one of them had Tabooed ‘sound’ to mean ‘rodents on the moon’. But Tabooing ‘sounds’ to talk about auditory experiences or acoustic vibrations is fine, because those are two commonly used meanings for ‘sound’. Likewise, ‘stipulated meaning’ and ‘intuitive meaning’ and a few other things are commonly used meanings of ‘meaning.’
If you’re saying that there’s “only one correct meaning for ‘meaning’” or “only one correct meaning for ‘ought’”, then I’m not sure what to make of that, since humans employ the word-tool ‘meaning’ and the word-tool ‘ought’ in a variety of ways. If whatever you’re saying predicts otherwise, then what you’re saying is empirically incorrect. But that’s so obvious that I keep assuming you must be saying something else.
Just because there’s a word “art” doesn’t mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.
Wondering how to define a word means you’re looking at the problem the wrong way—searching for the mysterious essence of what is, in fact, a communication signal.
Another point. Switching back to a particular ‘conventional’ meaning that doesn’t match the stipulative meaning you just gave a word is one of the ways words can be wrong (#4).
And frankly, I’m worried that we are falling prey to the 14th way words can be wrong:
You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what’s left to ask by arguing, “Is it a blegg?” But if your brain’s categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there’s a leftover question.
And, the 17th way words can be wrong:
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we’re hard to stop; if we have no common language, we’ll draw pictures in sand. When you each understand what is in the other’s mind, you are done.
Now, I suspect you may be trying to say that I’m committing mistake #20:
You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.
But I’ve pointed out that, for example, stipulative meaning is a very common usage of ‘meaning’...
That’s a great example. I’ll reproduce it here for readability of this thread:
Consider a hypothetical debate between two decision theorists who happen to be Taboo fans:
A: It’s rational to two-box in Newcomb’s problem.
B: No, one-boxing is rational.
A: Let’s taboo “rational” and replace it with math instead. What I meant was that two-boxing is what CDT recommends.
B: Oh, what I meant was that one-boxing is what EDT recommends.
A: Great, it looks like we don’t disagree after all!
What did these two Taboo’ers do wrong, exactly?
I’d rather not talk about ‘wrong’; that makes things messier. But let me offer a few comments on what happened:
If this conversation occurred at a decision theory meetup known to have an even mix of CDTers and EDTers, then it was perhaps inefficient (for communication) for either of them to use ‘rational’ to mean either CDT-rational or EDT-rational. That strategy was only going to cause confusion until Tabooing occurred.
If this conversation occurred at a decision theory meetup for CDTers, then person A might be forgiven for assuming the other person would think of ‘rational’ in terms of ‘CDT-rational’. But then person A used Tabooing to discover that an EDTer had snuck into the party, and they don’t disagree about the solutions to Newcomb’s problem recommended by EDT and CDT.
In either case, once they’ve had the conversation quoted above, they are correct that they don’t disagree about the solutions to Newcomb’s problem recommended by EDT and CDT. Instead, their disagreement lies elsewhere. They still disagree about what action has the highest expected value when an agent is faced with Newcomb’s dilemma. Now that they’ve cleared up their momentary confusion about ‘rational’, they can move on to discuss the point at which they really do disagree. Tabooing for the win.
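For concreteness, here is a minimal sketch (with assumed payoffs and predictor accuracy, purely for illustration) of what the two stipulated meanings compute. Both Taboo’d claims come out true at once, which is why they no longer conflict after Tabooing:

```python
# Illustrative only: assumed payoffs and predictor accuracy for Newcomb's problem.

ACCURACY = 0.99        # assumed probability the predictor guesses correctly
OPAQUE = 1_000_000     # opaque box contents if one-boxing was predicted
TRANSPARENT = 1_000    # transparent box contents

def edt_value(action):
    # Evidential expected value: the action is treated as evidence
    # about what the predictor put in the opaque box.
    if action == "one-box":
        return ACCURACY * OPAQUE
    return (1 - ACCURACY) * (OPAQUE + TRANSPARENT) + ACCURACY * TRANSPARENT

def cdt_value(action, p_opaque_full):
    # Causal expected value: the boxes are already filled, so the action
    # cannot change their contents; two-boxing dominates for any fixed belief.
    base = p_opaque_full * OPAQUE
    return base + (TRANSPARENT if action == "two-box" else 0)

actions = ["one-box", "two-box"]
edt_recommends = max(actions, key=edt_value)                    # "one-box"
cdt_recommends = max(actions, key=lambda a: cdt_value(a, 0.5))  # "two-box"

# "One-boxing is what EDT recommends" and "two-boxing is what CDT recommends"
# are both true, so the Taboo'd sentences do not contradict each other.
print(edt_recommends, cdt_recommends)
```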
They still disagree about what action has the highest expected value when an agent is faced with Newcomb’s dilemma.
An action does not naturally “have” an expected value, it is assigned an expected value by a combination of decision theory, prior, and utility function, so we can’t describe their disagreement as “about what action has the highest expected value”. It seems that we can only describe their disagreement as about “what is rational” or “what is the correct decision theory” because we don’t know how to Taboo “rational” or “correct” in a way that preserves the substantive nature of their disagreement. (BTW, I guess we could define “have” to mean “assigned by the correct decision theory/prior/utility function” but that doesn’t help.)
Now that they’ve cleared up their momentary confusion about ‘rational’, they can move on to discuss the point at which they really do disagree.
But how do they (or you) know that they actually do disagree? According to their Taboo transcript, they do not disagree. It seems that there must be an alternative way to detect substantive disagreement, other than by asking people to Taboo?
ETA:
Tabooing for the win.
If people actually disagree, but through the process of Tabooing conclude that they do not disagree (like in the above example), that should count as a loss for Tabooing, right? In the case of “morality”, why do you trust the process of Tabooing so much that you do not give this possibility much credence?
An action does not naturally “have” an expected value, it is assigned an expected value by a combination of decision theory, prior, and utility function, so we can’t describe their disagreement as “about what action has the highest expected value”.
Fair enough. Let me try again: “They still disagree about what action is most likely to fulfill the agent’s desires when the agent is faced with Newcomb’s dilemma.” Or something like that.
But how do they (or you) know that they actually do disagree? According to their Taboo transcript, they do not disagree.
According to their Taboo transcript, they don’t disagree over the solutions of Newcomb’s problem recommended by EDT and CDT. But they might still disagree about whether EDT or CDT is most likely to fulfill the agent’s desires when faced with Newcomb’s problem.
It seems that there must be an alternative way to detect substantive disagreement, other than by asking people to Taboo?
Yes. Ask about anticipations.
If people actually disagree, but through the process of Tabooing conclude that they do not disagree (like in the above example), that should count as a loss for Tabooing, right?
That didn’t happen in this example. They do not, in fact, disagree over the solutions to Newcomb’s problem recommended by EDT and CDT. If they disagree, it’s about something else, like who is the tallest living person on Earth or which action is most likely to fulfill an agent’s desires when faced with Newcomb’s dilemma.
Of course Tabooing can go wrong, but it’s a useful tool. So is testing for differences of anticipation, though that can also go wrong.
In the case of “morality”, why do you trust the process of Tabooing so much that you do not give this possibility much credence?
No, I think it’s quite plausible that Tabooing can be done wrong when talking about morality. In fact, it may be more likely to go wrong there than anywhere else. But it’s also better to Taboo than to simply not use such a test for surface-level confusion. It’s also another option to not Taboo and instead propose that we try to decode the cognitive algorithms involved in order to get a clearer picture of our intuitive notion of moral terms than we can get using introspection and intuition.
“They still disagree about what action is most likely to fulfill the agent’s desires when the agent is faced with Newcomb’s dilemma.”
This introduces even more assumptions into the picture. Why is fulfillment of desires, or specifically the agent’s desires, relevant? Why is “most likely” in there? You are trying to make things precise at the expense of accuracy; that’s the big Taboo failure mode: increasingly obscure lost purposes.
I’m just providing an example. It’s not my story. I invite you or Wei Dai to say what it is the two speakers disagree about even after they agree about the conclusions of CDT and EDT for Newcomb’s problem. If all you can say is that they disagree about what they ‘should’ do, or what it would be ‘rational’ to do, then we’ll have to talk about things at that level of understanding, but that will be tricky.
If all you can say is that they disagree about what they ‘should’ do, or what it would be ‘rational’ to do, then we’ll have to talk about things at that level of understanding, but that will be tricky.
What other levels of understanding do we have? The question needs to be addressed on its own terms. Very tricky. There are ways of making this better; platonism extended to everything seems to help a lot, for example. Toy models of epistemic and decision-theoretic primitives also clarify things, training intuition.
We’re making progress on what it means for brains to value things, for example. Or we can talk in an ends-relational sense, and specify ends. Or we can keep things even more vague but then we can’t say much at all about ‘ought’ or ‘rational’.
The problem is that it doesn’t look any better than figuring out what CDT or EDT recommend. What the brain recommends is not automatically relevant to the question of what should be done.
The problem is that it doesn’t look any better than figuring out what CDT or EDT recommend. What the brain recommends is not automatically relevant to the question of what should be done.
If by ‘should’ in this sense you mean the ‘intended’ meaning of ‘should’ that we don’t have access to, then I agree.
Note: Wei Dai and I chatted for a while, and this resulted in three new clarifying paragraphs at the end of the is-ought section of my post ‘Pluralistic Moral Reductionism.’
Even given your disclaimer, I suspect we still disagree on the merits of Taboo as it applies to metaethics. Have you tried having others who are metaethically confused play Taboo in real life, and if so, did it help?
People like Eliezer and Drescher, von Neumann and Savage, have been able to make clear progress in understanding the nature of rationality, and the methods they used did not involve much (if any) neuroscience. On “morality” we don’t have such past successes to guide us, but your focus on neuroscience still seems misguided according to my intuitions.
Have you tried having others who are metaethically confused play Taboo in real life, and if so, did it help?
Yes. The most common result is that people come to realize they don’t know what they mean by ‘morally good’, unless they are theists.
People like Eliezer and Drescher, von Neumann and Savage, have been able to make clear progress in understanding the nature of rationality, and the methods they used did not involve much (if any) neuroscience. On “morality” we don’t have such past successes to guide us, but your focus on neuroscience still seems misguided according to my intuitions.
If it looks like I’m focusing on neuroscience, I think that’s an accident of looking at work I’ve produced in a 4-month period rather than over a longer period (that hasn’t occurred yet). I don’t think neuroscience is as central to metaethics or rationality as my recent output might suggest. Humans with meat-brains are strange agents who will make up a tiny minority of rational and moral agents in the history of intelligent agents in our light-cone (unless we bring an end to intelligent agents in our light-cone).
Yes. The most common result is that people come to realize they don’t know what they mean by ‘morally good’, unless they are theists.
Huh, I think that would have been good to mention in one of your posts. (Unless you did and I failed to notice it.)
It occurs to me that with a bit of tweaking to Austere Metaethics (which I’ll call Interim Metaethics), we can help everyone realize that they don’t know what they mean by “morally good”.
For example:
Deontologist: Should we build a deontic seed AI?
Interim Metaethicist: What do you mean by “should X”?
Deontologist: “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z.”
Interim Metaethicist: Are you sure? If that’s really what you mean, then when a consequentialist says “should X” he probably means “X maximizes expected utility according to decision theory Y and utility function Z”. In which case the two of you do not actually disagree. But you do disagree with him, right?
Deontologist: Good point. I guess I don’t really mean that by “should”. I’m confused.
(Doesn’t that seem like an improvement over Austere Metaethics?)
I guess one difference between us is that I don’t see anything particularly ‘wrong’ with using stipulative definitions as long as you’re aware that they don’t match the intended meaning (that we don’t have access to yet), whereas you like to characterize stipulative definitions as ‘wrong’ when they don’t match the intended meaning.
But perhaps I should add one post before my empathic metaethics post which stresses that the stipulative definitions of ‘austere metaethics’ don’t match the intended meaning—and we can make this point by using all the standard thought experiments that deontologists and utilitarians and virtue ethicists and contractarian theorists use against each other.
After the above conversation, wouldn’t the deontologist want to figure out what he actually means by “should” and what its properties are? Why would he want to continue to use the stipulated definition that he knows he doesn’t actually mean? I mean I can imagine something like:
Deontologist: I guess I don’t really mean that by “should”, but I need to publish a few more papers for tenure, so please just help me figure out whether we should build a deontic seed AI under that stipulated definition of “should”, so I can finish my paper and submit it to the Journal of Machine Deontology.
But even in this case it would make more sense for him to avoid “stipulative definition” and instead say
Deontologist: Ok, by “should” I actually mean a concept that I can’t define at this point. But I guess it has something to do with deontic logic, and it would be useful to explore the properties of deontic logic in more detail. So, can you please help me figure out whether building a deontic seed AI is obligatory (by deontic logic) if we assume axiomatic imperatives Y and Z?
This way, he clarifies to himself and others that “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z.” is not what he means by “should X”, but instead a guess about the nature of morality (a concept that we can’t yet precisely define).
Perhaps you’d answer that a stipulated meaning is just that, a guess about the nature of something. But as you know, words have connotations, and I think the connotation of “guess” is more appropriate here than “meaning”.
Perhaps you’d answer that a stipulated meaning is just that, a guess about the nature of something. But as you know, words have connotations, and I think the connotation of “guess” is more appropriate here than “meaning”.
The problem is that we have to act in the world now. We can’t wait around for metaethics and decision theory to be solved. Thus, science books have glossaries in the back full of highly useful operationalized and stipulated definitions for hundreds of terms, whether or not they match the intended meanings (that we don’t have access to) of those terms for person A, or the intended meanings of those terms for person B, or the intended meanings for those terms for person C.
I think this glossary business is a familiar enough practice that calling that thing a glossary of ‘meanings’ instead of a glossary of ‘guesses at meanings’ is fine. Maybe ‘meaning’ doesn’t have the connotations for me that it has for you.
Science needs doing, laws need to be written and enforced, narrow AIs need to be programmed, best practices in medicine need to be written, agents need to act… all before metaethics and decision theory are solved. In a great many cases, we need to have meaning_stipulated before we can figure out meaning_intended.
Sigh… Maybe I should just put a sticky note on my monitor that says
REMEMBER: You probably don’t actually disagree with Luke, because whenever he says “X means Z by Y”, he might just mean “X stipulated Y to mean Z”, which in turn is just another way of saying “X guesses that the nature of Y is Z”.
We humans have different intuitions about the meanings of terms and the nature of meaning itself, and thus we’re all speaking slightly different languages. We always need to translate between our languages, which is where Taboo and testing for anticipations come in handy.
I’m using the concept of meaning from linguistics, which seems fair to me. In linguistics, stipulated meaning is most definitely a kind of meaning (and not merely a kind of guessing at meaning), for it is often “what is expressed by the writer or speaker, and what is conveyed to the reader or listener, provided that they talk about the same thing.”
Whatever the case, this language looks confusing/misleading enough to avoid. It conflates the actual search for intended meaning with all those irrelevant stipulations, and assigns misleading connotations to the words referring to these things. In Eliezer’s sequences, the term was “fake utility function”. The presence of “fake” in the term is important; it reminds us of the incorrectness of the view.
So far, you’ve managed to confuse me and Wei with this terminology alone, probably many others as well.
So far, you’ve managed to confuse me and Wei with this terminology alone, probably many others as well.
Perhaps, though I’ve gotten comments from others that it was highly clarifying for them. Maybe they’re more used to the meaning of ‘meaning’ from linguistics.
But one must not fall into the trap of thinking that a definition you’ve stipulated (aloud or in your head) for ‘ought’ must match up to your intuitive concept of ‘ought’. In fact, I suspect it never does, which is why the conceptual analysis of ‘ought’ language can go in circles for thousands of years, and why any stipulated meaning of ‘ought’ is a fake utility function. To see clearly to our intuitive concept of ought, we’ll have to try empathic metaethics (see below).
It’s not clear from this paragraph whether “intuitive concept” refers to the oafish tools in the human brain (which have the same problems as stipulated definitions, including irrelevance) or the intended meaning that those tools seek. Conceptual analysis, as I understand it, is concerned with analysis of the imperfect intuitive tools, so it’s also unclear in what capacity you mention conceptual analysis here.
(I do think this and other changes will probably make new readers less confused.)
Roger has an intuitive concept of ‘morally good’, the intended meaning of which he doesn’t fully have access to (but it could be discovered by something like CEV). Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
The conceptual analyst comes along and says: “Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to a machine that gave each of them maximal, beyond-orgasmic pleasure for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?”
ROGER: Huh. I guess that’s not quite what I mean by ‘morally good’. I think what I mean by ‘morally good’ is ‘that which produces the greatest subjective satisfaction of wants in the greatest number’.
CONCEPTUAL ANALYST: Okay, then. Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to ‘The Matrix’ and made them believe and feel that all their wants were being satisfied, for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?
ROGER: No, I guess that’s not what I mean, either. What I really mean is...
And around and around we go, for centuries.
The problem with trying to access our intended meaning for ‘morally good’ by this intuitive process is that it brings into play, as you say, all the ‘oafish tools’ in the human brain. And philosophers have historically not paid much attention to the science of how intuitions work.
Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
Is the claim that the intuition says the same thing as “pleasure-maximization”, or that the intended meaning can be captured as “pleasure-maximization”? Even if the intuition is saying exactly “pleasure-maximization”, that is not necessarily the intended meaning, and so it’s unclear why one would try to replicate the intuitive tool rather than search for a characterization of the intended meaning that is better than the intuitive tool. This is the distinction I was complaining about.
(This is an isolated point unrelated to the rest of your comment.)
Understood. I think I’m trying to figure out if there’s a better way to talk about this ‘intended meaning’ (that we don’t yet have access to) than to say ‘intended meaning’ or ‘intuitive meaning’. But maybe I’ll just have to say ‘intended meaning (that we don’t yet have access to)’.
New paragraph version:
But one must not fall into the trap of thinking that a definition you’ve stipulated (aloud or in your head) for ‘ought’ must match up to your intended meaning of ‘ought’ (to which you don’t have introspective access). In fact, I suspect it never does, which is why the conceptual analysis of ‘ought’ language can go in circles for centuries, and why any stipulated meaning of ‘ought’ is a fake utility function. To see clearly to our intuitive concept of ought, we’ll have to try empathic metaethics (see below).
I’ve been very clear many times that ‘austere metaethics’ is for clearing up certain types of confusions, but won’t do anything to solve FAI, which is why we need ‘empathic metaethics’.
I was discussing that particular comment, not rehashing the intention behind ‘austere metaethics’.
More specifically, you made a statement “We can’t wait around for metaethics and decision theory to be solved.” It’s not clear to me what purpose is being served by what alternative action to “waiting around for metaethics to be solved”. It looks like you were responding to Wei’s invitation to justify the use of word “meaning” instead of “guess”, but it’s not clear how your response relates to that question.
Like I said over here, I’m using the concept of ‘meaning’ from linguistics. I’m hoping that fewer people are confused by my use of ‘meaning’ as employed in the field that studies meaning than if I had used ‘meaning’ in a more narrow and less standard way, like Wei Dai’s. Perhaps I’m wrong about that, but I’m not sure.
My comment above about how “we have to act in the world now” gives one reason why, I suspect, the linguist’s sense of ‘meaning’ includes stipulated meaning, and why stipulated meaning is so common.
In any case, I think you and Wei Dai have helped me think about how to be more clear to more people by adding such clarifications as this.
In those paragraphs, you add intuition as an alternative to stipulated meaning. But this is not what we are talking about; we are talking about some unknown but normative meaning that can’t be presently stipulated, and is referred to partly through intuition in a way that is more accurate than any currently available stipulation. What the intuition tells us is as irrelevant as what the various stipulations tell us; what matters is the thing that the imperfect intuition refers to. This idea doesn’t require a notion of automated stipulation (“empathic” discussion).
“some unknown, but normative meaning that can’t be presently stipulated” is what I meant by “intuitive meaning” in this case.
automated stipulation (“empathic” discussion)
I’ve never thought of ‘empathic’ discussion as ‘automated stipulation’. What do you mean by that?
Even our stipulated definitions are only promissory notes for meaning. Luckily, stipulated definitions can be quite useful for achieving our goals. Figuring out what we ‘really want’, or what we ‘rationally ought to do’ when faced with Newcomb’s problem, would also be useful. Such terms carry even vaguer promissory notes for meaning than stipulated definitions do, and yet they are worth pursuing.
Treat intuition as just another stipulated definition, that happens to be expressed as a pattern of mind activity, as opposed to a sequence of words. The intuition itself doesn’t define the thing it refers to, it can be slightly wrong, or very wrong. The same goes for words. Both intuition and various words we might find are tools for referring to some abstract structure (intended meaning), that is not accurately captured by any of these tools. The purpose of intuition, and of words, is in capturing this structure accurately, accessing its properties. We can develop better understanding by inventing new words, training new intuitions, etc.
None of these tools hold a privileged position with respect to the target structure, some of them just happen to more carefully refer to it. At the beginning of any investigation, we would typically only have intuitions, which specify the problem that needs solving. They are inaccurate fuzzy lumps of confusion, too. At the same time, any early attempt at finding better tools will be unsuccessful, explicit definitions will fail to capture the intended meaning, even as intuition doesn’t capture it precisely. Attempts at guiding intuition to better precision can likewise make it a less accurate tool for accessing the original meaning. On the other hand, when the topic is well-understood, we might find an explicit definition that is much better than the original intuition. We might train new intuitions that reflect the new explicit definition, and are much better tools than the original intuition.
And as far as I can tell, you don’t agree. You express agreement too readily (your stipulated-meaning thought experiments are one example of the problem). But I’d probably need a significantly clearer presentation of what feels wrong to make progress on our disagreement.
Instead, their disagreement lies elsewhere. They still disagree about what action has the highest expected value when an agent is faced with Newcomb’s dilemma.
I agree with Wei. There is no reason to talk about “highest expected value” specifically, that would be merely a less clear option on the same list as CDT and EDT recommendations. We need to find the correct decision instead, expected value or not.
Playing Eliezer-post-ping-pong, you are almost demanding “But what do you mean by truth?”. When an idea is unclear, there will be ways of stipulating a precise but even less accurate definition. Thus, you move away from the truth, even as you increase the clarity of discussion and defensibility of the arguments.
you are almost demanding “But what do you mean by truth?”. When an idea is unclear, there will be ways of stipulating a precise but even less accurate definition. Thus, you move away from the truth, even as you increase the clarity of discussion and defensibility of the arguments.
No, I agree there are important things to investigate for which we don’t have clear definitions. That’s why I keep talking about ‘empathic metaethics.’
Also, by ‘less accurate definition’ do you just mean that a stipulated definition can differ from the intuitive definition that we don’t have access to? Well of course. But why privilege the intuitive definition by saying a stipulated definition is ‘less accurate’ than it is? I suspect that intuitive definitions are often much less successful at capturing an empirical cluster than some stipulated definitions. Example: ‘planet’.
Also, by ‘less accurate definition’ do you just mean that a stipulated definition can differ from the intuitive definition that we don’t have access to?
Not “just”. Not every change is an improvement, but every improvement is a change. There can be better definitions of whatever the intuitions are talking about, and they will differ from the intuitive definitions. But when the purpose of discussion is referred by an unclear intuition with no other easy ways to reach it, stipulating a different definition would normally be a change that is not an improvement.
I suspect that intuitive definitions are often much less successful at capturing an empirical cluster than some stipulated definitions.
It’s not easy to find a more successful definition of the same thing. You can’t always just say “taboo” and pick the best thought that decades of careful research failed to rule out. Sometimes the intuitive definition is still better, or, more to the point, the precise explicit definition still misses the point.
An analogy for “sharing common understanding of morality”. In the sound example, even though the arguers talk about different situations in a confusingly ambiguous way, they share a common understanding of what facts hold in reality. If they were additionally ignorant about reality in different ways (even though there would still be the same truth about reality, they just wouldn’t have reliable access to it), that would bring the situation closer to what we have with morality.
If, prior to reaching this understanding, you ask people to stipulate a definition for ‘sound’ when they use it, they will give you confused answers.
Even by getting such confused answers out in the open, we might get them to break out of complacency and recognize the presence of confusion. (Fat chance, of course.)
The point of pluralistic moral reductionism (austere metaethics) is to resolve lots of confused debates in metaethics that arise from doing metaethics (implicitly or explicitly) in the context of traditional conceptual analysis. It’s clearing away the dust and confusion from such debates so that we can move on to figure out what I think is more important: empathic metaethics.
This makes sense. My impression of the part of the sequence written so far would’ve been significantly affected if I had understood this intention (I don’t fully believe it now, but more so than I did before reading your comment).
I don’t fully believe it now, but more so than I did before reading your comment)
What is ‘it’, here? My intention? If you have doubts that my intention has been (for many months) to first clear away the dust and confusion of mainstream metaethics so that we can focus more clearly on the more important problems of metaethics, you can ask Anna Salamon, because I spoke to her about my intentions for the sequence before I put up the first post in the sequence. I think I spoke to others about my intentions, too, but I can’t remember which parts of my intentions I spoke about to which people (besides Anna). There’s also this comment from me more than a month ago.
I believe that you believe it, but I’m not sure it’s so. There are many reasons for any event. Specifically, you use austere debating in real arguments, which suggests that you place more weight on the method than just as a tool for exposing confusion.
(You seem to have reacted emotionally to a question of simple fact, and thus conflated the fact with your position on the fact, which status intuitions love to make people do. I think it’s a bad practice.)
What do you mean by ‘austere debating’? Do you just mean tabooing terms and then arguing about facts and anticipations? If so then yes, I do that all the time...
I’m not sure if we totally agree, but if there is any disagreement left in this thread, I don’t think it’s substantial enough to keep discussing at this point. I’d rather that we move on to talking about how you propose to do empathic metaethics.
BTW, I’d like to give another example that shows the difficulty of reducing (some usages of) normative language to math/physics.
Suppose I’m facing Newcomb’s problem, and I say to my friend Bob, “I’m confused. What should I do?” Bob happens to be a causal decision theorist, so he says “You should two-box.” It’s clear that Bob can not just mean “the arg max of … is ‘two-box’” (where … is the formula given by CDT), since presumably “you should two-box” is false and “the arg max of … is ‘two-box’” is true. Instead he probably means something like “CDT is the correct decision theory, and the arg max of … is ‘two-box’”, but how do we reduce the first part of this sentence to physics/math?
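(For concreteness, here is a minimal sketch of the kind of arg-max computation being pointed at. The payoff numbers and the 0.99 predictor accuracy are invented for illustration, and this is only one simple way of rendering CDT and EDT, not anyone’s official formalism.)

    # Newcomb's problem, with made-up payoffs and predictor accuracy.
    PAYOFF = {  # (action, box B is full) -> dollars
        ("one-box", True): 1_000_000, ("one-box", False): 0,
        ("two-box", True): 1_001_000, ("two-box", False): 1_000,
    }

    def cdt_value(action, p_full):
        # CDT-style: the chance that box B is full is causally independent of the act.
        return p_full * PAYOFF[(action, True)] + (1 - p_full) * PAYOFF[(action, False)]

    def edt_value(action, accuracy):
        # EDT-style: condition the chance that box B is full on the act itself.
        p_full = accuracy if action == "one-box" else 1 - accuracy
        return p_full * PAYOFF[(action, True)] + (1 - p_full) * PAYOFF[(action, False)]

    actions = ["one-box", "two-box"]
    print(max(actions, key=lambda a: cdt_value(a, p_full=0.5)))     # -> two-box
    print(max(actions, key=lambda a: edt_value(a, accuracy=0.99)))  # -> one-box

Under the CDT-style computation, two-boxing comes out on top for any fixed probability, which is exactly the arg max Bob reports; the dispute is over whether that is the right formula to be taking the arg max of.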
I’m not saying that reducing to physics/math is easy. Even ought language stipulated to refer to, say, the well-being of conscious creatures is pretty hard to reduce. We just don’t have that understanding yet. But it sure seems to be pointing to things that are computed by physics. We just don’t know the details.
I’m just trying to say that if I’m right about reductionism, and somebody uses ought language in a way that isn’t likely to reduce to physics/math, then their ought language isn’t likely to refer successfully.
We can hold off the rest of the dialogue until after another post or two; I appreciate your help so far. As a result of my dialogue with you, Sawin, and Nesov, I’m going to rewrite the is-ought part of ‘Pluralistic Moral Reductionism’ for clarity.
For example I could use a variant of CEV (call it Coherent Extrapolated Pi Estimation) to answer “What is the trillionth digit of pi?” but that doesn’t imply that by “the trillionth digit of pi” I actually mean “the output of CEPE”
(I notice an interesting subtlety here. Even though what I infer from “you should order X” is (1) “according to Bob’s computation, the arg max of … is X”, what Bob means by “you should order X” must be (2) “the arg max of … is X”, because if he means (1), then “you should order X” would be true even if Bob made an error in his computation.)
Do you accept the conclusion I draw from my version of this argument?
But this is certainly not the definition of water! Imagine if Bob used this criterion to evaluate what was and was not water. He would suffer from an infinite regress. The definition of water is something else. The statement “This is water” reduces to a set of facts about this, not a set of facts about this and Bob’s head.
But I’m confused by the rest of your argument, and don’t understand what conclusion you’re trying to draw apart from “CEV can’t be the definition of morality”. For example you say:
Well, why does it have a long definition? It has a long definition because that’s what we believe is important.
I don’t understand why believing something to be important implies that it has a long definition.
If you say “I define should as [Eliezer’s long list of human values]”
then I say: “That’s a long definition. How did you pick that definition?”
and you say: “Well, I took whatever I thought was morally important, and put it into the definition.”
In the part you quote I am arguing that (or at least claiming that) other responses to my query are wrong.
I would then continue:
“Using the long definition is obscuring what you really mean when you say ‘should’. You really mean ‘what’s important’, not [the long list of things I think are important]. So why not just define it as that?”
One more way to describe this idea. I ask, “What is morality?”, and you say, “I don’t know, but I use this brain thing here to figure out facts about it; it errs sometimes, but can provide limited guidance. Why do I believe this “brain” is talking about morality? It says it does, and it doesn’t know of a better tool for that purpose presently available. By the way, it’s reporting that are morally relevant, and is probably right.”
By the way, it’s reporting that are morally relevant, and is probably right.
Where do you get “is probably right” from? I don’t think you can get that if you take an outside view and consider how often a human brain is right when it reports on philosophical matters in a similar state of confusion...
Salt to taste; the specific estimate is irrelevant to my point, so long as the brain is seen as collecting at least some moral information, and not defining the whole of morality. The level of certainty in the brain’s moral judgment won’t be stellar, but it will be more reliable for simpler judgments. Here, I referred to “morally relevant”, which is a rather weak matter-of-priority kind of judgment, as opposed to deciding which of the given options are better.
Maybe this is because I’m fairly confident of physicalism? Of course I’ll change my mind if presented with enough evidence, but I’m not anticipating such a surprise.
You’d need the FAI able to change its mind as well, which requires that you retain this option in its epistemology. To attack the communication issue from a different angle, could you give examples of the kinds of facts you deny? (Don’t say “god” or “magic”, give a concrete example.)
Yes, we need the FAI to be able to change its mind about physicalism.
I don’t think I’ve ever been clear about what people mean to assert when they talk about things that don’t reduce to physics/math.
Rather, people describe something non-natural or supernatural and I think, “Yeah, that just sounds confused.” Specific examples of things I deny because of my physicalism are Moore’s non-natural goods and Chalmers’ conception of consciousness.
I don’t think I’ve ever been clear about what people mean to assert when they talk about things that don’t reduce to physics/math.
Since you can’t actually reduce[*] 99.99% of your vocabulary, you’re either so confused you couldn’t possibly think or communicate... or you’re only confused about the nature of confusion.
[*] Try reducing “shopping” to quarks, electrons, and photons. You can’t do it, and if you could, it would tell you nothing useful. Yet there is nothing involved that is not made of quarks, electrons, and photons.
Is this because you’re not familiar with Moore on non-natural goods and Chalmers on consciousness, or because you agree with me that those ideas are just confused?
They are not precise enough to carefully examine. I can understand the distinction between a crumbling bridge and 3^^^^3>3^^^3; it’s much less clear what kind of thing “Chalmers’ view on consciousness” is. I guess I could say that I don’t see these things as facts at all unless I understand them, and some things are too confusing to expect to understand (my superpower is to remain confused by things I haven’t properly understood!).
(To compare, a lot of trouble with words is incorrectly assuming that they mean the same thing in different contexts, and then trying to answer questions about their meaning. But they might lack a fixed meaning, or any meaning at all. So the first step before trying to figure out whether something is true is understanding what is meant by that something.)
(No new idea is going to be precise, because precise definitions come from established theories, and established theories come from speculative theories, and speculative theories are theories about something that is defined relatively vaguely. The Oxygen theory of combustion was a theory about “how burning works”; it was not, circularly, the Oxygen theory of Oxidisation.)
If you’re talking about something that doesn’t reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about? Fiction? Magic?
That’s making a pre-existing assumption that everyone speaks in physics language. It’s circular.
Speaking in physics language about something that isn’t in the actual physics is fiction. I’m not sure what magic is.
What is physics language? Physics language consists of statements that you can cash out, along with a physical world, to get “true” or “false”.
What is moral language? Moral language consists of statements that you can cash out, along with a preference order on the set of physical worlds, to get “true” or “false”.
ETA: If you don’t accept this, the first step is accepting that the statement “Flibber fladoo.” does not refer to anything in physics, and is not a fiction.
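(If it helps, here is a bare-bones rendering of that distinction as type signatures. This is my own paraphrase for illustration, not a formalism anyone in this thread has endorsed.)

    from typing import Callable, Literal

    World = str                                    # stand-in for a physical world
    PreferenceOrder = Callable[[World, World], Literal["<", "=", ">"]]

    PhysicalStatement = Callable[[World], bool]          # cash out with a world
    MoralStatement = Callable[[PreferenceOrder], bool]   # cash out with an ordering

    # Example "moral statement": worlds with less suffering rank higher.
    def less_suffering_ranks_higher(prefer: PreferenceOrder) -> bool:
        return prefer("world-with-less-suffering", "world-with-more-suffering") == ">"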
That’s making a pre-existing assumption that everyone speaks in physics language
No, of course lots of people use ‘ought’ terms and other terms without any reduction to physics in mind. All I’m saying is that if I’m right about reductionism, those uses of ought language will fail to refer.
What is moral language? Moral language consists of statements that you can cash out, along with a preference order on the set of physical worlds, to get “true” or “false”.
Sure, that’s one way to use moral language. And your preference order is computed by physics.
I think I must be using the term ‘reduction’ in a broader sense than you are. By reduction I just mean the translation of (in this case) normative language to natural language—cashing things out in terms of lower-level natural statements.
But you can’t reduce an arbitrary statement. You can only do so when you have a definition that allows you to reduce it. There are several potential functions from {statements in moral language} to {statements in physical language}. You are proposing that for each meaningful use of moral language, one such function must be correct by definition.
I am saying, no, you can just make statements in moral language which do not correspond to any statements in physical language.
You are proposing that for each meaningful use of moral language, one such function must be correct by definition
Not what I meant to propose. I don’t agree with that.
you can just make statements in moral language which do not correspond to any statements in physical language.
Of course you can. People do it all the time. But if you’re a physicalist (by which I mean to include Tegmarkian radical platonists), then those statements fail to successfully refer. That’s all I’m saying.
I am standing up for the usefulness and well-definedness of statements that fail to successfully refer.
Okay, we’re getting nearer to understanding each other, thanks. :)
Perhaps you could give an example of a non-normative statement that is well-defined and useful even though it fails to refer? Perhaps then I can grok better where you’re coming from.
Elsewhere, you said:
The problem is that the word “ought” has multiple definitions. You are observing that all the other definitions of ought are physically reducible. That puts them on the “is” side. But now there is a gap between hypothetical-ought-statements and categorical-ought-statements, and it’s just the same size as before. You can reduce the word “ought” in the following sentence: “If ‘ought’ means ‘popcorn’, then I am eating ought right now.” It doesn’t help.
Goodness, no. I’m not arguing that all translations of ‘ought’ are equally useful as long as they successfully refer!
But now you’re talking about something different than the is-ought gap. You’re talking about a gap between “hypothetical-ought-statements and categorical-ought-statements.” Could you describe the gap, please? ‘Categorical ought’ in particular leaves me with uncertainty about what you mean, because that term is used in a wide variety of ways by philosophers, many of them incoherent.
I genuinely appreciate you sticking this out with me. I know it’s taking time for us to understand each other, but I expect serious fruit to come of mutual understanding.
Perhaps you could give an example of a non-normative statement that is well-defined and useful even though it fails to refer? Perhaps then I can grok better where you’re coming from.
I don’t think any exist, so I could not do so.
Goodness, no. I’m not arguing that all translations of ‘ought’ are equally useful as long as they successfully refer!
I’m saying that the fact that you can use a word to have a meaning in class X does not provide much evidence that the other uses of that word have a meaning in class X.
Could you describe the gap, please? ‘Categorical ought’ in particular leaves me with uncertainty about what you mean, because that term is used in a wide variety of ways by philosophers, many of them incoherent.
Hypothetical-ought statements are a certain kind of statement about the physical world. They’re the kind that contain the word “ought”, but they’re just an arbitrary subset of the “is”-statements.
Categorical-ought statements are statements of support for a preference order. (not statements about support.)
Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.
But no fact can alone imply anything (in this sense); it’s not a point specific to moral values, and it is in any case a trivial, uninteresting point that is easily confused with a refutation of the statement I noted in the grandparent.
No fact alone can imply anything: true and important. For example, a description of my brain at the neuronal level does not imply that I’m awake. To get the implication, we need to add a definition (or at least some rule) of “awake” in neuronal terms. And this definition will not capture the meaning of “awake.” We could ask, “given that a brain is in such-and-such a neuronal state, is it awake?” and intuition will tell us that it is an open question.
But that is beside the point, if what we want to know is whether the definition succeeds. The definition does not have to capture the meaning of “awake”. It only needs to get the reference correct.
Reduction doesn’t typically involve capturing the meaning of the reduced terms—Is the (meta)ethical case special? If so, why and how?
Reduction doesn’t typically involve capturing the meaning of the reduced terms—Is the (meta)ethical case special? If so, why and how?
Great question. It seems to me that normative ethics involves reducing the term “moral” without necessarily capturing the meaning, whereas metaethics involves capturing the meaning of the term. And the reason we want to capture the meaning is so that we know what it means to do normative ethics correctly (instead of just doing it by intuition, as we do now). It would also allow an AI to perform normative ethics (i.e., reduce “moral”) for us, instead of humans reducing the term and programming a specific normative ethical theory into the AI.
I doubt that metaethics can wholly capture the meaning of ethical terms, but I don’t see that as a problem. It can still shed light on issues of epistemics, ontology, semantics, etc. And if you want help from an AI, any reduction that gets the reference correct will do, regardless of whether meaning is captured. A reduction need not be a full-blown normative ethical theory. It just needs to imply one, when combined with other truths.
I doubt that metaethics can wholly capture the meaning of ethical terms, but I don’t see that as a problem.
This is not a problem in the same sense as astronomical waste that will occur during the rest of this year is not a problem: it’s not possible to do something about it.
Reduction doesn’t typically involve capturing the meaning of the reduced terms—Is the (meta)ethical case special? If so, why and how?
A formal logical definition often won’t capture the full meaning of a mathematical structure (there may be non-standard models of the logical theory, and true statements it won’t infer), yet it has the special power of allowing you to correctly infer lots of facts about that structure without knowing anything else about the intended meaning. If we are given just a little bit less, then the power to infer stuff gets reduced dramatically.
It’s important to get a definition of morality in a similar sense and for similar reasons: it won’t capture the whole thing, yet it must be good enough to generate right actions even in currently unimaginable contexts.
Formal logic does seem very powerful, yet incomplete. Would you be willing to create an AI with such limited understanding of math or morality (assuming we can formalize an understanding of morality on par with math), given that it could well obtain supervisory power over humanity? One might justify it by arguing that it’s better than the alternative of trying to achieve and capture fuller understanding, which would involve further delay and risk. See for example Tim Freeman’s argument in this line, or my own.
Another alternative is to build an upload-based FAI instead, like Stuart Armstrong’s recent proposal. That is, use uploads as components in a larger system, with lots of safety checks. In a way Eliezer’s FAI ideas can also be seen as heavily upload based, since CEV can be interpreted (as you did before) as uploads with safety checks. (So the question I’m asking can be phrased as, instead of just punting normative ethics to CEV, why not punt all of meta-math, decision theory, meta-ethics, etc., to a CEV-like construct?)
Of course you’re probably just as unsure of these issues as I am, but I’m curious what your current thoughts are.
Humans are also incomplete in this sense. We already have no way of capturing the whole problem statement. The goal is to capture it as well as possible using some reflective trick of looking at our own brains or behavior, which is probably way better than what an upload singleton that doesn’t build a FAI is capable of.
If there are uploads, they could be handed the task of solving the problem of FAI in the same sense in which we try to, but this doesn’t get us any closer to the solution. There should probably be a charity dedicated to designing upload-based singletons as a kind of high-impact applied normative ethics effort (and SIAI might want to spawn one, since rational thinking about morality is important for this task; we don’t want fatalistic acceptance of a possible Malthusian dystopia or unchecked moral drift), but this is not the same problem as FAI.
Humans are at least capable of making some philosophical progress, and until we solve meta-philosophy, no de novo AI is. Assuming that we don’t solve meta-philosophy first, any de novo AIs we build will be more incomplete than humans. Do you agree?
If there are uploads, they could be handed the task of solving the problem of FAI in the same sense in which we try to, but this doesn’t get us any closer to the solution.
It gets closer to the solution in the sense that there is no longer a time pressure, since it’s easier for an upload-singleton to ensure their own value stability, and they don’t have to worry about people building uFAIs and other existential risks while they work on FAI. They can afford to try harder to get to the right solution than we can.
It gets closer to the solution in the sense that there is no longer a time pressure, since it’s easier for an upload-singleton to ensure their own value stability, and they don’t have to worry about people building uFAIs and other existential risks while they work on FAI. They can afford to try harder to get to the right solution than we can.
There is a time pressure from existential risk (also, astronomical waste). Just as in FAI vs. AGI race, we would have a race between FAI-building and AGI-building uploads (in the sense of “who runs first”, but also literally while restricted by speed and costs). And fast-running uploads pose other risks as well, for example they could form an unfriendly singleton without even solving AGI, or build runaway nanotech.
(Planning to make sure that we run a prepared upload FAI team before a singleton of any other nature can prevent it is an important contingency, someone should get on that in the coming decades, and better metaethical theory and rationality education can help in that task.)
I should have made myself clearer. What I meant was assuming that an organization interested in building FAI can first achieve an upload-singleton, it won’t be facing competition from other uploads (since that’s what “singleton” means). It will be facing significantly less time pressure than a similar organization trying to build FAI directly. (Delay will still cause astronomical waste due to physical resources falling away into event horizons and the like, but that seems negligible compared to the existential risks that we face now.)
What I meant was assuming that an organization interested in building FAI can first achieve an upload-singleton, it won’t be facing competition from other uploads.
But this assumption is rather unlikely/difficult to implement, so in the situation where we count on it, we’ve already lost a large portion of the future. Also, this course of action (unlikely to succeed as it is in any case) significantly benefits from massive funding to buy computational resources, which is a race. The other alternative, which is educating people in a way that increases the chances of a positive upload-driven outcome, is also a race, for development of better understanding of metaethics/rationality and for educating more people better.
Humans are at least capable of making some philosophical progress, and until we solve meta-philosophy, no de novo AI is.
Philosophical progress is just a special kind of physical action that we can perform, valuable for abstract reasons that feed into what constitutes our values. I don’t see how this feature is fundamentally different from pointing to any other complicated aspect of human values and saying that AI must be able to make that distinction or destroy all value with its mining claws. Of course it must.
Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?
I’m saying that the fact that you can use a word to have a meaning in class X does not provide much evidence that the other uses of that word have a meaning in class X.
Agreed.
Categorical-ought statements are statements of support for a preference order. (not statements about support.)
What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in “I support preference-ordering X”, as opposed to a statement about support as in “preference-ordering X is ‘good’ if ‘good’ is defined as ‘maximizes Y’”?
Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.
What do you mean by ‘preference order’ such that no fact can imply a preference order? I’m thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...
Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?
Because a positive (“is”) statement + a normative (“ought”) statement is enough information to determine an action, and once actions are determined you don’t need further information.
“information” may not be the right word.
What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in “I support preference-ordering X”, as opposed to a statement about support as in “preference-ordering X is ‘good’ if ‘good’ is defined as ‘maximizes Y’”?
I believe “I ought to do X” if and only if I support preference-ordering X.
What do you mean by ‘preference order’ such that no fact can imply a preference order? I’m thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...
I’m thinking of a preference order as just that: a map from the set of {states of the world} x {states of the world} to the set {>, =, <}. The brain state encodes a preference order but it does not constitute a preference order.
I believe “this preference order is correct” if and only if there is an encoding in my brain of this preference order.
Much like how:
I believe “this fact is true” if and only if there is an encoding in my brain of this fact.
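(A toy rendering of that distinction, with an invented ranking over invented world-states: the preference order is the abstract comparison itself, and a “brain state” is anything from which that comparison can be recovered.)

    WORLDS = ["war", "peace", "paperclips"]

    def prefer(a, b):
        # The abstract object: {states of the world} x {states of the world} -> {">", "=", "<"}.
        rank = {"peace": 2, "war": 1, "paperclips": 0}   # made-up ranking
        return ">" if rank[a] > rank[b] else "<" if rank[a] < rank[b] else "="

    # An "encoding" is any physical pattern from which the map above can be
    # recovered; here a lookup table stands in for a brain state.
    encoding = {(a, b): prefer(a, b) for a in WORLDS for b in WORLDS}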
I believe “this fact is true” if and only if there is an encoding in my brain of this fact.
What if it’s encoded outside your brain, in a calculator for example, while your brain only knows that the calculator shows “28” on its display iff the fact is true? Or, say, I know that my computer contains a copy of “Understand” by Ted Chiang, even though I don’t remember its complete text. Finally, some parts of my brain don’t know what other parts of my brain know. The brain doesn’t hold a privileged position with respect to where the data must be encoded in order to be referred to; it can just as easily point elsewhere.
Well if I see the screen then there’s an encoding of “28” in my brain. Not of the reason why 28 is true, but at least that the answer is “28”.
You believe that “the computer contains a copy of Understand”, not “the computer contains a book with the following text: [text of Understand]”.
Obviously, on the level of detail in which the notion of “belief” starts breaking down, the notion of “belief” starts breaking down.
But still, it remains; When we say that I know a fact, the statement of my fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.
Yet you might not know the question. “28” only certifies that the question makes a true statement.
You believe that “the computer contains a copy of Understand”, not “the computer contains a book with the following text: [text of Understand]”.
Exactly. You don’t know [text of Understand], yet you can reason about it, and use it in your designs. You can copy it elsewhere, and you’ll know that it’s the same thing somewhere else, all without having an explicit or any definition of the text, only diverse intuitions describing its various aspects and tools for performing operations on it. You can get an md5 sum of the text, for example, and make a decision depending on its value, and you can rely on the fact that this is an md5 sum of exactly the text of “Understand” and nothing else, even though you don’t know what the text of “Understand” is.
But still, it remains; When we say that I know a fact, the statement of my fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.
This sort of deep wisdom needs to be the enemy (it strikes me often enough). Acts as curiosity-stopper, covering the difficulty in understanding things more accurately. (What’s “just a statement”?)
This sort of deep wisdom needs to be the enemy (it strikes me often enough). Acts as curiosity-stopper, covering the difficulty in understanding things more accurately. (What’s “just a statement”?)
In certain AI designs, this problem is trivial. In humans, this problem is not simple.
The complexities of the human version of this problem do not have relevance to anything in this overarching discussion (that I am aware of).
But still, it remains; When we say that I know a fact, the statement of my fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.
I believe “this preference order is correct” if and only if there is an encoding in my brain of this preference order.
Encodings are relative to interpretations. Something has to decide that a particular fact encodes particular other fact. And brains don’t have a fundamental role here, even if they might contain most of the available moral information, if you know how to get it.
The way in which decisions are judged to be right or wrong based on moral facts and facts about the world, where both are partly inferred with use of empirical observations, doesn’t fundamentally distinguish the moral facts from the facts about the world, so it’s unclear how to draw a natural boundary that excludes non-moral facts without excluding moral facts also.
My ideas work unless it’s impossible to draw the other kind of boundary, including only facts about the world and not moral facts.
It’s the same boundary, just the other side. If you can learn of moral facts by observing things, if your knowledge refers to a joint description of moral and physical facts, state of your brain say as the physical counterpart, and so your understanding of moral facts benefits from better knowledge and further observation of physical facts, you shouldn’t draw this boundary.
There is an asymmetry. We can only make physical observations, not moral observations.
This means that every state of knowledge about moral and physical facts maps to a state of knowledge about just physical facts, and the evolution of the 2nd is determined only by evidence, with no reference to moral facts.
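(A minimal Bayesian sketch of that asymmetry, with invented hypotheses and numbers: if the likelihood of an observation depends only on the physical component of a joint hypothesis, the marginal over physical hypotheses updates exactly as it would if the moral component were dropped.)

    physical = ["rain", "sun"]
    moral = ["stealing-wrong", "stealing-permitted"]

    # Joint prior over (physical, moral) hypotheses; the numbers are arbitrary.
    prior = {(p, m): 0.25 for p in physical for m in moral}

    def likelihood(obs, phys):
        # Sensory evidence depends on the physical hypothesis only.
        return 0.9 if (obs == "wet-ground") == (phys == "rain") else 0.1

    def update(belief, obs):
        posterior = {(p, m): v * likelihood(obs, p) for (p, m), v in belief.items()}
        z = sum(posterior.values())
        return {h: v / z for h, v in posterior.items()}

    post = update(prior, "wet-ground")
    p_rain = sum(v for (p, m), v in post.items() if p == "rain")   # -> 0.9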
We can only make physical observations, not moral observations.
To the extent we haven’t defined what “moral observations” are exactly, so that the possibility isn’t ruled out in a clear sense, I’d say that we can make moral observations, in the same sense in which we can make arithmetical observations by looking at a calculator display or consulting own understanding of mathematical facts maintained by brain.
That is, by deducing mathematical facts from new physical facts.
Not necessarily, you can just use physical equipment without having any understanding of how it operates or what it is, and the only facts you reason about are non-physical (even though you interact with physical facts, without reasoning about them).
Can you deduce physical facts from new moral facts?
Can you deduce physical facts from new moral facts?
Why not?
Because your only sources of new facts are your senses.
You can’t infer new (to you) facts from information you already have? You can’t just be told things? A Martian, being told that premarital sex became less of an issue after the sixties, might be able to deduce the physical fact that contraceptive technology was improved in the sixties.
I guess you could but you couldn’t be a perfect Bayesian.
Generally, when one is told something, one becomes aware of this from one’s senses, and then infers things from the physical fact that one is told.
I’m definitely not saying this right. The larger point I’m trying to make is that it makes sense to consider an agent’s physical beliefs and ignore their moral beliefs. That is a well-defined thing to do.
How can you answer questions about true moral beliefs whilst ignoring moral beliefs?
You keep all the same comprehension of the state of the world, and beliefs about “true morals” remain accessible: they are simply considered to be physical facts about the construction of certain agents.
That’s an answer to the question “how do you deduce moral beliefs from physical facts”, not the question in hand: “how do you deduce moral beliefs from physical beliefs”.
That’s an answer to the question “how do you deduce moral beliefs from physical facts”, not the question in hand: “how do you deduce moral beliefs from physical beliefs”.
Physical beliefs are constructed from physical facts. Just like everything else!
You can predict that (physical) human babies won’t be eaten too often. Or that a calculator will have a physical configuration displaying something that you inferred abstractly.
Also, I should clarify that when I talk about reducing ought statements into physical statements, I’m including logic. On my view, logic is just a feature of the language we use to talk about physical facts.
Logic can be used to talk about non-physical facts. Do you allow referring to logic even where the logic is talking about non-physical facts, or do you only allow referring to the logic that is talking about physical facts? Or maybe you taboo intended interpretation, however non-physical, but still allow the symbolic game itself to be morally relevant?
Alas, I think this is getting us into the problem of universals. :)
With you, too, Vladimir, I suspect our anticipations do not differ, but our language for talking about these subtle things is slightly different, and thus it takes a bit of work for us to understand each other.
By “logic referring to non-physical facts”, do you have in mind something like “20+7=27”?
Things for which you can’t build a trivial analogy out of physical objects, like a pile of 27 rocks (which are not themselves simple, but this is not easy to appreciate in the context of this comparison).
Certainly, one could reduce normative language into purely logical-mathematical facts, if that was how one was using normative language. But I haven’t heard of people doing this. Have you? Would a reduction of ‘ought’ into purely mathematical statements ever connect up again to physics in a possible world? If so, could you give an example—even a silly one?
Since it’s hard to convey tone through text, let me explicitly state that my tone is a genuinely curious and collaboratively truth-seeking one. I suspect you’ve done more and better thinking on metaethics than I have, so I’m trying to gain what contributions from you I can.
Certainly, one could reduce normative language into purely logical-mathematical facts, if that was how one was using normative language.
Why do you talk of “language” so much? Suppose we didn’t have language (and there was only ever a single person), I don’t think the problem changes.
Would a reduction of ‘ought’ into purely mathematical statements ever connect up again to physics in a possible world?
Say, I would like to minimize ((X-2)*(X-2)+3)^^^3, where X is the number I’m going to observe on the screen. This is a pretty self-contained specification, and yet it refers to the world. The “logical” side of this can be regarded as a recipe, a symbolic representation of your goals. It also talks about a number that is too big to fit into the physical world.
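(A small aside on why such a specification is usable even though the value is too big to represent: since x^^^3 is increasing in x for x >= 3, minimizing ((X-2)*(X-2)+3)^^^3 is the same as minimizing the inner expression, so an agent never has to evaluate the tower. The candidate range below is invented.)

    def inner(x):
        # Minimizing ((x-2)*(x-2)+3)^^^3 reduces to minimizing this, by monotonicity.
        return (x - 2) * (x - 2) + 3

    best = min(range(-10, 11), key=inner)   # -> 2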
Say, I would like to minimize ((X-2)*(X-2)+3)^^^3, where X is the number I’m going to observe on the screen. This is a pretty self-contained specification, and yet it refers to the world. The “logical” side of this can be regarded as a recipe, a symbolic representation of your goals. It also talks about a number that is too big to fit into the physical world.
With you, too, Vladimir, I suspect our anticipations do not differ, but our language for talking about these subtle things is slightly different, and thus it takes a bit of work for us to understand each other.
This would require that we both have positions that accurately reflect reality, or are somehow synchronously deluded. This is a confusing territory, I know that I don’t know enough to be anywhere confident in my position, and even that position is too vague to be worth systematically communicating, or to describe some important phenomena (I’m working on that). I appreciate the difficulty of communication, but I don’t believe that we would magically meet at the end without having to change our ideas in nontrivial ways.
I just mean that our anticipations do not differ in a very local sense. As an example, imagine that we were using ‘sound’ in different ways like Albert and Barry. Surely Albert and Barry have different anticipations in many ways, but not with respect to the specific events closely related to the tree falling in a forest when nobody is around.
If you’re talking about something that doesn’t reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about? Fiction? Magic?
I don’t know, but I don’t think it’s a good idea to assume that only things that are reducible to physics and/or math are worth talking about. I mean it’s a good working assumption to guide your search for possible meanings of “should”, but why declare that you’re not “interested” in anything else? Couldn’t you make that decision on a case by case basis, just in case there is a meaning of “should” that talks about something else besides physics and/or math and its interestingness will be apparent once you see it?
Maybe I should have waited until you finish your sequence after all, because I don’t know what “doing empathic metaethics” actually entails at this point. How are you proposing to “fix my question”? It’s not as if there is a design spec buried somewhere in my brain, and you can check my actual code against the design spec to see where the bug is… Do you want to pick up this conversation after you explain it in more detail?
Maybe this is because I’m fairly confident of physicalism? Of course I’ll change my mind if presented with enough evidence, but I’m not anticipating such a surprise.
‘Interest’ wasn’t the best word for me to use. I’ll have to fix that. All I was trying to say is that if somebody uses ‘ought’ to refer to something that isn’t physical or logical, then this punts the discussion back to a debate over physicalism, which isn’t the topic of my already-too-long ‘Pluralistic Moral Reductionism’ post.
Surely, many people use ‘ought’ to refer to things non-reducible to physics or logic, and they may even be interesting (as in fiction), but in the search for true statements that use ‘ought’ language they are not ‘interesting’, unless physicalism is false (which is then a different discussion).
Does that make sense? I’ll explain empathic metaethics in more detail later, but I hope we can get some clarity on this part right now.
First I would call myself a radical platonist instead of a physicalist. (If all universes that exist mathematically also exist physically, perhaps it could be said that there is no difference between platonism and physicalism, but I think most people who call themselves physicalists would deny that premise.) So I think it’s likely that everything “interesting” can be reduced to math, but given the history of philosophy I don’t think I should be very confident in that. See my recent How To Be More Confident… That You’re Wrong.
Right, I’m pretty partial to Tegmark, too. So what I call physicalism is compatible with Tegmark. But could you perhaps give an example of what it would mean to reduce normative language to a logical-mathematical function—even a silly one?
(It’s late and I’m thinking up this example on the spot, so let me know if it doesn’t make sense.)
Suppose I’m in a restaurant and I say to my dinner companion Bob, “I’m too tired to think tonight. You know me pretty well. What do you think I should order?” From the answer I get, I can infer (when I’m not so tired) a set of joint constraints on what Bob believes to be my preferences, what decision theory he applied on my behalf, and the outcome of his (possibly subconscious) computation. If there is little uncertainty about my preferences and the decision theory involved, then the information conveyed by “you should order X” in this context just reduces to a mathematical statement about (for example) what the arg max of a set of weighted averages is.
(I notice an interesting subtlety here. Even though what I infer from “you should order X” is (1) “according to Bob’s computation, the arg max of … is X”, what Bob means by “you should order X” must be (2) “the arg max of … is X”, because if he means (1), then “you should order X” would be true even if Bob made an error in his computation.)
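To make the reduction concrete, here is a minimal sketch of the arg-max reading of Bob’s “you should order X” (the dishes, attributes, weights, and scores are all invented for illustration; nothing here is meant as a serious model of preference):

```python
# Toy reduction of "you should order X" to a mathematical (arg max) claim.
# Bob's model of my preferences: a weighted average of dish attributes.
# The menu, attributes, weights, and scores are all made up for illustration.

menu = {
    "ramen": {"taste": 0.8, "price": 0.6, "lightness": 0.3},
    "salad": {"taste": 0.5, "price": 0.9, "lightness": 0.9},
    "steak": {"taste": 0.9, "price": 0.2, "lightness": 0.1},
}

weights = {"taste": 0.5, "price": 0.2, "lightness": 0.3}  # Bob's guess at what I care about

def score(dish):
    """Weighted average of a dish's attributes under the assumed weights."""
    return sum(weights[attr] * value for attr, value in menu[dish].items())

recommendation = max(menu, key=score)  # the arg max over the menu
print(f"You should order {recommendation}.")
```

On this reading, “you should order X” is true just in case X really is the arg max, whether or not Bob computed it correctly, which is the subtlety noted in the parenthetical above.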
Yeah, that’s definitely compatible with what I’m talking about when I talk about reducing normative language to naturalistic language (that is, to math/logic + physics).
Do you think any disagreements or confusion remains in this thread?
Having thought more about these matters over the last couple of weeks, I’ve come to realize that my analysis in the grandparent comment is not very good, and also that I’m confused about the relationship between semantics (i.e., study of meaning) and reductionism.
First, I learned that it’s important (and I failed) to distinguish between (A) the meaning of a sentence (in some context), (B) the set of inferences that can be drawn from it, and (C) what information the speaker intends to convey.
For example, suppose Alice says to Bob, “It’s raining outside. You should wear your rainboots.” The information that Alice really wants to convey by “it’s raining outside” is that there are puddles on the ground. That, along with for example “it’s probably not sunny” and “I will get wet if I don’t use an umbrella”, belongs to the set of inferences that can be drawn from the sentence. But clearly the meaning of “it’s raining outside” is distinct from either of these. Similarly, the fact that Bob can infer that there are puddles on the ground from “you should wear your rainboots” does not show that “you should wear your rainboots” means “there are puddles on the ground”.
Nor does it seem to make sense to say that “you should wear your rainboots” reduces to “there are puddles on the ground” (why should it, when clearly “it’s raining outside” doesn’t reduce that way?), which, by analogy, calls into question my claim in the grandparent comment that “you should order X” reduces to “the arg max of … is X”.
But I’m confused about what reductionism even means in the context of semantics. The Eliezer post that you linked to from Pluralistic Moral Reductionism defined “reductionism” as:
But that appears to be a position about ontology, and it is not clear to me what implications it has for semantics, especially for the semantics of normative language. (I know you posted a reading list for reductionism, which I have not gone through except to skim the encyclopedia entry. Please let me know if the answer will be apparent once I do read them, or if there is a more specific reference you can point me to that will answer this immediate question.)
Excellent. We should totally be clarifying such things.
There are many things we might intend to communicate when we talk about the ‘meaning’ of a word or phrase or sentence. Let’s consider some possible concepts of ‘the meaning of a sentence’, in the context of declarative sentences only:
(1) The ‘meaning of a sentence’ is what the speaker intended to assert, that assertion being captured by truth conditions the speaker would endorse when asked for them.
(2) The ‘meaning of a sentence’ is what the sentence asserts if the assertion is captured by truth conditions that are fixed by the sentence’s syntax and the first definition of each word that is provided by the Oxford English Dictionary.
(3) The ‘meaning of a sentence’ is what the speaker intended to assert, that assertion being captured by truth conditions determined by a full analysis of the cognitive algorithms that produced the sentence (which are not accessible to the speaker).
There are several other possibilities, even just for declarative sentences.
I tried to make it clear that when doing austere metaethics, I was taking #1 to be the meaning of a declarative moral judgment (e.g. “Murder is wrong!”), at least when the speaker of such sentences intended them to be declarative (rather than intending them to be, say, merely emotive or in other ways ‘non-cognitive’).
The advantage of this is that we can actually answer (to some degree, in many cases) the question of what a moral judgment ‘means’ (in the austere metaethics sense), and thus evaluate whether it is true or untrue. After some questioning of the speaker, we might determine that meaning~1 of “Murder is wrong” in a particular case is actually “Murder is forbidden by Yahweh”, in which case we can evaluate the speaker’s sentence as untrue given its truth conditions (given its meaning~1).
But we may very well want to know instead what is ‘right’ or ‘wrong’ or ‘good’ or ‘bad’ when evaluating sentences that use those words according to the third sense of ‘the meaning of a sentence’ listed above. Though my third sense of meaning above is left a bit vague for now, that’s roughly what I’ll be doing when I start talking about empathic metaethics.
Will Sawin has been talking about the ‘meaning’ of ‘ought’ sentences in a fourth sense of the word ‘meaning’ that is related to but not identical to meaning~3 I gave above. I might interpret Will as saying that:
I am not going to do a thousand years of conceptual analysis on the English word-tool ‘meaning.’ I’m not going to survey which definition of ‘meaning’ is consistent with the greatest number of our intuitions about its meaning given a certain set of hypothetical scenarios in which we might use the term. Instead, I’m going to taboo ‘meaning’ so that I can use the word along with others to transfer ideas from my head into the heads of others, and take ideas from their heads into mine. If there’s an objection to this, I’ll be tempted to invent a new word-tool that I can use in the circumstances where I currently want to use the word-tool ‘meaning’ to transfer ideas between brains.
In discussing austere metaethics, I’m considering the ‘meaning’ of declarative moral judgment sentences as meaning~1. In discussing empathic metaethics, I’m considering the ‘meaning’ of declarative moral judgment sentences as (something like) meaning~3. I’m also happy to have additional discussions about ‘ought’ when considering the meaning of ‘ought’ as meaning~4, though the empirical assumptions underlying meaning~4 might turn out to be false. We could discuss ‘meaning’ as meaning~2, too, but I’m personally not that interested to do so.
Before I talk about reductionism, does this comment about meaning make sense?
As I indicated in a recent comment, I don’t really see the point of austere metaethics. Meaning~1 just doesn’t seem that interesting, given that meaning~1 is not likely to be closely related to actual meaning, as in your example when someone thinks that by “Murder is wrong” they are asserting “Murder is forbidden by Yahweh”.
Empathic metaethics is much more interesting, of course, but I do not understand why you seem to assume that if we delve into the cognitive algorithms that produce a sentence like “murder is wrong” we will be able to obtain a list of truth conditions. For example if I examine the algorithms behind an Eliza bot that sometimes says “murder is wrong” I’m certainly not going to obtain a list of truth conditions. It seems clear that information/beliefs about math and physics definitely influence the production of normative sentences in humans, but it’s much less clear that those sentences can be said to assert facts about math and physics.
Can you show me an example of such idea transfer? (Depending on what ideas you want to transfer, perhaps you do not need to “fully” solve metaethics, in which case our interests might diverge at some point.)
This is probably a good idea. (Nesov previously made a general suggestion along those lines.)
What do you mean by ‘actual meaning’?
The point of pluralistic moral reductionism (austere metaethics) is to resolve lots of confused debates in metaethics that arise from doing metaethics (implicitly or explicitly) in the context of traditional conceptual analysis. It’s clearing away the dust and confusion from such debates so that we can move on to figure out what I think is more important: empathic metaethics.
I don’t assume this. Whether this can be done is an open research question.
My entire post ‘Pluralistic Moral Reductionism’ is an example of such idea transfer. First I specified that one way we can talk about morality is to stipulate what we mean by terms like ‘morally good’, so as to resolve debates about morality in the same way that we resolve a hypothetical debate about ‘sound’ by stipulating our definitions of ‘sound.’ Then I worked through the implications of that approach to metaethics, and suggested toward the end that it wasn’t the only approach to metaethics, and that we’ll explore empathic metaethics in a later post.
I don’t know how to explain “actual meaning”, but it seems intuitively obvious to me that the actual meaning of “murder is wrong” is not “murder is forbidden by Yahweh”, even if the speaker of the sentence believes that murder is wrong because murder is forbidden by Yahweh. Do you disagree with this?
But the way we actually resolved the debate about ‘sound’ is by reaching the understanding that there are two distinct concepts (acoustic vibrations and auditory experience) that are related in a certain way and also happen to share the same signifier. If, prior to reaching this understanding, you ask people to stipulate a definition for ‘sound’ when they use it, they will give you confused answers. I think saying “let’s resolve confusions in metaethics by asking people to stipulate definitions for ‘morally good’”, before we reach a similar level of understanding regarding morality, is to likewise put the cart before the horse.
That doesn’t seem intuitively obvious to me, which illustrates one reason why I prefer to taboo terms rather than bash my intuitions against the intuitions of others in an endless game of intuitionist conceptual analysis. :)
Perhaps the most common ‘foundational’ family of theories of meaning in linguistics and philosophy of language belong to the mentalist program, according to which semantic content is determined by the mental contents of the speaker, not by an abstract analysis of symbol forms taken out of context from their speaker. One straightforward application of a mentalist approach to meaning would conclude that if the speaker was assuming (or mentally representing) a judgment of moral wrongness in the sense of forbidden-by-God, then the meaning of the speaker’s sentence refers in part to the demands of an imagined deity.
But “reaching this understanding” with regard to morality was precisely the goal of ‘Conceptual Analysis and Moral Theory’ and ‘Pluralistic Moral Reductionism.’ I repeatedly made the point that people regularly use a narrow family of signifiers (‘morally good’, ‘morally right’, etc.) to call out a wide range of distinct concepts (divine attitudes, consequentialist predictions, deontological judgments, etc.), and that this leads to exactly the kind of confusion encountered by two people who are both using the signifier ‘sound’ to call upon two distinct concepts (acoustic vibrations and auditory experience).
With regard to “sound”, the two concepts are complementary, and people can easily agree that “sound” sometimes refers to one or the other or often both of these concepts. The same is not true in the “morality” case. The concepts you list seem mutually exclusive, and most people have a strong intuition that “morality” can correctly refer to at most one of them. For example a consequentialist will argue that a deontologist is wrong when he asserts that “morality” means “adhering to rules X, Y, Z”. Similarly a divine command theorist will not answer “well, that’s true” if an egoist says “murdering Bob (in a way that serves my interests) is right, and I stipulate ‘right’ to mean ‘serving my interests’”.
It appears to me that the confusion here is not being caused mainly by linguistic ambiguity, i.e., people using the same word to refer to different things, which can be easily cleared up once pointed out. I see the situation as being closer to the following: in many cases, people are using “morality” to refer to the same concept, and are disagreeing over the nature of that concept. Some people think it’s equivalent to or closely related to the concept of divine attitudes, and others think it has more to do with the well-being of conscious creatures, etc.
When many people agree that murder is wrong but disagree on the reasons why, you can argue that they’re referring to the same concept of morality but confused about its nature. But what about less clear-cut statements, like “women should be able to vote”? Many people in the past would’ve disagreed with that. Would you say they’re referring to a different concept of morality?
I’m not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties. Do you know how to elucidate that?
I tried to explain some of the cause of persistent moral debate (as opposed to, e.g., the debate over ‘sound’) in this way:
Let me try an analogy. Consider someone who believes in the phlogiston theory of fire, and another person who believes in the oxidation theory. They are having a substantive disagreement about the nature of fire, and not merely causing unnecessary confusion by using the same word “fire” to refer to different things. And if the phlogiston theorist were to say “by ‘fire’ I mean the release of phlogiston” then that would just be wrong, and would be adding to the confusion instead of helping to resolve it.
I think the situation with “morality” is closer to this than to the “sound” example.
(ETA: I could also try to define “same concept” more directly, for example as occupying roughly the same position in the graph of relationships between one’s concepts, or playing approximately the same role in one’s cognitive algorithms, but I’d rather not take an exact position on what “same concept” means if I can avoid it, since I have mostly just an intuitive understanding of it.)
This is the exact debate currently being hashed out by Richard Joyce and Stephen Finlay (whom I interviewed here). A while back I wrote an article that can serve as a good entry point into the debate, here. A response from Joyce is here and here. Finlay replies again here.
I tend to side with Finlay, though I suspect not for all the same reasons. Recently, Joyce has admitted that both languages can work, but he’ll (personally) talk the language of error theory rather than the language of moral naturalism.
I’m having trouble understanding how the debate between Joyce and Finlay, over Error Theory, is the same as ours. (Did you perhaps reply to the wrong comment?)
Sorry, let me make it clearer...
The core of their debate concerns whether certain features are ‘essential’ to the concept of morality, and thus concerns whether people share the same concept of morality, and what it would mean to say that people share the concept of morality, and what the implications of that are. Phlogiston is even one of the primary examples used throughout the debate. (Also, witches!)
I’m still not getting it. From what I can tell, both Joyce and Finlay implicitly assume that most people are referring to the same concept by “morality”. They do use phlogiston as an example, but seemingly in a very different way from me, to illustrate different points. Also, two of the papers you link to by Joyce don’t cite Finlay at all and I think may not even be part of the debate. Actually the last paper you link to by Joyce (which doesn’t cite Finlay) does seem relevant to our discussion. For example this paragraph:
I will read that paper over more carefully, and in the meantime, please let me know if you still think the other papers are also relevant, and point to specific passages if yes.
This article by Joyce doesn’t cite Finlay, but its central topic is ‘concessive strategies’ for responding to Mackie, and Finlay is a leading figure in concessive strategies for responding to Mackie. Joyce also doesn’t cite Finlay here, but it discusses how two people who accept that Mackie’s suspect properties fail to refer might nevertheless speak two different languages about whether moral properties exist (as Joyce and Finlay do).
One way of expressing the central debate between them is to say that they are arguing over whether certain features (like moral ‘absolutism’ or ‘objectivity’) are ‘essential’ to moral concepts. (Without the assumption of absolutism, is X a ‘moral’ concept?) Another way to say that is to say that they are arguing over the boundaries of moral concepts; whether people can be said to share the ‘same’ concept of morality but disagree on some of its features, or whether this disagreement means they have ‘different’ concepts of morality.
But really, I’m just trying to get clear on what you might mean by saying that people have the ‘same’ concept of morality while disagreeing on fundamental features, and what you think the implications are. I’m sorry my pointers to the literature weren’t too helpful.
Unfortunately I’m not sure how to explain it better than I already did. But I did notice that Richard Chappell made a similar point (while criticizing Eliezer):
Does his version make any more sense?
Chappell’s discussion makes more and more sense to me lately. Many previously central reasons for disagreement turn out to be my misunderstanding, but I haven’t re-read enough to form a new opinion yet.
Sure, except he doesn’t make any arguments for his position. He just says:
I don’t think normative debates are always “merely verbal”. I just think they are very often ‘merely verbal’, and that there are multiple concepts of normativity in use. Chappell and I, for example, seem to have different intuitions (see comments) about what normativity amounts to.
Let’s say a deontologist and a consequentialist are on the board of SIAI, and they are debating which kind of seed AI the Institute should build.
D: We should build a deontic AI.
C: We should build a consequentialist AI.
Surely their disagreement is substantive. But if by “we should do X”, the deontologist just means “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z” and the consequentialist just means “X maximizes expected utility under utility function Y according to decision theory Z”, then they are talking past each other and their disagreement is “merely verbal”. Yet these are the kinds of meanings you seem to think their normative language does have. Don’t you think there’s something wrong about that?
(ETA: To any bystanders still following this argument, I feel like I’m starting to repeat myself without making much progress in resolving this disagreement. Any suggestion what to do?)
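To see why the two stipulated readings fail to contradict each other, here is a toy sketch (the predicates are invented stand-ins, not serious formalizations of deontic logic or of expected-utility theory): under the stipulations, D and C can both be true at once, so merely asserting them does not yet constitute a disagreement.

```python
# Two stipulated readings of "we should build an AI of kind k".
# Both predicates are invented stand-ins for the results of the
# deontic-logic derivation and the expected-utility calculation.

def obligatory_given_y_and_z(kind):
    """Deontologist's stipulated reading: obligatory (by deontic logic) given imperatives Y and Z."""
    return kind == "deontic"          # assumed outcome of the derivation

def maximizes_eu_under_y_and_z(kind):
    """Consequentialist's stipulated reading: maximizes expected utility under utility Y and theory Z."""
    return kind == "consequentialist" # assumed outcome of the calculation

d_claim = obligatory_given_y_and_z("deontic")              # "we should build a deontic AI"
c_claim = maximizes_eu_under_y_and_z("consequentialist")   # "we should build a consequentialist AI"

print(d_claim and c_claim)  # True: both stipulated claims hold together, so no contradiction
```

Whatever the two board members are really fighting about, it is not the truth values of these two stipulated propositions.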
I completely agree with what you are saying. Disagreement requires shared meaning. Cons. and Deont. are rival theories, not alternative meanings.
Good question. There’s a lot of momentum behind the “meaning theory”.
If the deontologist and the consequentialist have previously stipulated different definitions for ‘should’ as used in sentences D and C, then they aren’t necessarily disagreeing with each other by having one state D and the other state C.
But perhaps we aren’t considering propositions D and C using meaning_stipulated. Perhaps we decide to consider propositions D and C using meaning-cognitive-algorithm. And perhaps a completed cognitive neuroscience would show us that they both mean the same thing by ‘should’ in the meaning-cognitive-algorithm sense. And in that case they would be having a substantive disagreement, when using meaning-cognitive-algorithm to determine the truth conditions of D and C.
Thus:
meaning-stipulated of D is X, meaning-stipulated of C is Y, but X and Y need not be mutually exclusive.
meaning-cognitive-algorithm of D is A, meaning-cognitive-algorithm of C is B, and in my story above A and B are mutually exclusive.
Since people have different ideas about what ‘meaning’ is, I’m skipping past that worry by tabooing ‘meaning.’
[Damn I wish LW would let me use underscores or subscripts instead of hyphens!]
You_can_do_that: just use a backslash ‘\’ to escape the underscores (‘\_’), although people quoting your text would need to repeat the trick.
Thanks!
Suppose the deontologist and the consequentialist have previously stipulated different definitions for ‘should’ as used in sentences D and C, but if you ask them they also say that they are disagreeing with each other in a substantive way. They must be wrong about either what their sentences mean, or about whether their disagreement is substantive, right? (*) I think it’s more likely that they’re wrong about what their sentences mean, because meanings of normative sentences are confusing and lack of substantive disagreement in this particular scenario seems very unlikely.
(*) If we replace “mean” in this sentence by “mean_stipulated”, then it no longer makes sense, since clearly it’s possible that their sentences mean_stipulated D and C, and that their disagreement is substantive. Actually now that I think about it, I’m not sure that “mean” can ever be correctly taboo’ed into “mean_stipulated”. For example, suppose Bob says “By ‘sound’ I mean acoustic waves. Sorry, I misspoke, actually by ‘sound’ I mean auditory experiences. [some time later] To recall, by ‘sound’ I mean auditory experiences.” The first “mean” does not mean “mean_stipulated” since Bob hadn’t stipulated any meanings yet when he said that. The second “mean” does not mean “mean_stipulated” since otherwise that sentence would just be stating a plain falsehood. The third “mean” must mean the same thing as the second “mean”, so it’s also not “mean_stipulated”.
To continue along this line, suppose Alice inserts after the first sentence, “Bob, that sounds wrong. I think by ‘sound’ you mean auditory experiences.” Obviously not “mean_stipulated” here. Alternatively, suppose Bob only says the first sentence, and nobody bothers to correct him because they’ve all heard the lecture several times and know that Bob means auditory experiences by ‘sound’, and think that everyone else knows. Except that Carol is new and doesn’t know, and writes “In this lecture, ‘sound’ means acoustic waves.” in her notebook. Later on Alice tells Carol what everyone else knows, and Carol corrects the sentence. If “mean” means “mean_stipulated” in that sentence, then it would be true and there would be no need to correct it.
Taboo seems to be a tool that needs to be wielded very carefully, and wanting to “skip past that worry” is probably not the right frame of mind for wielding it. One can easily taboo a word in a wrong way, and end up adding to confusion, for example by giving the appearance that there is no disagreement when there actually is.
It seems a desperate move to say that stipulative meaning just isn’t a kind of meaning wielded by humans. I use it all the time, it’s used in law, it’s used in other fields, it’s taught in textbooks… If you think stipulative meaning just isn’t a legitimate kind of meaning commonly used by humans, I don’t know what to say.
I agree, but ‘tabooing’ ‘meaning’ to mean (in some cases) ‘stipulated meaning’ shouldn’t be objectionable because, as I said above, it’s a very commonly used kind of ‘meaning.’ We can also taboo ‘meaning’ to refer to other types of meaning.
And like I said, there often is substantive disagreement. I was just trying to say that sometimes there isn’t substantive disagreement, and we can figure out whether or not we’re having a substantive disagreement by playing a little Taboo (and by checking anticipations). This is precisely the kind of use for which playing Taboo was originally proposed:
To come back to this point, what if we can’t translate a disagreement into disagreement over anticipations (which is the case in many debates over rationality and morality), nor do the participants know how to correctly Taboo (i.e., they don’t know how to capture the meanings of certain key words), but there still seems to be substantive disagreement or the participants themselves claim they do have a substantive disagreement?
Earlier, in another context, I suggested that we extend Eliezer’s “make beliefs pay rent in anticipated experiences” into “make beliefs pay rent in decision making”. Perhaps we can apply that here as well, and say that a substantive disagreement is one that implies a difference in what to do, in at least one possible circumstance. What do you think?
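As a rough sketch of that proposal (the circumstances, policies, and action labels below are hypothetical), a disagreement would count as substantive exactly when the two positions recommend different actions in at least one possible circumstance:

```python
# "Make beliefs pay rent in decision making": treat a disagreement as
# substantive iff the two positions recommend different actions in at
# least one possible circumstance. Policies and circumstances are toy examples.

def substantively_disagree(policy_a, policy_b, circumstances):
    return any(policy_a(c) != policy_b(c) for c in circumstances)

two_boxer = lambda c: "two-box" if c == "newcomb" else "take the money"
one_boxer = lambda c: "one-box" if c == "newcomb" else "take the money"

print(substantively_disagree(two_boxer, one_boxer, ["newcomb", "ordinary choice"]))  # True
```

Two positions that agree on the recommended action in every circumstance either side can describe would not count as substantively disagreeing under this test.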
But I missed your point in the previous response. The idea of disagreement about decisions in the same sense as usual disagreement about anticipation caused by errors/uncertainty is interesting. This is not bargaining about the outcome, for the object under consideration is the agents’ belief, not the fact the belief is about. The agents could work toward a correct belief about a fact even in the absence of reliable access to the fact itself, reaching agreement.
It seems that “what to do” has to refer to properties of a fixed fact, so disagreement is bargaining over what actually gets determined, and so probably doesn’t even involve different anticipations.
Wei Dai & Vladimir Nesov,
Both your suggestions sound plausible. I’ll have to think about it more when I have time to work more on this problem, probably in the context of a planned LW post on Chalmers’s ‘Verbal Disputes’ paper. Right now I have to get back to some other projects.
Also perhaps of interest is Schroeder’s paper, A Recipe for Concept Similarity.
But that assumes that two sides of the disagreement are both Taboo’ing correctly. How can you tell? (You do agree that Taboo is hard and people can easily get it wrong, yes?)
ETA: Do you want to try to hash this out via online chat? I added you to my Google Chat contacts a few days ago, but it’s still showing “awaiting authorization”.
Not sure what ‘correctly’ means, here. I’d feel safer saying they were both Tabooing ‘acceptably’. In the above example, Albert and Barry were both Tabooing ‘acceptably.’ It would have been strange and unhelpful if one of them had Tabooed ‘sound’ to mean ‘rodents on the moon’. But Tabooing ‘sound’ to talk about auditory experiences or acoustic vibrations is fine, because those are two commonly used meanings for ‘sound’. Likewise, ‘stipulated meaning’ and ‘intuitive meaning’ and a few other things are commonly used meanings of ‘meaning.’
If you’re saying that there’s “only one correct meaning for ‘meaning’” or “only one correct meaning for ‘ought’”, then I’m not sure what to make of that, since humans employ the word-tool ‘meaning’ and the word-tool ‘ought’ in a variety of ways. If whatever you’re saying predicts otherwise, then what you’re saying is empirically incorrect. But that’s so obvious that I keep assuming you must be saying something else.
Also relevant:
Another point. Switching back to a particular ‘conventional’ meaning that doesn’t match the stipulative meaning you just gave a word is one of the ways words can be wrong (#4).
And frankly, I’m worried that we are falling prey to the 14th way words can be wrong:
And, the 17th way words can be wrong:
Now, I suspect you may be trying to say that I’m committing mistake #20:
But I’ve pointed out that, for example, stipulative meaning is a very common usage of ‘meaning’...
Could you please take a look at this example, and tell me whether you think they are Tabooing “acceptably”?
That’s a great example. I’ll reproduce it here for readability of this thread:
I’d rather not talk about ‘wrong’; that makes things messier. But let me offer a few comments on what happened:
If this conversation occurred at a decision theory meetup known to have an even mix of CDTers and EDTers, then it was perhaps inefficient (for communication) for either of them to use ‘rational’ to mean either CDT-rational or EDT-rational. That strategy was only going to cause confusion until Tabooing occurred.
If this conversation occurred at a decision theory meetup for CDTers, then person A might be forgiven for assuming the other person would think of ‘rational’ in terms of ‘CDT-rational’. But then person A used Tabooing to discover that an EDTer had snuck into the party, and they don’t disagree about the solutions to Newcomb’s problem recommended by EDT and CDT.
In either case, once they’ve had the conversation quoted above, they are correct that they don’t disagree about the solutions to Newcomb’s problem recommended by EDT and CDT. Instead, their disagreement lies elsewhere. They still disagree about what action has the highest expected value when an agent is faced with Newcomb’s dilemma. Now that they’ve cleared up their momentary confusion about ‘rational’, they can move on to discuss the point at which they really do disagree. Tabooing for the win.
An action does not naturally “have” an expected value, it is assigned an expected value by a combination of decision theory, prior, and utility function, so we can’t describe their disagreement as “about what action has the highest expected value”. It seems that we can only describe their disagreement as about “what is rational” or “what is the correct decision theory” because we don’t know how to Taboo “rational” or “correct” in a way that preserves the substantive nature of their disagreement. (BTW, I guess we could define “have” to mean “assigned by the correct decision theory/prior/utility function” but that doesn’t help.)
But how do they (or you) know that they actually do disagree? According to their Taboo transcript, they do not disagree. It seems that there must be an alternative way to detect substantive disagreement, other than by asking people to Taboo?
ETA:
If people actually disagree, but through the process of Tabooing conclude that they do not disagree (like in the above example), that should count as a loss for Tabooing, right? In the case of “morality”, why do you trust the process of Tabooing so much that you do not give this possibility much credence?
Fair enough. Let me try again: “They still disagree about what action is most likely to fulfill the agent’s desires when the agent is faced with Newcomb’s dilemma.” Or something like that.
According to their Taboo transcript, they don’t disagree over the solutions of Newcomb’s problem recommended by EDT and CDT. But they might still disagree about whether EDT or CDT is most likely to fulfill the agent’s desires when faced with Newcomb’s problem.
Yes. Ask about anticipations.
That didn’t happen in this example. They do not, in fact, disagree over the solutions to Newcomb’s problem recommended by EDT and CDT. If they disagree, it’s about something else, like who is the tallest living person on Earth or which action is most likely to fulfill an agent’s desires when faced with Newcomb’s dilemma.
Of course Tabooing can go wrong, but it’s a useful tool. So is testing for differences of anticipation, though that can also go wrong.
No, I think it’s quite plausible that Tabooing can be done wrong when talking about morality. In fact, it may be more likely to go wrong there than anywhere else. But it’s also better to Taboo than to simply not use such a test for surface-level confusion. It’s also another option to not Taboo and instead propose that we try to decode the cognitive algorithms involved in order to get a clearer picture of our intuitive notion of moral terms than we can get using introspection and intuition.
This introduces even more assumptions into the picture. Why is fulfillment of desires, or specifically the agent’s desires, relevant? Why is “most likely” in there? You are trying to make things precise at the expense of accuracy; that’s the big taboo failure mode, increasingly obscure lost purposes.
I’m just providing an example. It’s not my story. I invite you or Wei Dai to say what it is the two speakers disagree about even after they agree about the conclusions of CDT and EDT for Newcomb’s problem. If all you can say is that they disagree about what they ‘should’ do, or what it would be ‘rational’ to do, then we’ll have to talk about things at that level of understanding, but that will be tricky.
What other levels of understanding do we have? The question needs to be addressed on its own terms. Very tricky. There are ways of making this better, platonism extended to everything seems to help a lot, for example. Toy models of epistemic and decision-theoretic primitives also clarify things, training intuition.
We’re making progress on what it means for brains to value things, for example. Or we can talk in an ends-relational sense, and specify ends. Or we can keep things even more vague but then we can’t say much at all about ‘ought’ or ‘rational’.
The problem is that it doesn’t look any better than figuring out what CDT or EDT recommend. What the brain recommends is not automatically relevant to the question of what should be done.
If by ‘should’ in this sense you mean the ‘intended’ meaning of ‘should’ that we don’t have access to, then I agree.
Note: Wei Dai and I chatted for a while, and this resulted in three new clarifying paragraphs at the end of the is-ought section of my post ‘Pluralistic Moral Reductionism’.
Some remaining issues:
Even given your disclaimer, I suspect we still disagree on the merits of Taboo as it applies to metaethics. Have you tried having others who are metaethically confused play Taboo in real life, and if so, did it help?
People like Eliezer and Drescher, von Neumann and Savage, have been able to make clear progress in understanding the nature of rationality, and the methods they used did not involve much (if any) neuroscience. On “morality” we don’t have such past successes to guide us, but your focus on neuroscience still seems misguided according to my intuitions.
Yes. The most common result is that people come to realize they don’t know what they mean by ‘morally good’, unless they are theists.
If it looks like I’m focusing on neuroscience, I think that’s an accident of looking at work I’ve produced in a 4-month period rather than over a longer period (that hasn’t occurred yet). I don’t think neuroscience is as central to metaethics or rationality as my recent output might suggest. Humans with meat-brains are strange agents who will make up a tiny minority of rational and moral agents in the history of intelligent agents in our light-cone (unless we bring an end to intelligent agents in our light-cone).
Huh, I think that would have been good to mention in one of your posts. (Unless you did and I failed to notice it.)
It occurs to me that with a bit of tweaking to Austere Metaethics (which I’ll call Interim Metaethics), we can help everyone realize that they don’t know what they mean by “morally good”.
For example:
Deontologist: Should we build a deontic seed AI?
Interim Metaethicist: What do you mean by “should X”?
Deontologist: “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z.”
Interim Metaethicist: Are you sure? If that’s really what you mean, then when a consequentialist says “should X” he probably means “X maximizes expected utility according to decision theory Y and utility function Z”. In which case the two of you do not actually disagree. But you do disagree with him, right?
Deontologist: Good point. I guess I don’t really mean that by “should”. I’m confused.
(Doesn’t that seem like an improvement over Austere Metaethics?)
I guess one difference between us is that I don’t see anything particularly ‘wrong’ with using stipulative definitions as long as you’re aware that they don’t match the intended meaning (that we don’t have access to yet), whereas you like to characterize stipulative definitions as ‘wrong’ when they don’t match the intended meaning.
But perhaps I should add one post before my empathic metaethics post which stresses that the stipulative definitions of ‘austere metaethics’ don’t match the intended meaning—and we can make this point by using all the standard thought experiments that deontologists and utilitarians and virtue ethicists and contractarian theorists use against each other.
After the above conversation, wouldn’t the deontologist want to figure out what he actually means by “should” and what its properties are? Why would he want to continue to use the stipulated definition that he knows he doesn’t actually mean? I mean I can imagine something like:
Deontologist: I guess I don’t really mean that by “should”, but I need to publish a few more papers for tenure, so please just help me figure out whether we should build a deontic seed AI under that stipulated definition of “should”, so I can finish my paper and submit it to the Journal of Machine Deontology.
But even in this case it would make more sense for him to avoid “stipulative definition” and instead say
Deontologist: Ok, by “should” I actually mean a concept that I can’t define at this point. But I guess it has something to do with deontic logic, and it would be useful to explore the properties of deontic logic in more detail. So, can you please help me figure out whether building a deontic seed AI is obligatory (by deontic logic) if we assume axiomatic imperatives Y and Z?
This way, he clarifies to himself and others that “X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z” is not what he means by “should X”, but is instead a guess about the nature of morality (a concept that we can’t yet precisely define).
Perhaps you’d answer that a stipulated meaning is just that, a guess about the nature of something. But as you know, words have connotations, and I think the connotation of “guess” is more appropriate here than “meaning”.
The problem is that we have to act in the world now. We can’t wait around for metaethics and decision theory to be solved. Thus, science books have glossaries in the back full of highly useful operationalized and stipulated definitions for hundreds of terms, whether or not they match the intended meanings (that we don’t have access to) of those terms for person A, or the intended meanings of those terms for person B, or the intended meanings of those terms for person C.
I think this glossary business is a familiar enough practice that calling that thing a glossary of ‘meanings’ instead of a glossary of ‘guesses at meanings’ is fine. Maybe ‘meaning’ doesn’t have the connotations for me that it has for you.
Science needs doing, laws need to be written and enforced, narrow AIs need to be programmed, best practices in medicine need to be written, agents need to act… all before metaethics and decision theory are solved. In a great many cases, we need to have meaning_stipulated before we can figure out meaning_intended.
Sigh… Maybe I should just put a sticky note on my monitor that says
REMEMBER: You probably don’t actually disagree with Luke, because whenever he says “X means Z by Y”, he might just mean “X stipulated Y to mean Z”, which in turn is just another way of saying “X guesses that the nature of Y is Z”.
That might work.
We humans have different intuitions about the meanings of terms and the nature of meaning itself, and thus we’re all speaking slightly different languages. We always need to translate between our languages, which is where Taboo and testing for anticipations come in handy.
I’m using the concept of meaning from linguistics, which seems fair to me. In linguistics, stipulated meaning is most definitely a kind of meaning (and not merely a kind of guessing at meaning), for it is often “what is expressed by the writer or speaker, and what is conveyed to the reader or listener, provided that they talk about the same thing.”
Whatever the case, this language looks confusing/misleading enough to avoid. It conflates the actual search for intended meaning with all those irrelevant stipulations, and assigns misleading connotations to the words referring to these things. In Eliezer’s sequences, the term was “fake utility function”. The presence of “fake” in the term is important, it reminds of incorrectness of the view.
So far, you’ve managed to confuse me and Wei with this terminology alone, probably many others as well.
Perhaps, though I’ve gotten comments from others that it was highly clarifying for them. Maybe they’re more used to the meaning of ‘meaning’ from linguistics.
Does this new paragraph at the end of this section in PMR help?
It’s not clear from this paragraph whether “intuitive concept” refers to the oafish tools in the human brain (which have the same problems as stipulated definitions, including irrelevance) or the intended meaning that those tools seek. Conceptual analysis, as I understand it, is concerned with analysis of the imperfect intuitive tools, so it’s also unclear in what capacity you mention conceptual analysis here.
(I do think this and other changes will probably make new readers less confused.)
Here’s the way I’m thinking about it.
Roger has an intuitive concept of ‘morally good’, the intended meaning of which he doesn’t fully have access to (but it could be discovered by something like CEV). Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
The conceptual analyst comes along and says: “Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to a machine that gave each of them maximal, beyond-orgasmic pleasure for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?”
ROGER: Huh. I guess that’s not quite what I mean by ‘morally good’. I think what I mean by ‘morally good’ is ‘that which produces the greatest subjective satisfaction of wants in the greatest number’.
CONCEPTUAL ANALYST: Okay, then. Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to ‘The Matrix’ and make them believe and feel that all their wants were being satisfied, for the rest of their abnormally long lives. Then they will blast each person and their machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?
ROGER: No, I guess that’s not what I mean, either. What I really mean is...
And around and around we go, for centuries.
The problem with trying to access our intended meaning for ‘morally good’ by this intuitive process is that it brings into play, as you say, all the ‘oafish tools’ in the human brain. And philosophers have historically not paid much attention to the science of how intuitions work.
Does that make sense?
That intuition says the same thing as “pleasure-maximization”, or that intended meaning can be captured as “pleasure-maximization”? Even if intuition is saying exactly “pleasure-maximization”, it’s not necessarily the intended meaning, and so it’s unclear why one would try to replicate the intuitive tool, rather than search for a characterization of the intended meaning that is better than the intuitive tool. This is the distinction I was complaining about.
(This is an isolated point unrelated to the rest of your comment.)
Understood. I think I’m trying to figure out if there’s a better way to talk about this ‘intended meaning’ (that we don’t yet have access to) than to say ‘intended meaning’ or ‘intuitive meaning’. But maybe I’ll just have to say ‘intended meaning (that we don’t yet have access to)’.
New paragraph version:
You think this applies to figuring out decision theory for FAI? If not, how is that relevant in this context?
Vladimir,
I’ve been very clear many times that ‘austere metaethics’ is for clearing up certain types of confusions, but won’t do anything to solve FAI, which is why we need ‘empathic metaethics’.
I was discussing that particular comment, not rehashing the intention behind ‘austere metaethics’.
More specifically, you made a statement “We can’t wait around for metaethics and decision theory to be solved.” It’s not clear to me what purpose is being served by what alternative action to “waiting around for metaethics to be solved”. It looks like you were responding to Wei’s invitation to justify the use of word “meaning” instead of “guess”, but it’s not clear how your response relates to that question.
Like I said over here, I’m using the concept of ‘meaning’ from linguistics. I’m hoping that fewer people are confused by my use of ‘meaning’ as employed in the field that studies meaning than if I had used ‘meaning’ in a more narrow and less standard way, like Wei Dai’s. Perhaps I’m wrong about that, but I’m not sure.
My comment above about how “we have to act in the world now” gives one reason why, I suspect, the linguist’s sense of ‘meaning’ includes stipulated meaning, and why stipulated meaning is so common.
In any case, I think you and Wei Dai have helped me think about how to be more clear to more people by adding such clarifications as this.
(This is similar to my reaction expressed here.)
In those paragraphs, you add intuition as an alternative to stipulated meaning. But this is not what we are talking about; we are talking about some unknown, but normative, meaning that can’t be presently stipulated, and is referred to partly through intuition in a way that is more accurate than any currently available stipulation. What intuition tells us is as irrelevant as what the various stipulations tell us; what matters is the thing that the imperfect intuition refers to. This idea doesn’t require a notion of automated stipulation (“empathic” discussion).
“some unknown, but normative meaning that can’t be presently stipulated” is what I meant by “intuitive meaning” in this case.
I’ve never thought of ‘empathic’ discussion as ‘automated stipulation’. What do you mean by that?
Even our stipulated definitions are only promissory notes for meaning. Luckily, stipulated definitions can be quite useful for achieving our goals. Figuring out what we ‘really want’, or what we ‘rationally ought to do’ when faced with Newcomb’s problem, would also be useful. Such terms carry even vaguer promissory notes for meaning than stipulated definitions do, and yet they are worth pursuing.
My understanding of this topic is as follows.
Treat intuition as just another stipulated definition, that happens to be expressed as a pattern of mind activity, as opposed to a sequence of words. The intuition itself doesn’t define the thing it refers to, it can be slightly wrong, or very wrong. The same goes for words. Both intuition and various words we might find are tools for referring to some abstract structure (intended meaning), that is not accurately captured by any of these tools. The purpose of intuition, and of words, is in capturing this structure accurately, accessing its properties. We can develop better understanding by inventing new words, training new intuitions, etc.
None of these tools hold a privileged position with respect to the target structure, some of them just happen to more carefully refer to it. At the beginning of any investigation, we would typically only have intuitions, which specify the problem that needs solving. They are inaccurate fuzzy lumps of confusion, too. At the same time, any early attempt at finding better tools will be unsuccessful, explicit definitions will fail to capture the intended meaning, even as intuition doesn’t capture it precisely. Attempts at guiding intuition to better precision can likewise make it a less accurate tool for accessing the original meaning. On the other hand, when the topic is well-understood, we might find an explicit definition that is much better than the original intuition. We might train new intuitions that reflect the new explicit definition, and are much better tools than the original intuition.
As far as I can tell, I agree with all of this.
And as far as I can tell, you don’t agree. You express agreement too much; your stipulated-meaning thought experiments, for example, are one of the problems. But I’d probably need a significantly more clear presentation of what feels wrong to make progress on our disagreement.
I look forward to it.
I’m not sure what you mean by “you agree too much”, though. Like I said, as far as I can tell I agree with everything in this comment of yours.
I agree with Wei. There is no reason to talk about “highest expected value” specifically, that would be merely a less clear option on the same list as CDT and EDT recommendations. We need to find the correct decision instead, expected value or not.
Playing Eliezer-post-ping-pong, you are almost demanding “But what do you mean by truth?”. When an idea is unclear, there will be ways of stipulating a precise but even less accurate definition. Thus, you move away from the truth, even as you increase the clarity of discussion and defensibility of the arguments.
I updated the bit about expected value here.
No, I agree there are important things to investigate for which we don’t have clear definitions. That’s why I keep talking about ‘empathic metaethics.’
Also, by ‘less accurate definition’ do you just mean that a stipulated definition can differ from the intuitive definition that we don’t have access to? Well of course. But why privilege the intuitive definition by saying a stipulated definition is ‘less accurate’ than it is? I suspect that intuitive definitions are often much less successful at capturing an empirical cluster than some stipulated definitions. Example: ‘planet’.
Not “just”. Not every change is an improvement, but every improvement is a change. There can be better definitions of whatever the intuitions are talking about, and they will differ from the intuitive definitions. But when the purpose of a discussion is referred to only by an unclear intuition, with no other easy way to reach it, stipulating a different definition would normally be a change that is not an improvement.
It’s not easy to find a more successful definition of the same thing. You can’t always just say “taboo” and pick the best thought that decades of careful research failed to rule out. Sometimes the intuitive definition is still better, or, more to the point, the precise explicit definition still misses the point.
(They perhaps shouldn’t have done that.)
An analogy for “sharing common understanding of morality”. In the sound example, even though the arguers talk about different situations in a confusingly ambiguous way, they share a common understanding of what facts hold in reality. If they were additionally ignorant about reality in different ways (even though there would still be the same truth about reality, they just wouldn’t have reliable access to it), that would bring the situation closer to what we have with morality.
Can you elaborate this a bit more? I don’t follow.
Everyone understands “moral” to entail “should be praised/encouraged” and everyone understands “immoral” to entail “should be blamed/discouraged”
“Should”?
Of course “should”. It’s a definition, not a reduction.
Even by getting such confused answers out in the open, we might get them to break out of complacency and recognize the presence of confusion. (Fat chance, of course.)
This makes sense. My impression of the part of the sequence written so far would’ve been significantly affected if I had understood this intention (I don’t fully believe it now, but more so than I did before reading your comment).
What is ‘it’, here? My intention? If you have doubts that my intention has been (for many months) to first clear away the dust and confusion of mainstream metaethics so that we can focus more clearly on the more important problems of metaethics, you can ask Anna Salamon, because I spoke to her about my intentions for the sequence before I put up the first post in the sequence. I think I spoke to others about my intentions, too, but I can’t remember which parts of my intentions I spoke about to which people (besides Anna). There’s also this comment from me more than a month ago.
I believe that you believe it, but I’m not sure it’s so. There are many reasons for any event. Specifically, you use austere debating in real arguments, which suggests that you place more weight on the method than just as a tool for exposing confusion.
(You seem to have reacted emotionally to a question of simple fact, and thus conflated the fact with your position on the fact, which status intuitions love to make people do. I think it’s a bad practice.)
What do you mean by ‘austere debating’? Do you just mean tabooing terms and then arguing about facts and anticipations? If so then yes, I do that all the time...
I’m not sure if we totally agree, but if there is any disagreement left in this thread, I don’t think it’s substantial enough to keep discussing at this point. I’d rather that we move on to talking about how you propose to do empathic metaethics.
BTW, I’d like to give another example that shows the difficulty of reducing (some usages of) normative language to math/physics.
Suppose I’m facing Newcomb’s problem, and I say to my friend Bob, “I’m confused. What should I do?” Bob happens to be a causal decision theorist, so he says “You should two-box.” It’s clear that Bob cannot just mean “the arg max of … is ‘two-box’” (where … is the formula given by CDT), since presumably “you should two-box” is false and “the arg max of … is ‘two-box’” is true. Instead he probably means something like “CDT is the correct decision theory, and the arg max of … is ‘two-box’”, but how do we reduce the first part of this sentence to physics/math?
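For concreteness, here is a toy version of the two calculations Bob might be pointing at (the payoffs and the predictor accuracy are standard illustrative numbers; the functions are a sketch, not a claim about how either decision theory must be formalized). Both arg max computations are perfectly well-defined math; the part that resists easy reduction is the further claim that the CDT formula is the one to use.

```python
# Toy Newcomb's problem: the opaque box holds $1,000,000 iff the predictor
# predicted one-boxing; the transparent box always holds $1,000.
ACCURACY = 0.99          # assumed predictor accuracy
BIG, SMALL = 1_000_000, 1_000

def edt_expected_value(action):
    # EDT conditions on the action: choosing it is evidence about the prediction.
    p_big = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_big * BIG + (SMALL if action == "two-box" else 0)

def cdt_expected_value(action, p_big_fixed=0.5):
    # CDT holds the box contents fixed: the choice can't cause the prediction.
    return p_big_fixed * BIG + (SMALL if action == "two-box" else 0)

actions = ["one-box", "two-box"]
print("EDT recommends:", max(actions, key=edt_expected_value))   # one-box
print("CDT recommends:", max(actions, key=cdt_expected_value))   # two-box, for any fixed p
```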
I’m not saying that reducing to physics/math is easy. Even ought language stipulated to refer to, say, the well-being of conscious creatures is pretty hard to reduce. We just don’t have that understanding yet. But it sure seems to be pointing to things that are computed by physics. We just don’t know the details.
I’m just trying to say that if I’m right about reductionism, and somebody uses ought language in a way that isn’t likely to reduce to physics/math, then their ought language isn’t likely to refer successfully.
We can hold off the rest of the dialogue until after another post or two; I appreciate your help so far. As a result of my dialogue with you, Sawin, and Nesov, I’m going to rewrite the is-ought part of ‘Pluralistic Moral Reductionism’ for clarity.
Do you accept the conclusion I draw from my version of this argument?
I agree with you up to this part:
I made the same argument (perhaps not very clearly) at http://lesswrong.com/lw/44i/another_argument_against_eliezers_metaethics/
But I’m confused by the rest of your argument, and don’t understand what conclusion you’re trying to draw apart from “CEV can’t be the definition of morality”. For example you say:
I don’t understand why believing something to be important implies that it has a long definition.
Ah. So this is what I am saying.
If you say “I define should as [Eliezer’s long list of human values]”
then I say: “That’s a long definition. How did you pick that definition?”
and you say: “Well, I took whatever I thought was morally important, and put it into the definition.”
In the part you quote I am arguing that (or at least claiming that) other responses to my query are wrong.
I would then continue:
“Using the long definition is obscuring what you really mean when you say ‘should’. You really mean ‘what’s important’, not [the long list of things you think are important]. So why not just define it as that?”
One more way to describe this idea. I ask, “What is morality?”, and you say, “I don’t know, but I use this brain thing here to figure out facts about it; it errs sometimes, but it can provide limited guidance. Why do I believe this ‘brain’ is talking about morality? It says it does, and it doesn’t know of a better tool for that purpose presently available. By the way, it’s reporting that [such-and-such things] are morally relevant, and it is probably right.”
Where do you get “is probably right” from? I don’t think you can get that if you take an outside view and consider how often a human brain is right when it reports on philosophical matters in a similar state of confusion...
Salt to taste; the specific estimate is irrelevant to my point, so long as the brain is seen as collecting at least some moral information rather than defining the whole of morality. The level of certainty in the brain’s moral judgment won’t be stellar, but it will be more reliable for simpler judgments. Here, I referred to “morally relevant”, which is a rather weak, matter-of-priority kind of judgment, as opposed to deciding which of the given options are better.
Beautiful. I would draw more attention to the “Why...? It says it does” bit, but that seems right.
You’d need the FAI to be able to change its mind as well, which requires that you retain this option in its epistemology. To attack the communication issue from a different angle: could you give examples of the kinds of facts you deny? (Don’t say “god” or “magic”; give a concrete example.)
Yes, we need the FAI to be able to change its mind about physicalism.
I don’t think I’ve ever been clear about what people mean to assert when they talk about things that don’t reduce to physics/math.
Rather, people describe something non-natural or supernatural and I think, “Yeah, that just sounds confused.” Specific examples of things I deny because of my physicalism are Moore’s non-natural goods and Chalmers’ conception of consciousness.
Since you can’t actually reduce[*] 99.99% of your vocabulary, you’re either so confused you couldn’t possibly think or communicate... or you’re only confused about the nature of confusion.
[*] Try reducing “shopping” to quarks, electrons, and photons. You can’t do it, and if you could, it would tell you nothing useful. Yet there is nothing involved that is not made of quarks, electrons, and photons.
Not much better than “magic”, doesn’t help.
Is this because you’re not familiar with Moore on non-natural goods and Chalmers on consciousness, or because you agree with me that those ideas are just confused?
They are not precise enough to carefully examine. I can understand the distinction between a crumbling bridge and 3^^^^3 > 3^^^3; it’s much less clear what kind of thing “Chalmers’ view on consciousness” is. I guess I could say that I don’t see these things as facts at all unless I understand them, and some things are too confusing to expect to understand (my superpower is to remain confused by things I haven’t properly understood!).
(To compare, a lot of trouble with words is incorrectly assuming that they mean the same thing in different contexts, and then trying to answer questions about their meaning. But they might lack a fixed meaning, or any meaning at all. So the first step before trying to figure out whether something is true is understanding what is meant by that something.)
How are you on dark matter?
(No new idea is going to be precise, because precise definitions come from established theories, established theories come from speculative theories, and speculative theories are theories about something that is defined relatively vaguely. The oxygen theory of combustion was a theory about “how burning works”; it was not, circularly, the oxygen theory of oxidation.)
Dude, you really need to start distinguishing between reducible-in-principle, usefully-reducible, and doesn’t-need-reducing.
That’s making a pre-existing assumption that everyone speaks in physics language. It’s circular.
Speaking in physics language about something that isn’t in the actual physics is fiction. I’m not sure what magic is.
What is physics language? Physics language consists of statements that you can cash out, along with a physical world, to get “true” or “false”.
What is moral language? Moral language consists of statements that you can cash out, along with a preference order on the set of physical worlds, to get “true” or “false”.
ETA: If you don’t accept this, the first step is accepting that the statement “Flibber fladoo.” does not refer to anything in physics, and is not a fiction.
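(A minimal sketch of the two “cashing out” operations just described, using invented toy types; it is only meant to display the parallel structure, not to propose a real semantics.)

    # Toy illustration: both kinds of language are evaluated against something,
    # but against different things. All names here are invented for the sketch.
    from typing import Callable

    World = dict                              # stand-in for a complete physical world-description
    Order = Callable[[World, World], int]     # a preference order: returns -1, 0, or +1

    def cash_out_physical(statement: Callable[[World], bool], world: World) -> bool:
        """Physics language: a statement plus a physical world yields True or False."""
        return statement(world)

    def cash_out_moral(statement: Callable[[Order], bool], order: Order) -> bool:
        """Moral language: a statement plus a preference order yields True or False."""
        return statement(order)

    # Toy examples of each kind of statement:
    contains_a_calculator = lambda world: world.get("calculators", 0) > 0
    prefers_world_1_to_2 = lambda order: order({"id": 1}, {"id": 2}) > 0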
No, of course lots of people use ‘ought’ terms and other terms without any reduction to physics in mind. All I’m saying is that if I’m right about reductionism, those uses of ought language will fail to refer.
Sure, that’s one way to use moral language. And your preference order is computed by physics.
That’s the way I’m talking about, so you should be able to ignore the other ways in your discussion with me.
You are proposing a function MyOrder from {states of the world} to {preference orders}
This gives you a natural function from {statements in moral language} to {statements in physical language}
but this is not a reduction; it’s not what those statements mean, because it’s not what they’re defined to mean.
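(Continuing the toy types from the sketch above: my_order below is a stand-in for MyOrder, and translate is the induced “natural function”. The objection is that translate(s) is a physical statement associated with the moral statement s, not what s means.)

    # Sketch of how a map from world-states to preference orders induces a translation
    # from moral statements to physical statements. Names are illustrative only.
    from typing import Callable

    World = dict
    Order = Callable[[World, World], int]

    def my_order(world: World) -> Order:
        """Stand-in for MyOrder: read a preference order off the brain-states in `world`."""
        return lambda a, b: 0        # placeholder ordering, for illustration only

    def translate(moral_statement: Callable[[Order], bool]) -> Callable[[World], bool]:
        """The induced physical statement: 'the order MyOrder extracts from the actual
        world satisfies the moral statement'."""
        return lambda world: moral_statement(my_order(world))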
I think I must be using the term ‘reduction’ in a broader sense than you are. By reduction I just mean the translation of (in this case) normative language to natural language—cashing things out in terms of lower-level natural statements.
But you can’t reduce an arbitrary statement. You can only do so when you have a definition that allows you to reduce it. There are several potential functions from {statements in moral language} to {statements in physical language}. You are proposing that for each meaningful use of moral language, one such function must be correct by definition.
I am saying, no, you can just make statements in moral language which do not correspond to any statements in physical language.
Not what I meant to propose. I don’t agree with that.
Of course you can. People do it all the time. But if you’re a physicalist (by which I mean to include Tegmarkian radical platonists), then those statements fail to successfully refer. That’s all I’m saying.
I am standing up for the usefulness and well-definedness of statements that fail to successfully refer.
Okay, we’re getting nearer to understanding each other, thanks. :)
Perhaps you could give an example of a non-normative statement that is well-defined and useful even though it fails to refer? Perhaps then I can grok better where you’re coming from.
Elsewhere, you said:
Goodness, no. I’m not arguing that all translations of ‘ought’ are equally useful as long as they successfully refer!
But now you’re talking about something different than the is-ought gap. You’re talking about a gap between “hypothetical-ought-statements and categorical-ought-statements.” Could you describe the gap, please? ‘Categorical ought’ in particular leaves me with uncertainty about what you mean, because that term is used in a wide variety of ways by philosophers, many of them incoherent.
I genuinely appreciate you sticking this out with me. I know it’s taking time for us to understand each other, but I expect serious fruit to come of mutual understanding.
I don’t think any exist, so I could not do so.
I’m saying that the fact that you can use a word to have a meaning in class X does not provide much evidence that the other uses of that word have a meaning in class X.
Hypothetical-ought statements are a certain kind of statement about the physical world. They’re the kind that contain the word “ought”, but they’re just an arbitrary subset of the “is”-statements.
Categorical-ought statements are statements of support for a preference order (not statements about support).
Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.
(Physical facts can inform you about what the right preference order is, if you expect that they are related to the moral facts.)
Perhaps the right thing to say is “No fact can alone imply a preference order.”
But no fact can alone imply anything (in this sense); it’s not a point specific to moral values, and in any case it’s a trivial, uninteresting point that is easily confused with a refutation of the statement I noted in the grandparent.
No fact alone can imply anything: true and important. For example, a description of my brain at the neuronal level does not imply that I’m awake. To get the implication, we need to add a definition (or at least some rule) of “awake” in neuronal terms. And this definition will not capture the meaning of “awake.” We could ask, “given that a brain is [in such-and-such a state], is it awake?” and intuition will tell us that it is an open question.
But that is beside the point, if what we want to know is whether the definition succeeds. The definition does not have to capture the meaning of “awake”. It only needs to get the reference correct.
Reduction doesn’t typically involve capturing the meaning of the reduced terms. Is the (meta)ethical case special? If so, why and how?
Great question. It seems to me that normative ethics involves reducing the term “moral” without necessarily capturing the meaning, whereas metaethics involves capturing the meaning of the term. And the reason we want to capture the meaning is so that we know what it means to do normative ethics correctly (instead of just doing it by intuition, as we do now). It would also allow an AI to perform normative ethics (i.e., reduce “moral”) for us, instead of humans reducing the term and programming a specific normative ethical theory into the AI.
I doubt that metaethics can wholly capture the meaning of ethical terms, but I don’t see that as a problem. It can still shed light on issues of epistemics, ontology, semantics, etc. And if you want help from an AI, any reduction that gets the reference correct will do, regardless of whether meaning is captured. A reduction need not be a full-blown normative ethical theory. It just needs to imply one, when combined with other truths.
This is not a problem in the same sense as astronomical waste that will occur during the rest of this year is not a problem: it’s not possible to do something about it.
(I agree with your comment.)
A formal logical definition often won’t capture the full meaning of a mathematical structure (there may be non-standard models of the logical theory, and true statements it won’t infer), yet it has the special power of allowing you to correctly infer lots of facts about that structure without knowing anything else about the intended meaning. If we are given just a little bit less, then the power to infer stuff gets reduced dramatically.
It’s important to get a definition of morality in a similar sense and for similar reasons: it won’t capture the whole thing, yet it must be good enough to generate right actions even in currently unimaginable contexts.
Formal logic does seem very powerful, yet incomplete. Would you be willing to create an AI with such limited understanding of math or morality (assuming we can formalize an understanding of morality on par with math), given that it could well obtain supervisory power over humanity? One might justify it by arguing that it’s better than the alternative of trying to achieve and capture fuller understanding, which would involve further delay and risk. See for example Tim Freeman’s argument in this line, or my own.
Another alternative is to build an upload-based FAI instead, like Stuart Armstrong’s recent proposal. That is, use uploads as components in a larger system, with lots of safety checks. In a way Eliezer’s FAI ideas can also be seen as heavily upload based, since CEV can be interpreted (as you did before) as uploads with safety checks. (So the question I’m asking can be phrased as, instead of just punting normative ethics to CEV, why not punt all of meta-math, decision theory, meta-ethics, etc., to a CEV-like construct?)
Of course you’re probably just as unsure of these issues as I am, but I’m curious what your current thoughts are.
Humans are also incomplete in this sense. We already have no way of capturing the whole problem statement. The goal is to capture it as well as possible using some reflective trick of looking at our own brains or behavior, which is probably way better than what an upload singleton that doesn’t build a FAI is capable of.
If there are uploads, they could be handed the task of solving the problem of FAI in the same sense in which we try to, but this doesn’t get us any closer to the solution. There should probably be a charity dedicated to designing upload-based singletons as a kind of high-impact applied normative ethics effort (and SIAI might want to spawn one, since rational thinking about morality is important for this task; we don’t want fatalistic acceptance of a possible Malthusian dystopia or unchecked moral drift), but this is not the same problem as FAI.
Humans are at least capable of making some philosophical progress, and until we solve meta-philosophy, no de novo AI is. Assuming that we don’t solve meta-philosophy first, any de novo AIs we build will be more incomplete than humans. Do you agree?
It gets closer to the solution in the sense that there is no longer a time pressure, since it’s easier for an upload-singleton to ensure their own value stability, and they don’t have to worry about people building uFAIs and other existential risks while they work on FAI. They can afford to try harder to get to the right solution than we can.
There is a time pressure from existential risk (also, astronomical waste). Just as in FAI vs. AGI race, we would have a race between FAI-building and AGI-building uploads (in the sense of “who runs first”, but also literally while restricted by speed and costs). And fast-running uploads pose other risks as well, for example they could form an unfriendly singleton without even solving AGI, or build runaway nanotech.
(Planning to make sure that we run a prepared upload-FAI team before a singleton of any other nature can prevent it is an important contingency; someone should get on that in the coming decades, and better metaethical theory and rationality education can help with that task.)
I should have made myself clearer. What I meant was assuming that an organization interested in building FAI can first achieve an upload-singleton, it won’t be facing competition from other uploads (since that’s what “singleton” means). It will be facing significantly less time pressure than a similar organization trying to build FAI directly. (Delay will still cause astronomical waste due to physical resources falling away into event horizons and the like, but that seems negligible compared to the existential risks that we face now.)
But this assumption is rather unlikely/difficult to implement, so in the situation where we count on it, we’ve already lost a large portion of the future. Also, this course of action (unlikely to succeed as it is in any case) significantly benefits from massive funding to buy computational resources, which is a race. The other alternative, which is educating people in a way that increases the chances of a positive upload-driven outcome, is also a race, for development of better understanding of metaethics/rationality and for educating more people better.
Philosophical progress is just a special kind of physical action that we can perform, valuable for abstract reasons that feed into what constitutes our values. I don’t see how this feature is fundamentally different from pointing to any other complicated aspect of human values and saying that AI must be able to make that distinction or destroy all value with its mining claws. Of course it must.
Agreed, however, it is somewhat useful in pointing out a specific, common, type of bad argument.
Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?
Agreed.
What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in “I support preference-ordering X”, as opposed to a statement about support as in “preference-ordering X is ‘good’ if ‘good’ is defined as ‘maximizes Y’”?
What do you mean by ‘preference order’ such that no fact can imply a preference order? I’m thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...
Because a positive (“is”) statement + a normative (“ought”) statement is enough information to determine an action, and once actions are determined you don’t need further information.
“information” may not be the right word.
I believe “I ought to do X” if and only if I support preference-ordering X.
I’m thinking of a preference order as just that: a map from the set of {states of the world} x {states of the world} to the set {>, =, <}. The brain state encodes a preference order but it does not constitute a preference order.
I believe “this preference order is correct” if and only if there is an encoding in my brain of this preference order.
Much like how:
I believe “this fact is true” if and only if there is an encoding in my brain of this fact.
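(A minimal sketch of the distinction being drawn: the preference order itself is just the table of comparisons over pairs of world-states, while a brain state, or here a small dict, merely encodes one. All names are illustrative.)

    # The abstract order is a map from {states} x {states} to {'>', '=', '<'};
    # the dict below is only an encoding of it, the way a brain state would be.
    from itertools import combinations

    WORLD_STATES = ["w1", "w2", "w3"]               # stand-ins for complete world-states
    ENCODED_RANKING = {"w1": 2, "w2": 1, "w3": 0}   # what a brain state might encode

    def preference_order(a: str, b: str) -> str:
        """Compare two world-states, returning '>', '=', or '<'."""
        if ENCODED_RANKING[a] > ENCODED_RANKING[b]:
            return ">"
        if ENCODED_RANKING[a] < ENCODED_RANKING[b]:
            return "<"
        return "="

    # The order itself is the whole table of comparisons, not the dict that encodes it:
    table = {(a, b): preference_order(a, b) for a, b in combinations(WORLD_STATES, 2)}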
I’ve continued our dialogue here.
What if it’s encoded outside your brain, in a calculator for example, while your brain only knows that the calculator shows “28” on its display iff the fact is true? Or, say, I know that my computer contains a copy of “Understand” by Ted Chiang, even though I don’t remember its complete text. Finally, some parts of my brain don’t know what other parts of my brain know. The brain doesn’t hold a privileged position with respect to where the data must be encoded in order to be referred to; it can just as easily point elsewhere.
Well, if I see the screen then there’s an encoding of “28” in my brain. Not of the reason why 28 is true, but at least that the answer is “28”.
You believe that “the computer contains a copy of Understand”, not “the computer contains a book with the following text: [text of Understand]”.
Obviously, on the level of detail in which the notion of “belief” starts breaking down, the notion of “belief” starts breaking down.
But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for that statement, just the statement.
Yet you might not know the question. “28” only certifies that the question makes a true statement.
Exactly. You don’t know [text of Understand], yet you can reason about it, and use it in your designs. You can copy it elsewhere, and you’ll know that it’s the same thing somewhere else, all without having an explicit or any definition of the text, only diverse intuitions describing its various aspects and tools for performing operations on it. You can get an md5 sum of the text, for example, and make a decision depending on its value, and you can rely on the fact that this is an md5 sum of exactly the text of “Understand” and nothing else, even though you don’t know what the text of “Understand” is.
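(A small illustration of the md5 point, assuming a hypothetical file path: you can compute and compare the digest without ever “knowing” the text.)

    # Compute the md5 checksum of a file's bytes without the human operator
    # ever knowing what the text says. The path is hypothetical.
    import hashlib

    def md5_of_file(path: str) -> str:
        """Return the md5 hex digest of a file, read in chunks."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # You can rely on two copies being the same text without knowing the text itself:
    # md5_of_file("understand_copy_1.txt") == md5_of_file("understand_copy_2.txt")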
This sort of deep wisdom needs to be the enemy (it strikes me often enough). It acts as a curiosity-stopper, covering up the difficulty of understanding things more accurately. (What is “just a statement”?)
In certain AI designs, this problem is trivial. In humans, this problem is not simple.
The complexities of the human version of this problem do not have relevance to anything in this overarching discussion (that I am aware of).
So you say. Many would say that you need the argument (proof, justification, evidence) for a true belief for it to qualify as knowledge.
Obviously, this doesn’t prevent me from saying that I know something without an argument.
You can say that you are the Queen of Sheba.
It remains the case that knowledge is not lucky guessing, so an argument, evidence or some other justification is required.
Yes, but this is completely and totally irrelevant to the point I was making, that:
I will profess that a statement, X, is true, if and only if “X” is encoded in a certain manner in my brain.
Yet “X is true” does not mean “X is encoded in this manner in my brain.”
Been really busy, will respond to this in about a week. I want to read your earlier discussion post, first, too.
Encodings are relative to interpretations. Something has to decide that a particular fact encodes particular other fact. And brains don’t have a fundamental role here, even if they might contain most of the available moral information, if you know how to get it.
The way in which decisions are judged to be right or wrong based on moral facts and facts about the world, where both are partly inferred with use of empirical observations, doesn’t fundamentally distinguish the moral facts from the facts about the world, so it’s unclear how to draw a natural boundary that excludes non-moral facts without excluding moral facts also.
My ideas work unless it’s impossible to draw the other kind of boundary, including only facts about the world and not moral facts.
Is it? If it’s impossible, why?
It’s the same boundary, just the other side. If you can learn of moral facts by observing things, if your knowledge refers to a joint description of moral and physical facts (with the state of your brain, say, as the physical counterpart), and so your understanding of moral facts benefits from better knowledge and further observation of physical facts, then you shouldn’t draw this boundary.
There is an asymmetry. We can only make physical observations, not moral observations.
This means that every state of knowledge about moral and physical facts maps to a state of knowledge about just physical facts, and the evolution of the second is determined only by evidence, with no reference to moral facts.
To the extent that we haven’t defined what “moral observations” are exactly, so that the possibility isn’t ruled out in a clear sense, I’d say that we can make moral observations, in the same sense in which we can make arithmetical observations by looking at a calculator display or consulting one’s own understanding of mathematical facts maintained by the brain.
That is, by deducing mathematical facts from new physical facts.
Can you deduce physical facts from new moral facts?
Not necessarily, you can just use physical equipment without having any understanding of how it operates or what it is, and the only facts you reason about are non-physical (even though you interact with physical facts, without reasoning about them).
Why not?
Because your only sources of new facts are your senses.
You can’t infer new (to you) facts from information you already have? You can’t just be told things? A Martian, being told that premarital sex became less of an issue after the sixties, might be able to deduce the physical fact that contraceptive technology improved in the sixties.
I guess you could but you couldn’t be a perfect Bayesian.
Generally, when one is told something, one becomes aware of this from one’s senses, and then infers things from the physical fact that one is told.
I’m definitely not saying this right. The larger point I’m trying to make is that it makes sense to consider an agent’s physical beliefs and ignore their moral beliefs. That is a well-defined thing to do.
Where does it say that? One needs good information, but the senses can err, and hearsay can be reliable.
The senses are of course involved in acquiring second-hand information, but there is still a categorical difference between showing and telling.
In order to achieve what?
Simplicity, maybe?
A simple way of doing what?
Answering questions like “What are true beliefs? What is knowledge? How does science work?”
How can you answer questions about true moral beliefs whilst ignoring moral beliefs?
Well, that’s one of the things you can’t do whilst ignoring moral beliefs.
All the same comprehension of the state of the world, in which beliefs about “true morals” remain accessible; they are simply considered to be physical facts about the construction of certain agents.
That’s an answer to the question “how do you deduce moral beliefs from physical facts”, not the question at hand: “how do you deduce moral beliefs from physical beliefs”.
Physical beliefs are constructed from physical facts. Just like everything else!
But the context of the discussion was what can be inferred from physical beliefs.
Also your thoughts, your reasoning, which is machinery for perceiving abstract facts, including moral facts.
How might one deduce new physical facts from new moral facts produced by abstract reasoning?
You can predict that (physical) human babies won’t be eaten too often. Or that a calculator will have a physical configuration displaying something that you inferred abstractly.
You can make those arguments in an entirely physical fashion. You don’t need the morality.
You do need the mathematical abstraction to bundle and unbundle physical facts.
You can use calculators without knowing abstract math too, but it makes sense to talk of mathematical facts independent of calculators.
But it also makes sense to talk about calculators without abstract math.
That’s all I’m saying.
I agree. But it’s probably not all that you’re saying, since this possibility doesn’t reveal problems with inferring physical facts from moral facts.
There is a mapping from physical+moral belief structures to just-physical belief structures.
Correct physical-moral deductions map to correct physical deductions.
The end physical beliefs are purely explained by the beginning physical beliefs + new physical observations.
Meaning what? Are you saying you can get oughts from ises?
No, I’m saying you can distinguish oughts from ises.
I am saying that you can move from is to is to is and never touch upon oughts.
That you can solve all is-problems while ignoring oughts.
Logic can be used to talk about non-physical facts. Do you allow referring to logic even where the logic is talking about non-physical facts, or do you only allow referring to the logic that is talking about physical facts? Or maybe you taboo intended interpretation, however non-physical, but still allow the symbolic game itself to be morally relevant?
Alas, I think this is getting us into the problem of universals. :)
With you, too, Vladimir, I suspect our anticipations do not differ, but our language for talking about these subtle things is slightly different, and thus it takes a bit of work for us to understand each other.
By “logic referring to non-physical facts”, do you have in mind something like “20+7=27”?
“3^^^^3 > 3^^^3”, properties of higher cardinals, hyperreal numbers, facts about a GoL (Game of Life) world, about universes with various oracles we don’t have.
Things for which you can’t build a trivial analogy out of physical objects, like a pile of 27 rocks (which are not themselves simple, but this is not easy to appreciate in the context of this comparison).
Certainly, one could reduce normative language into purely logical-mathematical facts, if that was how one was using normative language. But I haven’t heard of people doing this. Have you? Would a reduction of ‘ought’ into purely mathematical statements ever connect up again to physics in a possible world? If so, could you give an example—even a silly one?
Since it’s hard to convey tone through text, let me explicitly state that my tone is a genuinely curious and collaboratively truth-seeking one. I suspect you’ve done more and better thinking on metaethics than I have, so I’m trying to gain what contributions from you I can.
Why do you talk of “language” so much? Suppose we didn’t have language (and there was only ever a single person), I don’t think the problem changes.
Say, I would like to minimize ((X-2)*(X-2)+3)^^^3, where X is the number I’m going to observe on the screen. This is a pretty self-contained specification, and yet it refers to the world. The “logical” side of this can be regarded as a recipe, a symbolic representation of your goals. It also talks about a number that is too big to fit into the physical world.
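(A hedged sketch of the “recipe” reading: the value ((X-2)*(X-2)+3)^^^3 is far too large to evaluate, but since the up-arrow tower grows with its base, the induced preference over observations is settled by the base alone. The function names are mine, purely for illustration.)

    # The goal refers to a quantity too large to fit into the physical world, yet it
    # still induces a usable preference over what number appears on the screen.
    def base(x: float) -> float:
        """The base of the up-arrow tower: (X-2)^2 + 3, always at least 3."""
        return (x - 2) * (x - 2) + 3

    def prefer(x1: float, x2: float) -> str:
        """Compare two candidate observations by the objective, using monotonicity
        in the base instead of evaluating the astronomically large tower."""
        if base(x1) < base(x2):
            return "prefer x1"
        if base(x1) > base(x2):
            return "prefer x2"
        return "indifferent"

    # The best observable number is X = 2, determined without ever evaluating ^^^3:
    assert prefer(2, 5) == "prefer x1"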
Okay, sure. We agree about this, then.
This would require that we both have positions that accurately reflect reality, or are somehow synchronously deluded. This is a confusing territory, I know that I don’t know enough to be anywhere confident in my position, and even that position is too vague to be worth systematically communicating, or to describe some important phenomena (I’m working on that). I appreciate the difficulty of communication, but I don’t believe that we would magically meet at the end without having to change our ideas in nontrivial ways.
I just mean that our anticipations do not differ in a very local sense. As an example, imagine that we were using ‘sound’ in different ways like Albert and Barry. Surely Albert and Barry have different anticipations in many ways, but not with respect to the specific events closely related to the tree falling in a forest when nobody is around.
Or maybe things that just don’t usefully reduce.