Taboo both “morality” and “logical” and you may find that you and Eliezer have no disagreement.
LessWrongers routinely disagree on what is meant by “morality”. If you think “morality” is ambiguous, then stipulate a meaning (‘morality₁ is...’) and carry on. If you think people’s disagreement about the content of “morality” makes it gibberish, then denying that there are moral truths, or that those truths are “logical,” will equally be gibberish. Eliezer’s general practice is to reason carefully but informally with something in the neighborhood of our colloquial meanings of terms, when it’s clear that we could stipulate a precise definition that adequately approximates what most people mean. Words like ‘dog’ and ‘country’ and ‘number’ and ‘curry’ and ‘fairness’ are fuzzy (if not outright ambiguous) in natural language, but we can construct more rigorous definitions that aren’t completely semantically alien.
Surprisingly, we seem to be even less clear about what is meant by “logic”. A logic, simply put, is a set of explicit rules for generating lines in a proof. And “logic,” as a human practice, is the use and creation of such rules. But people informally speak of things as “logical” whenever they have a ‘logicalish vibe,’ i.e., whenever they involve especially rigorous abstract reasoning.
Eliezer’s standard use of ‘logical’ takes the ‘abstract’ part of logicalish vibes and runs with it; he adopts the convention that sufficiently careful purely abstract reasoning (i.e., reasoning without reasoning about any particular spatiotemporal thing or pattern) is ‘logical,’ whereas reasoning about concrete things-in-the-world is ‘physical.’ Of course, in practice our reasoning is usually a mix of logical and physical; but Eliezer’s convention gives us a heuristic for determining whether some x that we appeal to in reasoning is logical (i.e., abstract, nonspatial) or physical (i.e., concrete, spatially located). We can easily see that if the word ‘fairness’ denotes anything (i.e., it’s not like ‘unicorn’ or ‘square circle’), it must be denoting a logical/abstract sort of thingie, since fairness isn’t somewhere. (Fairness, unlike houses and candy, does not decompose into quarks and electrons.)
By the same reasoning, it becomes clear that things like ‘difficulty’ and ‘the average South African male’ and ‘the set of prime numbers’ and ‘the legal system of Switzerland’ are not physical objects; there isn’t any place where difficulty literally is, as though it were a Large Object hiding someplace just out of view. It’s an abstraction (or, in EY’s idiom, a ‘logical’ construct) our brains posit as a tool for thinking, in the same fundamental way that we posit numbers, sets, axioms, and possible worlds. The posits of literary theory are frequently ‘logical’ (i.e., abstract) in Eliezer’s sense, when they have semantic candidates we can stipulate as having adequately precise characteristics. Eliezer’s happy to be big-tent here, because he’s doing domain-general epistemology and (meta)physics, not trying to lay out the precise distinctions between different fields in academia. And doing so highlights the important point that reasoning about what’s moral is not categorically unlike reasoning about what’s difficult, or what’s a planet, or what’s desirable, or what’s common, or what’s illegal; our natural-language lay-usage may underdetermine the answers to those questions, but there are much more rigorous formulations in the same semantic neighborhood that we can put to very good use.
So if we mostly just mean ‘abstract’ and ‘concrete,’ why talk about ‘logical’ and ‘physical’ at all? Well, I think EY is trying to constrain what sorts of abstract and concrete posits we take seriously. Various concepts of God, for instance, qualify as ‘abstract’ in the sense that they are not spatial; and psychic rays qualify as ‘concrete’ in the sense that they occur in specific places; but based on a variety of principles (e.g., ‘abstract things have no causal power in their own right,’ ‘concrete things do not travel faster than light,’ and ‘concrete things are not irreducibly “mental”’), he seeks to tersely rule out the less realistic spatial and non-spatial posits some people make, so that the epistemic grown-ups can have a more serious discussion amongst themselves.
he adopts the convention that sufficiently careful purely abstract reasoning (i.e., reasoning without reasoning about any particular spatiotemporal thing or pattern) is ‘logical,’
If this is the case, then I think he has failed to show that morality is logic, unless he’s using an extremely lax standard of “sufficiently careful”. For example, I think that “sufficiently careful” reasoning must at a minimum be using a method of reasoning that is not sensitive to the order in which one encounters arguments, and is not sensitive to the mood one is in when considering those arguments. Do you think Eliezer has shown this? Or alternatively, what standard of “sufficiently careful” do you think Eliezer is using when he says “morality is logic”?
I’d split up Eliezer’s view into several distinct claims:
1. A semantic thesis: Logically regimented versions of fairness, harm, obligation, etc. are reasonable semantic candidates for moral terms. They may not be what everyone actually means by ‘fair’ and ‘virtuous’ and so on, but they’re modest improvements in the same way that a rigorous genome-based definition of Canis lupus familiaris would be a reasonable improvement upon our casual, everyday concept of ‘dog,’ or that a clear set of thermodynamic thresholds would be a reasonable regimentation of our everyday concept ‘hot.’
2. A metaphysical thesis: These regimentations of moral terms do not commit us to implausible magical objects like Divine Commands or Irreducible ‘Oughtness’ Properties In Our Fundamental Physics. All they commit us to are the ordinary objects of physics, logic, and mathematics, e.g., sets, functions, and causal relationships; and sets, functions, and causality are not metaphysically objectionable.
3. A normative thesis: It is useful to adopt moralityspeak ourselves, provided we do so using a usefully regimented semantics. The reasons to refuse to talk in a moral idiom are, in part thanks to 1 and 2, not strong enough to outweigh the rhetorical and self-motivational advantages of adopting such an idiom.
It seems clear to me that you disagree with thesis 1; but if you granted 1 (e.g., granted that ‘a function that takes inequitable distributions of resources between equally deserving agents into equitable distributions thereof’ is not a crazy candidate meaning for the English word ‘fairness’), would you still disagree with 2 and 3? And do you think that morality is unusual in failing 1-style regimentation, or do you think that we’ll eventually need to ditch nearly all English-language terms if we are to attain rigor?
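To make thesis 1’s example concrete, here is a minimal sketch (my own illustration, not anything from the thread) of a regimented ‘fairness’ function for the degenerate case of equally deserving agents and a single divisible resource; the agent names and the equal-split rule are assumptions for illustration only:

```python
from typing import Dict

def fair_redistribution(allocation: Dict[str, float]) -> Dict[str, float]:
    """Map a (possibly inequitable) allocation among equally deserving agents
    to the equitable allocation of the same total resource.
    This is only the degenerate case: equal desert, one divisible good."""
    total = sum(allocation.values())
    share = total / len(allocation)
    return {agent: share for agent in allocation}

def is_fair(allocation: Dict[str, float], tol: float = 1e-9) -> bool:
    """In this toy sense, an allocation is 'fair' iff redistribution fixes it in place."""
    ideal = fair_redistribution(allocation)
    return all(abs(allocation[a] - ideal[a]) <= tol for a in allocation)

print(fair_redistribution({"alice": 9.0, "bob": 1.0, "carol": 2.0}))
# {'alice': 4.0, 'bob': 4.0, 'carol': 4.0}
print(is_fair({"alice": 4.0, "bob": 4.0}))  # True
```

A serious regimentation would have to parameterize desert, indivisible goods, and so on; the point is only that ‘fairness’ can be given satisfaction conditions precise enough to compute with.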
I like this splitup!

Eliezer’s standard use of ‘logical’ takes the ‘abstract’ part of logicalish vibes and runs with it; he adopts the convention that sufficiently careful purely abstract reasoning (i.e., reasoning without reasoning about any particular spatiotemporal thing or pattern) is ‘logical,’ whereas reasoning about concrete things-in-the-world is ‘physical.’

(From the great-grandparent.)

I think I want to make a slightly stronger claim than this; i.e., that by logical discourse we’re thinning down a universe of possible models using axioms.
One thing I didn’t go into, in this epistemology sequence, is the notion of ‘effectiveness’ or ‘formality’, which is important but I didn’t go into as much because my take on it feels much more standard—I’m not sure I have anything more to say about what constitutes an ‘effective’ formula or axiom or computation or physical description than other workers in the field. This carries a lot of the load in practice in reductionism; e.g., the problem with irreducible fear is that you have to appeal to your own brain’s native fear mechanisms to carry out predictions about it, and you can never write down what it looks like. But after we’re done being effective, there’s still the question of whether we’re navigating to a part of the physical universe, or narrowing down mathematical models, and by ‘logical’ I mean to refer to the latter sort of thing rather than the former. The load of talking about sufficiently careful reasoning is mostly carried by ‘effective’ as distinguished from empathy-based predictions, appeals to implicit knowledge, and so on.
I also don’t claim to have given morality an effective description—my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms—but the metaphysical and normative claim is that these reasons-for-action both have an effective description (descriptively speaking) and that any idealized or normative version of them would still have an effective description (normatively speaking).
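A toy illustration (my construction, not Eliezer’s) of “thinning down a universe of possible models using axioms”: enumerate every binary operation on a two-element set and watch each added axiom shrink the set of surviving models.

```python
from itertools import product

domain = [0, 1]

# The 'universe of possible models': every binary operation on {0, 1},
# written as a lookup table op[(x, y)] = z. There are 2**4 = 16 of them.
all_ops = [dict(zip(product(domain, repeat=2), outputs))
           for outputs in product(domain, repeat=4)]

def commutative(op):
    return all(op[(x, y)] == op[(y, x)] for x in domain for y in domain)

def has_identity(op):
    return any(all(op[(e, x)] == x == op[(x, e)] for x in domain) for e in domain)

# Each axiom thins the space of surviving models.
survivors = all_ops
print(len(survivors))                                    # 16
survivors = [op for op in survivors if commutative(op)]
print(len(survivors))                                    # 8
survivors = [op for op in survivors if has_identity(op)]
print(len(survivors))                                    # 4
```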
Let me try a different tack in my questioning, as I suspect maybe your claim is along a different axis than the one I described in the sibling comment. So far you’ve introduced a bunch of “moving parts” for your metaethical theory:
moral arguments
implicit reasons-for-action
effective descriptions of reasons-for-action
utility function
But I don’t understand how these are supposed to fit together, in an algorithmic sense. In decision theory we also have missing modules or black boxes, but at least we specify their types and how they interact with the other components, so we can have some confidence that everything might work once we fill in the blanks. Here, what are the types of each of your proposed metaethical objects? What’s the “controlling algorithm” that takes moral arguments and implicit reasons-for-action, and produces effective descriptions of reasons-for-action, and eventually the final utility function?
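To make the type-level question concrete, here is one hypothetical way such declarations could look; every name and signature below is my invention for illustration (nothing here is Eliezer’s), and the two NotImplementedError stubs mark exactly the black boxes being asked about:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical stand-in types; the question is what should really go here.
World = str          # a description of an outcome or state of affairs
Utility = float

@dataclass
class MoralArgument:
    """An argument appealing to (hopefully shared) reasons-for-action."""
    text: str

@dataclass
class ImplicitReason:
    """A reason-for-action an agent acts on without having articulated it."""
    description: str

# An 'effective description' of a reason-for-action: explicit enough to be
# computed; modeled here (purely illustratively) as a scoring function.
EffectiveReason = Callable[[World], Utility]

def controlling_algorithm(arguments: List[MoralArgument],
                          implicit_reasons: List[ImplicitReason]) -> List[EffectiveReason]:
    """The first black box: how do moral arguments plus implicit
    reasons-for-action yield effective descriptions of those reasons?"""
    raise NotImplementedError

def aggregate(reasons: List[EffectiveReason]) -> Callable[[World], Utility]:
    """The second black box: how do effective reasons combine into the
    final utility function?"""
    raise NotImplementedError
```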
As you argued in Unnatural Categories (which I keep citing recently), reasons-for-action can’t be reduced the same way as natural categories. But it seems completely opaque to me how they are supposed to be reduced, besides that moral arguments are involved.
Am I asking for too much? Perhaps you are just saying that these must be the relevant parts, and let’s figure out both how they are supposed to work internally, and how they are supposed to fit together?
my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms
So would it be fair to say that your actual moral arguments do not consist of sufficiently careful reasoning?
these reasons-for-action both have an effective description (descriptively speaking)
Is there a difference between this claim and the claim that our actual cognition about morality can be described as an algorithm? Or are you saying that these reasons-for-action constitute (currently unknown) axioms which together form a consistent logical system?
Can you see why I might be confused? The former interpretation is too weak to distinguish morality from anything else, while the latter seems too strong given our current state of knowledge. But what else might you be saying?
any idealized or normative version of them would still have an effective description (normatively speaking).
Similar question here. Are you saying anything beyond the claim that any idealized or normative way of thinking about morality is still an algorithm?
but if you granted 1 (e.g., granted that ‘a function that takes inequitable distributions of resources between equally deserving agents into equitable distributions thereof’ is not a crazy candidate meaning for the English word ‘fairness’), would you still disagree with 2 and 3?
If I grant 1, I currently can’t think of any objections to 2 and 3 (which doesn’t mean that I won’t if I took 1 more seriously and therefore had more incentive to look for such objections).
And do you think that morality is unusual in failing 1-style regimentation, or do you think that we’ll eventually need to ditch nearly all English-language terms if we are to attain rigor?
I think at a minimum, it’s unusually difficult to do 1-style regimentation for morality (and Eliezer himself explained why in Unnatural Categories). I guess one point I’m trying to make is that whatever kind of reasoning we’re using to attempt this kind of regimentation is not the same kind of reasoning that we use to think about some logical object after we have regimented it. Does that make sense?
A metaphysical thesis: These regimentations of moral terms do not commit us to implausible magical objects like Divine Commands or Irreducible ‘Oughtness’ Properties
If oughtness, normativity, isn’t irreducible, it’s either reducible or nonexistent. If it’s nonexistent, how can you have morality at all? If it’s reducible, where’s the reduction?
RobbBB probably knows this, but I’d just like to mention that the three claims listed above, at least as stated there, are common to many metaethical approaches, not just Eliezer’s. Desirism is one example. Other examples include the moral reductionisms of Richard Brandt, Peter Railton, and Frank Jackson.
By “morality” you seem to mean something like ‘the set of judgments about mass wellbeing ordinary untrained humans arrive at when prompted.’ This is about like denying the possibility of arithmetic because people systematically make errors in mathematical reasoning. When the Pythagoreans reasoned about numbers, they were not being ‘sufficiently careful;’ they did not rigorously define what it took for something to be a number or to have a solution, or stipulate exactly what operations are possible; and they did not have a clear notion of the abstract/concrete distinction, or of which of these two domains ‘number’ should belong to. Quite plausibly, Pythagoreans would arrive at different solutions in some cases based on their state of mind or the problems’ framing; and certainly Pythagoreans ran into disagreements they could not resolve and fell into warring camps as a result, e.g., over whether there are irrational numbers.
But the unreasonableness of the disputants, no matter how extreme, cannot infect the subject matter and make that subject matter intrinsically impossible to carefully reason with. No matter how extreme we make the Pythagoreans’ eccentricities, as long as they continue to do something math-ish, it would remain possible for a Euclid or Yudkowsky to arise from the sea-foam and propose a regimentation of their intuitions, a more carefully formalized version of their concepts of ‘number,’ ‘ratio,’ ‘proof,’ etc.
I take it that Eliezer thinks we are very much in the position today of inhabiting a global, heavily schismatized network of Pythagorean Cults of Morality. Those cults are irrational, and their favored concepts would need to be made more precise and careful before the questions they ask could be assigned determinate answers (even in principle). But the subject matter those cults are talking about—how to cultivate human well-being, how to distribute resources equitably, how to balance preferences in a way most people would prefer, etc. -- is not intrinsically irrational or mystical or ineffable. The categories in question are tracking real property clusters, though perhaps not yet with complete applicability-to-any-old-case; no matter how much of a moral anti-realist you are, for instance, you can’t reasonably hold that ‘fairness’ doesn’t have its own set of satisfaction conditions that fail to coincide with other moral (or physical, mathematical, etc.) concepts.
Another way of motivating the idea that morality is ‘logical’: Decision theory is ‘logical’, and morality is a special sort of decision theory. If we can carefully regiment the satisfaction conditions for an individual’s preferences, then we can regiment the satisfaction conditions for the preferences of people generally; and we can isolate the preferences that people consider moral vs. amoral; and if we can do all that, what skeptical challenge could block an algorithm that recognizably maps what we call ‘fair’ and ‘unfair’ and ‘moral’ and ‘immoral,’ that couldn’t equally well block an algorithm that recognizably maps what we call ‘preferred’ and ‘distasteful’ and ‘delicious’...? How carelessly do people have to reason with x such that we can conclude that it’s impossible to reason carefully with x?
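A minimal sketch of the progression being described (mine, assuming purely for illustration that regimented preferences can be modeled as utility functions and aggregated by summation): ordinary decision theory satisfies one agent’s preferences, and a candidate ‘moral’ decision rule is the same machinery run over everyone’s preferences.

```python
from typing import Callable, Dict, List

Outcome = str
PreferenceOrder = Callable[[Outcome], float]   # one agent's regimented preferences

def best_for(agent: PreferenceOrder, options: List[Outcome]) -> Outcome:
    """Ordinary prudential decision theory: satisfy one agent's preferences."""
    return max(options, key=agent)

def best_overall(agents: Dict[str, PreferenceOrder],
                 options: List[Outcome]) -> Outcome:
    """One candidate 'moral' decision rule: aggregate everyone's preferences,
    here by simple summation (only one of many possible aggregation rules)."""
    return max(options, key=lambda o: sum(u(o) for u in agents.values()))

alice = {"cake": 3.0, "pie": 1.0}.get
bob = {"cake": 0.0, "pie": 3.0}.get
options = ["cake", "pie"]
print(best_for(alice, options))                              # cake
print(best_overall({"alice": alice, "bob": bob}, options))   # pie
```

Whatever skeptical challenge blocks the aggregate version seems to apply just as well to the single-agent version; that is the parallel being drawn.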
But the unreasonableness of the disputants, no matter how extreme, cannot infect the subject matter and make that subject matter intrinsically impossible to carefully reason with.
I think I’ve been careful not to claim that morality is impossible to carefully reason with, but just that we don’t know how to carefully reason with it yet, and that, given our current state of knowledge, it may turn out to be impossible to carefully reason with.
Another way of motivating the idea that morality is ‘logical’: Decision theory is ‘logical’, and morality is a special sort of decision theory.
With decision theory, we’re also in a “non-logical” state of reasoning, where we don’t yet have a logical definition of what constitutes correct decision theory and therefore can’t just apply logical reasoning. What’s helpful in the case of decision theory is that it seems reasonable to assume that when we do come up with such a logical definition, it will be relatively simple. This helps tremendously in guiding our search, and partly compensates for the fact that we do not know how to reason carefully during this search. But with “morality”, we don’t have this crutch since we think it may well be the case that “value is complex”.
we don’t know how to carefully reason with it yet and given our current state of knowledge, it may turn out to be impossible to carefully reason with.
I agree that it’s going to take a lot of work to fully clarify our concepts. I might be able to assign a less remote probability to ‘morality turns out to be impossible to carefully reason with’ if you could give an example of a similarly complex human discourse that turned out in the past to be ‘impossible to carefully reason with’.
High-quality theology is an example of the opposite; we turned out to be able to reason very carefully (though admittedly most theology is subpar) with slightly regimented versions of concepts in natural religion. At least, there are some cases where the regimentation was not completely perverse, though the crazier examples may be more salient in our memories. But the biggest problem with theology was metaphysical, not semantic; there just weren’t any things in the neighborhood of our categories for us to refer to. If you have no metaphysical objections to Eliezer’s treatment of morality beyond your semantic objections, then you don’t think a regimented morality would be problematic for the reasons a regimented theology would be. So what’s a better example of a regimentation that would fail because we just can’t be careful about the topic in question? What symptoms and causes would be diagnostic of such cases?
What’s helpful in the case of decision theory is that it seems reasonable to assume that when we do come up with such a logical definition, it will be relatively simple.
By comparison, perhaps. But it depends a whole lot on what we mean by ‘morality’. For instance, do we mean one of the following?
Morality is the hypothetical decision procedure that, if followed, tends to maximize the amount of positively valenced experience in the universe relative to negatively valenced experience, to a greater extent than any other decision procedure.
Morality is the hypothetical decision procedure that, if followed, tends to maximize the occurrence of states of affairs that agents prefer relative to states they do not prefer (taking into account that agents generally prefer not to have their preferences radically altered).
Morality is any decision procedure that anyone wants people in general to follow.
Morality is the human tendency to construct and prescribe rules they want people in general to follow.
Morality is anything that English-language speakers call “morality” with a certain high frequency.
If “value is complex,” that’s a problem for prudential decision theories based on individual preferences, just as much as it is for agent-general moral decision theories. But I think we agree both that there’s a long way to go in regimenting decision theory, and that there’s some initial plausibility and utility in trying to regiment a moralizing class of decision theories; whether we call this regimenting procedure ‘logicizing’ is just a terminological issue.
But it depends a whole lot on what we mean by ‘morality’.
What I mean by “morality” is the part of normativity (“what you really ought, all things considered, to do”) that has to do with values (as opposed to rationality).
I agree that it’s going to take a lot of work to fully clarify our concepts. I might be able to assign a less remote probability to ‘morality turns out to be impossible to carefully reason with’ if you could give an example of a similarly complex human discourse that turned out in the past to be ‘impossible to carefully reason with’.
In general, I’m not sure how to show a negative like “it’s impossible to reason carefully about subject X”, so the best I can do is exhibit some subject that people don’t know how to reason carefully about and intuitively seems like it may be impossible to reason carefully about. Take the question, “Which sets really exist?” (Do large cardinals exist, for example?) Is this a convincing example to you of another subject that may be impossible to reason carefully about?
I take it that Eliezer thinks we are very much in the position today of inhabiting a global, heavily schismatized network of Pythagorean Cults of Morality. Those cults are irrational, and their favored concepts would need to be made more precise and careful before the questions they ask could be assigned determinate answers (even in principle). But the subject matter those cults are talking about—how to cultivate human well-being, how to distribute resources equitably, etc. -- is not intrinsically irrational or mystical or ineffable. The categories in question are tracking real property clusters, though perhaps not yet with complete applicability-to-any-old-case; no matter how much of a moral anti-realist you are, for instance, you can’t reasonably hold that ‘fairness’ doesn’t have its own set of satisfaction conditions that fail to coincide with other moral (or physical, mathematical, etc.) concepts.
Haven’t we been in this position since before mathematics was a thing? The lack of progress towards consensus in that period of time seems disheartening.
The natural number line is one of the simplest structures a human being is capable of conceiving. The idea of a human preference is one of the most complex structures a human being has yet encountered. And we have a lot more emotional investment and evolutionary baggage interfering with carefully axiomatizing our preferences than with carefully axiomatizing the numbers. Why should we be surprised that we’ve made more progress with regimenting number theory than with regimenting morality or decision theory in the last few thousand years?
In terms of moral theory, we appear to have made no progress at all. We don’t even agree on definitions.
Mathematics may or may not be an empirical discipline, but if you get your math wrong badly enough, you lose the ability to pay rent.
If morality paid rent in anticipated experience, I’d expect societies that had more correct morality to do better and societies with less correct morality to do worse. Morality is so important that I expect marginal differences to have major impact. And I just don’t see the evidence that such an impact is or ever did happen.
So, have I misread history? Or have I made a mistake in predicting that chance differences in morality should have major impacts on the prosperity of a society? (Or some other error?)
In terms of moral theory, we appear to have made no progress at all. We don’t even agree on definitions.
But defining terms is the trivial part of any theory; if you concede that we haven’t even gotten that far (and that term-defining is trivial), then you’ll have a much harder time arguing that if we did agree on definitions we’d still have made no progress. You can’t argue that, because if we all have differing term definitions, then that on its own predicts radical disagreement about almost anything; there is no need to posit a further explanation.
If morality paid rent in anticipated experience
Morality pays rent in anticipated experience in the same three basic ways that mathematics does:
Knowing about morality helps us predict the behavior of moralists, just as knowing about mathematics helps us predict the behavior of mathematicians (including their creations). If you know that people think murder is bad, that helps you predict (and explain) why murder is so rare; just as knowing mathematicians’ beliefs about natural numbers helps us predict what funny squiggly lines will occur on calculators. This, of course, doesn’t require any commitment to moral realism, just as it doesn’t require a commitment to mathematical realism.
Inasmuch as the structure of moral reasoning mirrors the structure of physical systems, we can predict how physical systems will change based on what our moral axioms output. For instance, if our moral axioms are carefully tuned to parallel the distribution of suffering in the world, we can use them to predict what sorts of brain-states will be physically instantiated if we perform certain behaviors. Similarly, if our number axioms are carefully tuned to parallel the changes in physical objects (and heaps thereof) in the world, we can use them to predict how physical objects will change when we translate them in spacetime.
Inasmuch as our intuitions give rise to our convictions about mathematics and morality, we can use the aforementioned convictions to predict our own future intuitions. In particular, an especially regimented mathematics or morality, that arises from highly intuitive axioms we accept, will often allow us to algorithmically generate what we would reflectively find most intuitive before we can even process the information sufficiently to generate the intuition. A calculator gives us the most intuitive and reflectively stable value for 142857 times 7 before we’ve gone to the trouble of understanding why or that this is the most intuitive value; similarly, a sufficiently advanced utility-calculator, programmed with the rules you find most reflectively intuitive, would generate the ultimately intuitive answers for moral dilemmas before you’d even gone to the trouble of figuring out on your own what you find most intuitive. And your future intuitions are future experiences; so the propositions of mathematics and morality, interestingly enough, serve as predictors for your own future mental states, at least when those mental states are sufficiently careful and thought out.
But all of these are to some extent indirect. It’s not as though we directly observe that SSSSSSS0 is prime, any more than we directly observe that murder is bad. We either take it as a given, or derive it from something else we take as a given; but regardless, there can be plenty of indirect ways that the ‘logical’ discourse in question helps us better navigate, manipulate, and predict our environments.
If morality paid rent in anticipated experience, I’d expect societies that had more correct morality to do better and societies with less correct morality to do worse.
There’s a problem here: What are we using to evaluate ‘doing better’ vs. ‘doing worse’? We often use moral superiority itself as an important measure of ‘betterness;’ we think it’s morally right or optimal to maximize human well-being, so we judge societies that do a good job of this as ‘better.’ At the very least, moral considerations like this seem to be a part of what we mean by ‘better.’ If you’re trying to bracket that kind of success, then it’s not clear to me what you even mean by ‘better’ or ‘prosperity’ here. Are you asking whether moral fortitude correlates with GDP?
(Some common senses of “moral fortitude” definitely cause GDP, at minimum in the form of trust between businesspeople and less predatory bureaucrats. But this part is equally true of Babyeaters.)