I’d split up Eliezer’s view into several distinct claims:
A semantic thesis: Logically regimented versions of fairness, harm, obligation, etc. are reasonable semantic candidates for moral terms. They may not be what everyone actually means by ‘fair’ and ‘virtuous’ and so on, but they’re modest improvements in the same way that a rigorous genome-based definition of Canis lupus familiaris would be a reasonable improvement upon our casual, everyday concept of ‘dog,’ or that a clear set of thermodynamic thresholds would be a reasonable regimentation of our everyday concept ‘hot.’
A metaphysical thesis: These regimentations of moral terms do not commit us to implausible magical objects like Divine Commands or Irreducible ‘Oughtness’ Properties In Our Fundamental Physics. All they commit us to are the ordinary objects of physics, logic, and mathematics, e.g., sets, functions, and causal relationships; and sets, functions, and causality are not metaphysically objectionable.
A normative thesis: It is useful to adopt moralityspeak ourselves, provided we do so using a usefully regimented semantics. The reasons to refuse to talk in a moral idiom are, in part thanks to 1 and 2, not strong enough to outweigh the rhetorical and self-motivational advantages of adopting such an idiom.
It seems clear to me that you disagree with thesis 1; but if you granted 1 (e.g., granted that ‘a function that takes inequitable distributions of resources between equally deserving agents into equitable distributions thereof’ is not a crazy candidate meaning for the English word ‘fairness’), would you still disagree with 2 and 3? And do you think that morality is unusual in failing 1-style regimentation, or do you think that we’ll eventually need to ditch nearly all English-language terms if we are to attain rigor?
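(To make the parenthetical example concrete: here is a minimal sketch, in Python, of the kind of function thesis 1 treats as a candidate meaning for ‘fairness.’ The equal-split rule is the simplest possible stand-in for ‘equitable,’ and every name in the sketch is hypothetical, chosen purely for illustration.)

```python
# A toy regimentation of 'fairness': a function taking inequitable
# distributions of resources between equally deserving agents into
# equitable distributions thereof. The equal-split rule and all names
# here are illustrative simplifications, not part of anyone's theory.

def fair_redistribution(allocation: dict[str, float]) -> dict[str, float]:
    """Map any distribution among equally deserving agents to the equal split."""
    total = sum(allocation.values())
    share = total / len(allocation)
    return {agent: share for agent in allocation}

# An inequitable distribution...
before = {"alice": 9.0, "bob": 1.0, "carol": 2.0}
# ...is taken to the equitable one: 4.0 apiece, same total.
after = fair_redistribution(before)
assert sum(after.values()) == sum(before.values())
```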
Eliezer’s standard use of ‘logical’ takes the ‘abstract’ part of logicalish vibes and runs with it; he adopts the convention that sufficiently careful purely abstract reasoning (i.e., reasoning without reasoning about any particular spatiotemporal thing or pattern) is ‘logical,’ whereas reasoning about concrete things-in-the-world is ‘physical.’
I like this splitup! I think I want to make a slightly stronger claim than this; i.e., that by logical discourse we’re thinning down a universe of possible models using axioms.
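A minimal propositional sketch of what I mean by ‘thinning down’: enumerate every candidate model, then keep only those on which all the axioms hold. The two axioms below are arbitrary examples, standing in for whatever axioms are actually at stake.

```python
from itertools import product

atoms = ["p", "q", "r"]

# Arbitrary example axioms: p-or-q, and p-implies-r.
axioms = [
    lambda m: m["p"] or m["q"],
    lambda m: (not m["p"]) or m["r"],
]

# The 'universe of possible models': all 2**3 truth assignments.
universe = [dict(zip(atoms, values)) for values in product([False, True], repeat=3)]

# Each axiom rules some candidates out; the survivors are exactly the
# models of the axiom set.
models = [m for m in universe if all(axiom(m) for axiom in axioms)]
print(f"{len(universe)} possible models thinned down to {len(models)}")  # 8 -> 4
```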
One thing I didn’t go into in this epistemology sequence is the notion of ‘effectiveness’ or ‘formality’. It’s important, but I said less about it because my take on it feels much more standard; I’m not sure I have anything more to say about what constitutes an ‘effective’ formula or axiom or computation or physical description than other workers in the field. This notion carries a lot of the load in reductionism in practice; e.g., the problem with irreducible fear is that you have to appeal to your own brain’s native fear mechanisms to carry out predictions about it, and you can never write down what it looks like. But after we’re done being effective, there’s still the question of whether we’re navigating to a part of the physical universe or narrowing down mathematical models, and by ‘logical’ I mean to refer to the latter sort of thing rather than the former. The load of talking about ‘sufficiently careful reasoning’ is mostly carried by ‘effective’ as distinguished from empathy-based predictions, appeals to implicit knowledge, and so on.
I also don’t claim to have given morality an effective description—my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms—but the metaphysical and normative claim is that these reasons-for-action both have an effective description (descriptively speaking) and that any idealized or normative version of them would still have an effective description (normatively speaking).
Let me try a different tack in my questioning, as I suspect your claim lies along a different axis than the one I described in the sibling comment. So far you’ve introduced a bunch of “moving parts” for your metaethical theory:
moral arguments
implicit reasons-for-action
effective descriptions of reasons-for-action
utility function
But I don’t understand how these are supposed to fit together, in an algorithmic sense. In decision theory we also have missing modules or black boxes, but at least we specify their types and how they interact with the other components, so we can have some confidence that everything might work once we fill in the blanks. Here, what are the types of each of your proposed metaethical objects? What’s the “controlling algorithm” that takes moral arguments and implicit reasons-for-action, and produces effective descriptions of reasons-for-action, and eventually the final utility function?
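To be concrete about the kind of answer I’m asking for, here is the shape of skeleton I have in mind, with the black boxes left as black boxes. Every name and type in it is hypothetical; the point is only that an answer would have to fill in something of this shape.

```python
from typing import Callable

# Hypothetical type skeleton for the 'moving parts' above; nothing here
# is a proposal about what the actual types are.
MoralArgument = str                      # e.g., an appeal to a shared reason-for-action
ImplicitReason = Callable[[str], bool]   # opaque; implemented by native brain mechanisms
EffectiveReason = str                    # an explicit, written-down description
Outcome = str
UtilityFunction = Callable[[Outcome], float]

def reduce_reasons(
    arguments: list[MoralArgument],
    implicit: list[ImplicitReason],
) -> list[EffectiveReason]:
    """Black box 1: how do moral arguments act on implicit reasons-for-action
    to yield effective descriptions of them?"""
    raise NotImplementedError

def aggregate(reasons: list[EffectiveReason]) -> UtilityFunction:
    """Black box 2: how do effective reasons-for-action get combined into
    the final utility function?"""
    raise NotImplementedError
```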
As you argued in Unnatural Categories (which I keep citing recently), reasons-for-action can’t be reduced the same way natural categories can. But it seems completely opaque to me how they are supposed to be reduced, beyond the fact that moral arguments are somehow involved.
Am I asking for too much? Perhaps you are just saying that these must be the relevant parts, and that we should figure out both how they work internally and how they fit together?
my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms
So would it be fair to say that your actual moral arguments do not consist of sufficiently careful reasoning?
these reasons-for-action both have an effective description (descriptively speaking)
Is there a difference between this claim and the claim that our actual cognition about morality can be described as an algorithm? Or are you saying that these reasons-for-action constitute (currently unknown) axioms which together form a consistent logical system?
Can you see why I might be confused? The former interpretation is too weak to distinguish morality from anything else, while the latter seems too strong given our current state of knowledge. But what else might you be saying?
any idealized or normative version of them would still have an effective description (normatively speaking).
Similar question here. Are you saying anything beyond the claim that any idealized or normative way of thinking about morality is still an algorithm?
but if you granted 1 (e.g., granted that ‘a function that takes inequitable distributions of resources between equally deserving agents into equitable distributions thereof’ is not a crazy candidate meaning for the English word ‘fairness’), would you still disagree with 2 and 3?
If I grant 1, I currently can’t think of any objections to 2 and 3 (which doesn’t mean that I wouldn’t if I took 1 more seriously and therefore had more incentive to look for such objections).
And do you think that morality is unusual in failing 1-style regimentation, or do you think that we’ll eventually need to ditch nearly all English-language terms if we are to attain rigor?
I think at a minimum, it’s unusually difficult to do 1-style regimentation for morality (and Eliezer himself explained why in Unnatural Categories). I guess one point I’m trying to make is that whatever kind of reasoning we’re using to attempt this kind of regimentation is not the same kind of reasoning that we use to think about some logical object after we have regimented it. Does that make sense?
A metaphysical thesis: These regimentations of moral terms do not commit us to implausible magical objects like Divine Commands or Irreducible ‘Oughtness’ Properties
If oughtness, normativity, isn’t irreducible, it’s either reducible or nonexistent. If it’s nonexistent, how can you have morality at all? If it’s reducible, where’s the reduction?
RobbBB probably knows this, but I’d just like to mention that the three claims listed above, at least as stated there, are common to many metaethical approaches, not just Eliezer’s. Desirism is one example. Other examples include the moral reductionisms of Richard Brandt, Peter Railton, and Frank Jackson.