I take it that Eliezer thinks we are very much in the position today of inhabiting a global, heavily schismatized network of Pythagorean Cults of Morality. Those cults are irrational, and their favored concepts would need to be made more precise and careful before the questions they ask could be assigned determinate answers (even in principle). But the subject matter those cults are talking about (how to cultivate human well-being, how to distribute resources equitably, and so on) is not intrinsically irrational or mystical or ineffable. The categories in question are tracking real property clusters, though perhaps not yet with complete applicability-to-any-old-case; no matter how much of a moral anti-realist you are, for instance, you can’t reasonably deny that ‘fairness’ has its own set of satisfaction conditions, ones that fail to coincide with those of other moral (or physical, mathematical, etc.) concepts.
Haven’t we been in this position since before mathematics was a thing? The lack of progress towards consensus in that period of time seems disheartening.
The natural number line is one of the simplest structures a human being is capable of conceiving. The idea of a human preference is one of the most complex structures a human being has yet encountered. And we have a lot more emotional investment and evolutionary baggage interfering with carefully axiomatizing our preferences than with carefully axiomatizing the numbers. Why should we be surprised that we’ve made more progress with regimenting number theory than with regimenting morality or decision theory in the last few thousand years?
In terms of moral theory, we appear to have made no progress at all. We don’t even agree on definitions.
Mathematics may or may not be an empirical discipline, but if you get your math wrong badly enough, you lose the ability to pay rent.
If morality paid rent in anticipated experience, I’d expect societies that had more correct morality to do better and societies with less correct morality to do worse. Morality is so important that I expect marginal differences to have major impact. And I just don’t see the evidence that such an impact is happening or ever did happen.
So, have I misread history? Or have I made a mistake in predicting that chance differences in morality should have major impacts on the prosperity of a society? (Or some other error?)
In terms of moral theory, we appear to have made no progress at all. We don’t even agree on definitions.
But defining terms is the trivial part of any theory. If you concede that we haven’t even gotten that far (and that defining terms is trivial), then it becomes much harder to argue that we’d still have made no progress even if we did agree on definitions. You can’t argue that, because if we all define our terms differently, that by itself predicts radical disagreement about almost everything; no further explanation needs to be posited.
If morality paid rent in anticipated experience
Morality pays rent in anticipated experience in the same three basic ways that mathematics does:
First, knowing about morality helps us predict the behavior of moralists, just as knowing about mathematics helps us predict the behavior of mathematicians (including their creations). If you know that people think murder is bad, that helps you predict how rare murder will be, just as knowing mathematicians’ beliefs about natural numbers helps us predict what funny squiggly lines will occur on calculators. This, of course, doesn’t require any commitment to moral realism, just as it doesn’t require a commitment to mathematical realism.
Second, inasmuch as the structure of moral reasoning mirrors the structure of physical systems, we can predict how physical systems will change based on what our moral axioms output. For instance, if our moral axioms are carefully tuned to parallel the distribution of suffering in the world, we can use them to predict what sorts of brain-states will be physically instantiated if we perform certain behaviors. Similarly, if our number axioms are carefully tuned to parallel the changes in physical objects (and heaps thereof) in the world, we can use them to predict how physical objects will change when we translate them in spacetime.
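To make the number-axiom half of that parallel concrete, here is a minimal sketch (the pebble heaps and the peano_add helper are invented purely for illustration): successor-style arithmetic “predicts” what we will count after physically pushing two heaps together.

```python
# Toy illustration (not from the original text): Peano-style successor
# arithmetic "predicting" what happens when two heaps of pebbles are
# physically combined. The heaps stand in for the physical system;
# the successor function stands in for the axioms.

def peano_add(m: int, n: int) -> int:
    """Addition defined only via successor (+1), mirroring the axioms
    m + 0 = m and m + S(n) = S(m + n)."""
    result = m
    for _ in range(n):
        result += 1  # apply the successor function n times
    return result

heap_a = ["pebble"] * 3   # a physical heap of three pebbles
heap_b = ["pebble"] * 4   # a physical heap of four pebbles

predicted = peano_add(len(heap_a), len(heap_b))  # what the axioms output
observed = len(heap_a + heap_b)                  # what we count after combining the heaps

assert predicted == observed == 7
```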
Third, inasmuch as our intuitions give rise to our convictions about mathematics and morality, we can use those convictions to predict our own future intuitions. In particular, an especially regimented mathematics or morality, one that arises from highly intuitive axioms we accept, will often allow us to algorithmically generate what we would reflectively find most intuitive before we can even process the information sufficiently to generate the intuition. A calculator gives us the most intuitive and reflectively stable value for 142857 times 7 before we’ve gone to the trouble of working out that (or why) this is the most intuitive value; similarly, a sufficiently advanced utility-calculator, programmed with the rules you find most reflectively intuitive, would generate the ultimately intuitive answers for moral dilemmas before you’d even gone to the trouble of figuring out on your own what you find most intuitive. And your future intuitions are future experiences; so the propositions of mathematics and morality, interestingly enough, serve as predictors for your own future mental states, at least when those mental states are sufficiently careful and thought out.
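As a rough sketch of that analogy, assuming nothing beyond the text above: the utility_calculator function, its rule weights, and the dilemma encoding are all hypothetical, invented only to illustrate how pre-programmed intuitive rules could output a verdict before the user has introspected on the case.

```python
# Toy sketch of the calculator / utility-calculator analogy.

print(142857 * 7)  # 999999 -- the calculator delivers the answer
                   # before we have worked it out ourselves

def utility_calculator(outcome: dict) -> float:
    """Hypothetical scorer programmed with a few rules its user already
    finds reflectively intuitive; given an encoded outcome, it returns a
    verdict the user would (on reflection) endorse."""
    score = 0.0
    score -= 10.0 * outcome.get("deaths", 0)          # rule: deaths are very bad
    score -= 1.0 * outcome.get("broken_promises", 0)  # rule: promise-breaking is bad
    score += 0.5 * outcome.get("people_helped", 0)    # rule: helping is good
    return score

# A "dilemma": compare two encoded outcomes and report the verdict
# before the user has figured out what they find most intuitive.
option_a = {"deaths": 0, "broken_promises": 1, "people_helped": 3}
option_b = {"deaths": 1, "broken_promises": 0, "people_helped": 5}
print(max([option_a, option_b], key=utility_calculator))  # prints option_a
```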
But all of these are to some extent indirect. It’s not as though we directly observe that SSSSSSS0 is prime, any more than we directly observe that murder is bad. We either take it as a given, or derive it from something else we take as a given; but regardless, there can be plenty of indirect ways that the ‘logical’ discourse in question helps us better navigate, manipulate, and predict our environments.
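For instance, here is a minimal sketch of how “SSSSSSS0 is prime” gets derived rather than observed (the decode_successor helper and the trial-division check are invented for illustration, not part of the original text):

```python
# We never "observe" that SSSSSSS0 is prime; we derive it from the
# successor notation plus a definition of primality.

def decode_successor(numeral: str) -> int:
    """Interpret a numeral like 'SSSSSSS0' as the number of S's applied to 0."""
    assert numeral.endswith("0") and set(numeral[:-1]) <= {"S"}
    return len(numeral) - 1

def is_prime(n: int) -> bool:
    """Trial-division definition of primality."""
    return n >= 2 and all(n % d for d in range(2, n))

print(is_prime(decode_successor("SSSSSSS0")))  # True: 7 is prime
```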
If morality paid rent in anticipated experience, I’d expect societies that had more correct morality to do better and societies with less correct morality to do worse.
There’s a problem here: What are we using to evaluate ‘doing better’ vs. ‘doing worse’? We often use moral superiority itself as an important measure of ‘betterness’; we think it’s morally right or optimal to maximize human well-being, so we judge societies that do a good job of this as ‘better.’ At the very least, moral considerations like this seem to be a part of what we mean by ‘better.’ If you’re trying to bracket that kind of success, then it’s not clear to me what you even mean by ‘better’ or ‘prosperity’ here. Are you asking whether moral fortitude correlates with GDP?
(Some common senses of “moral fortitude” definitely cause GDP, at minimum in the form of trust between businesspeople and less predatory bureaucrats. But this part is equally true of Babyeaters.)