I agree with the thrust and most of the content of the post, but in the interest of strengthening it, I looked at your list of problems and want to point out what I see as gaps/weaknesses.
For the first one, keep in mind that it took centuries to get from the first attempts at a temperature scale to the modern thermodynamic definition of temperature and reliable thermometers. The definition is kinda weird and unintuitive, and strictly speaking it runs from 0 to infinity, then discontinuously jumps to negative infinity (but only for some kinds of finite systems), then rises back toward negative zero (I always found this funny when playing The Sims 3, since it had a "−1K Refrigerator"). Humans knew things got hot and cold for many, many millennia before figuring out temperature in a principled way. Morality could plausibly be similar.
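That weird behavior of the thermodynamic definition (1/T = dS/dE) can be seen in a minimal sketch: a toy system of N two-level spins, where entropy first rises and then falls with energy, so the temperature runs up to +∞ at half-filling, jumps to −∞, and climbs back toward negative zero. All names and parameter choices here are illustrative, not from the post.

```python
import math

def entropy(N, n):
    # S/k = ln(N choose n): microcanonical entropy of N two-level spins
    # with n of them excited (energy E = n in units where eps = 1)
    return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)

def temperature(N, n):
    # 1/T = dS/dE, approximated by a central finite difference in n
    dS_dE = (entropy(N, n + 1) - entropy(N, n - 1)) / 2.0
    return 1.0 / dS_dE  # in units of eps/k

N = 1000
for n in [100, 499, 501, 900]:
    print(f"n = {n:4d}  T = {temperature(N, n):10.2f}")
```

Below half-filling (n < 500) the temperature is positive and blows up as n approaches N/2; just above half-filling it reappears as a huge negative number and shrinks toward 0⁻ as the system fills up, exactly the discontinuous jump described above.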
The third and fourth seem easily explainable by bounded rationality, in the same way that “ability to build flying machines and quantum computers” and “ability to identify and explain the fundamental laws of physical reality” vary between individuals, cultures, and societies.
For the fifth, there’s no theoretical requirement that something real should have only a small number of principles necessary for human-scale application. Occam’s Razor cuts against anyone proposing a fundamentally complex thing, but it is possible for a simple underlying set of principles to be incredibly complicated to use in practice. I would argue that most attempts to formalize morality, from Kant to Bentham, have this problem, and one of the common ways they go wrong is that people try to apply them without recognizing that complexity.
The sixth seems like a complete non-sequitur to me. If moral realism were true, then people should be morally good. But why would they be? Even if there were somehow a satisfying answer to the second problem of imposing an obligation, this does not necessarily provide an actual mechanism to compel action, or a tendency to act, to fulfil the obligation. In fact, at least some traditional moral realist frameworks, like Judeo-Christian God-as-Lawgiver religion, explicitly avoid having such a mechanism.
The whole field of meta-ethics is bogus and produces verbiage that isn’t helpful. The last paragraph here hits the nail on the head: an interlocutor can grant basically anything about morality, but as long as there is no enforcement mechanism, it’s simply irrelevant.
Any non-supernatural enforcement mechanism just turns compliance/non-compliance into a game-theoretic problem. And even if there were a supernatural enforcement mechanism, people would just cooperate out of calculation. In that case morality and altruism get divorced: morality loses its emotional appeal and, if you think about it long enough, reduces to individual selfishness, which chokes off the motivation to care about morality in the first place.
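The "cooperate out of calculation" point can be made precise with a standard toy model (not from the post): a purely selfish agent in an iterated prisoner's dilemma facing a grim-trigger partner cooperates exactly when the discounted future payoff makes it worth it, with no altruism anywhere in the calculation. Payoff values and the `delta` discount factor are illustrative assumptions.

```python
# Standard prisoner's dilemma payoffs: T > R > P > S
# (temptation, reward for mutual cooperation, punishment, sucker's payoff)
T, R, P, S = 5, 3, 1, 0

def cooperation_is_rational(delta):
    # Against a grim-trigger partner (cooperates until you defect once,
    # then defects forever), a selfish agent compares two income streams:
    #   always cooperate: R every round  -> R / (1 - delta)
    #   defect now:       T once, then P forever -> T + delta * P / (1 - delta)
    always_cooperate = R / (1 - delta)
    defect_once = T + delta * P / (1 - delta)
    return always_cooperate >= defect_once

print(cooperation_is_rational(0.2))  # short horizon: defection pays
print(cooperation_is_rational(0.9))  # long shadow of the future: "cooperation"
```

With these payoffs the threshold is delta ≥ (T − R)/(T − P) = 0.5: whether the agent "behaves morally" is purely a function of how much it values future rounds, which is exactly the divorce between morality and altruism described above.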
If I’m understanding you correctly, then I strongly disagree about what ethics and meta-ethics are for, as well as what “individual selfishness” means. The questions I care about flow from “What do I care about, and why?” and “How much do I think others should or will care about these things, and why?” Moral realism and amoral nihilism are far from the only options, and neither are ones I’m interested in accepting.