The One True Form of Moral Progress (according to me) is using careful philosophical reasoning to figure out what our values should be, what morality consists of, where our current moral beliefs are wrong, or generally, the contents of normativity (what we should and shouldn’t do).
Have you written about this? This sounds very wrong to me.
The One True Form of Moral Progress (according to me) is using careful philosophical reasoning to figure out what our values should be, what morality consists of, where our current moral beliefs are wrong, or generally, the contents of normativity (what we should and shouldn’t do). Does this still seem wrong to you?
The basic justification for this: for any moral “progress” or change that is not based on careful philosophical reasoning, how can we know that it’s actually a change for the better? I don’t think I’ve written a post specifically about this, but Morality is Scary is related, in that it complains that most other kinds of moral change seem to be caused by status games amplifying random aspects of human values or motivation.
Are you interested in hearing other people’s answers to these questions (if they think they have them)?
Yes. I plan to write down my views properly at some point. But roughly I subscribe to non-cognitivism.
Moral questions are not well defined because they are written in ambiguous natural language, so they are not truth-apt. Now you could argue that many reasonable questions are also ambiguous in this sense. E.g., the question “how many people live in Sweden?” is ultimately ambiguous because it is not written in a formal system (i.e., the borders of Sweden are not defined down to the atomic level).
But you could in theory define the Sweden question in formal terms. You could arbitrarily stipulate how many nanoseconds after conception a fetus becomes a person, and resolve all the other ambiguities, until the only work left would be the empirical measurement of a well-defined quantity.
And technically you could do the same for any moral question. But unlike the Sweden question, it would be hard to pick formal definitions that everyone can agree are reasonable. You could try to formally define the terms in “what should our values be?” Then the philosophical question becomes “what is the formal definition of ‘should’?” But that question suffers from the same ambiguity, so you must then define its terms, and so on in an endless recursion. It seems to me that there cannot be any One True resolution to this; at some point you just have to pick some definitions arbitrarily.
The underlying philosophy here is that for a question to be one on which you can make progress, it must be one where some answers can be shown to be correct and others incorrect: questions where two people who disagree in good faith will reliably converge by understanding each other’s views; questions where two aliens from different civilizations would reliably give the same answer without communicating. And the only questions like this seem to be those defined in formal systems.
Choosing definitions does not seem to be that kind of question. So resolving the ambiguities in moral questions is not something on which progress can be made, and we will never finally arrive at the One True answer to moral questions.
Ok, I see where you’re coming from, but I think you’re being overconfident about non-cognitivism. My current position is that non-cognitivism is plausible, but we can’t be very sure that it is true, and making progress on this meta-ethical question also requires careful philosophical reasoning. These two posts of mine are relevant on this topic: Six Plausible Meta-Ethical Alternatives, Some Thoughts on Metaphilosophy.