I’m not quite sure what distinction you’re making. I’m a programmer—if I define a function public int calculateMoralityOf(Behaviour b), what exactly is the definition of that function if not its contents?
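To make the intuition concrete, here’s a minimal sketch (the Behaviour interface and the scoring inside it are hypothetical stand-ins invented purely for illustration):

```java
// Hypothetical stand-in type, invented only for this example.
interface Behaviour {
    boolean harmsOthers();
    boolean keepsPromises();
}

class MoralSense {
    // On the "definition = contents" intuition, the definition of this
    // method just *is* the body below: remove the body and nothing is
    // left that pins down what the method means.
    public int calculateMoralityOf(Behaviour b) {
        int score = 0;                        // arbitrary illustrative scoring
        if (b.harmsOthers())   score -= 10;
        if (b.keepsPromises()) score += 5;
        return score;
    }
}
```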
There are perhaps a lot of programmers on this site, which might explain why the habit of associating definitions with exhaustive specifications (which seems odd to those of us who (also) have a philosophy background) is so prevalent.
But it is not uniformly valid even in computing: Consider the difference between the definition of a “sort function” and the many ways of implementing sorting.
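A rough sketch of that contrast, purely for illustration (none of this code comes from the discussion itself): the definition of sorting only constrains how the output relates to the input, while insertion sort is just one of many bodies that satisfy it.

```java
import java.util.Arrays;

class SortContrast {
    // The *definition* of sorting: the output is in non-decreasing order
    // and is a rearrangement of the input. Nothing here says *how* to sort.
    static boolean meetsSortDefinition(int[] input, int[] output) {
        boolean ordered = true;
        for (int i = 1; i < output.length; i++) {
            if (output[i - 1] > output[i]) ordered = false;
        }
        int[] a = input.clone();
        int[] b = output.clone();
        Arrays.sort(a);                 // canonical form, to compare as multisets
        Arrays.sort(b);
        return ordered && Arrays.equals(a, b);
    }

    // One *implementation* among many (insertion sort); quicksort, mergesort,
    // bubble sort, etc. all satisfy the same definition above.
    static int[] insertionSort(int[] xs) {
        int[] out = xs.clone();
        for (int i = 1; i < out.length; i++) {
            int key = out[i];
            int j = i - 1;
            while (j >= 0 && out[j] > key) {
                out[j + 1] = out[j];
                j--;
            }
            out[j + 1] = key;
        }
        return out;
    }
}
```

Here meetsSortDefinition(input, insertionSort(input)) returns true for any int[], and it would do so equally for any other correct sorting implementation.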
Consider the difference between the definition of a “sort function” and the many ways of implementing sorting.
That’s a good example—the same function F: X -> Y can be specified in different ways, but it’s still the same function if the same X always leads to the same Y.
But even so, didn’t what I offer in regard to morality come closer to a “definition” than an “implementation”? I didn’t talk about how the different parts of the brain interact to produce the result (I wouldn’t know); I didn’t talk about the implementation of the function, only about what it is that our moral sense attempts to calculate.
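In code terms, a toy illustration of the “same X leads to the same Y” point (the function and its two bodies below are invented examples, nothing more):

```java
import java.util.function.IntUnaryOperator;

class SameFunctionDifferentBodies {
    // Two different bodies for the same mathematical function
    // F : int -> int with F(x) = x * (x + 1).
    static final IntUnaryOperator viaProduct = x -> x * (x + 1);
    static final IntUnaryOperator viaSum     = x -> x * x + x;

    public static void main(String[] args) {
        boolean agree = true;
        for (int x = -1000; x <= 1000; x++) {
            if (viaProduct.applyAsInt(x) != viaSum.applyAsInt(x)) {
                agree = false;
            }
        }
        // Same X always yields the same Y, so the two bodies pick out
        // one and the same function despite their different contents.
        System.out.println("Agree on all sampled inputs: " + agree);
    }
}
```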
But even so, didn’t what I offer in regard to morality come closer to a “definition” than an “implementation”?
The original point was:
That is not a definition of morality, that is a theory of morality. (It’s one of the better theories of morality I’ve seen, but not a definition.) To see that it is not a definition, consider that it appears to be a non-trivial statement, in a way that a simple statement of definition shouldn’t be.
People offer differing theories of the same X, that is, X defined in the same way. That is the essence of a disagreement. If they are not talking about the same X, they are not disagreeing; they are talking past each other.
There might be reasons to think that, in individual cases, people who appear to be disagreeing are in fact talking past each other. But that is a point that needs to be argued for specific cases.
To claim that anything someone says about X is part of a definition of X has the implication that in all cases, automatically, without regard to the individual details, there are no real disagreements about any X but only different definitions. That is surely wrong, for all that it is popular with some on LW.
Would a definition of “morality” be something like “An attribute assigned to behaviors depending on how much they trigger a person’s sense of moral approval/support or disapproval/outrage”, much like I could define beauty to mean “An attribute assigned to things that trigger a person’s sense of aesthetics”?
That would be a theory. It falls heavily on the side of subjectivism/non-cognitivism, which many disagree with.
People offer differing theories of the same X, that is, X defined in the same way.
People aren’t perfectly self-aware. They don’t often know how to define precisely what it is that they mean. They “know it when they see it” instead.
That would be a theory
Accepting the split between “definition” and “theory”, I suppose the definition of “sound” would be something like “that which triggers our sense of hearing”, and a theory of sound would be “sound is the perception of air vibrations”?
In which case I don’t know how it could be that a definition of morality could be different than “that which triggers our moral sense”—in analogy to the definition of sound. In which case I accept that my described opinion (that what triggers our moral sense is a calculation of “what our preferences would be about people’s behaviour if we had no personal stakes on the matter”) is merely a theory of morality.
People aren’t perfectly self-aware. They don’t often know how to define precisely what it is that they mean. They “know it when they see it” instead.
I don’t see how that relates to my point.
In which case I don’t know how it could be that a definition of morality could be different than “that which triggers our moral sense”
You can easily look up definitions that don’t work that way, eg: “Morality (from the Latin moralitas “manner, character, proper behavior”) is the differentiation of intentions, decisions, and actions between those that are “good” (or right) and those that are “bad” (or wrong).”
You said that “people offer differing theories of the same X, that is, X defined in the same way”. I’m saying that people disagree on how to define concepts they instinctively feel—such as the concept of morality. So the X isn’t “defined in the same way”.
You can easily look up definitions that don’t work that way, eg: “Morality (from the Latin moralitas “manner, character, proper behavior”) is the differentiation of intentions, decisions, and actions between those that are “good” (or right) and those that are “bad” (or wrong).”
Yeah well, when I’m talking about a definition I mean something that helps us logically pinpoint or at least circumscribe a thing. Circular definitions that jump from “morality” to “good” or to “what one should do” don’t really work for me, since those terms can quite easily be defined the other way around.
To properly define something, one ought to use terms more fundamental than the thing defined.
What, not ever? By anybody? Even people who have agreed on an explicit definition?
From Wikipedia:
When Plato gave Socrates’ definition of man as “featherless bipeds” and was much praised for the definition, Diogenes plucked a chicken and brought it into Plato’s Academy, saying, ‘Behold! I’ve brought you a man.’ After this incident, ‘with broad flat nails’ was added to Plato’s definition.
Now Plato and his students had an explicit definition they agreed upon, but nonetheless it’s clearly NOT what their minds understood ‘man’ to be, not really what they were discussing when they were discussing ‘man’. Their definition wasn’t really logically pinpointing the concept they had in mind.
It isn’t clearly un-circular to define morality as that which triggers the moral sense. Your definition has the further problem of begging the question in favour of subjectivism and non-cognitivism.
It attempts to go down a level from the abstract to the biological. It will of course be circular if someone then proceeds to define “moral sense” as that sense which is triggered by morality, instead of pointing at examples thereof.
So what is the upshot of this single datum? That no definition ever captures a concept? That there is some special problem with the concept of morality?
Is the biological the right place to go? Is it not question begging to build that theory into a definition?
Hardly; e.g. the definition of a circle perfectly captures the concept of a circle.
My point was that merely agreeing on the definition of a concept doesn’t mean our “definition” is correct, i.e. that it properly encapsulates what we wanted it to encapsulate.
That there is some special problem with the concept of morality?
No more of a problem than e.g. the concept of beauty. Our brains make calculations and produce a result. To figure out what we mean by “morality”, we need to determine what it is that our brains are calculating when they go ‘ping’ at moral or immoral stuff. This is pretty much tautological.
Is the biological the right place to go?
Since our brains are made of biology, there’s no concept we’re aware of that can’t be reduced to the calculations encoded in our brains’ biology.
Is it not question begging to build that theory into a definition?
It was once a mere theory to believe that the human brain is the center of human thought (and therefore of all concepts dealt with by human thought), but I think it’s been proven beyond all reasonable doubt.
Your example shows it is possible to agree on a bad definition. But there is no arbiter or touchstone of correctness that is not based on further discussion and agreement.
That morality, for you, is whatever your brain thinks it is (i.e. subjectivism) is highly contentious and therefore not tautologous.
However, you seem to have confused subjectivism with reductionism. That my concept of a perfect circle is encoded into my brain does not make it subjective.
But if you are only offering reductions without subjectivity, you are offering nothing of interest. “The concept of morality is a concept encoded in your brain”, for example, tells me nothing.
The question remains open as to whether your it’s-all-in-the-brain claim is subjectivism or not.
That morality, for you, is whatever your brain thinks it is (i.e. subjectivism) is highly contentious and therefore not tautologous.
What I called tautological was the statement “To figure out what we mean by “morality”, we need to determine what it is that our brains are calculating when they go ‘ping’ at moral or immoral stuff.”
I think your rephrasing “morality is whatever your brain thinks it is” would only work as a proper rephrase if I believed us perfectly self-aware, which, as I’ve said, I don’t.
That my concept of a perfect circle is encoded into my brain does not make it subjective.
It’s you who keep calling me a subjectivist. I don’t consider myself one.
The question remains open as to whether your it’s-all-in-the-brain claim is subjectivism or not.
Who is asking that question, and why should I care about asking it? I care to learn about morality, and whether my beliefs about it are true or false—I don’t care to know about whether you would call it “subjectivism” or not.
What I called tautological was the statement “To figure out what we mean by “morality”, we need to determine what it is that our brains are calculating when they go ‘ping’ at moral or immoral stuff.”
Is there any possibility of our brains being wrong?
Who is asking that question, and why should I care about asking it? I care to learn about morality, and whether my beliefs about it are true or false—I don’t care to know about whether you would call it “subjectivism” or not.
And it’s progress to reject definitions, which we have, in favour of brain scans, which we don’t?
Is there any possibility of our brains being wrong?
As I’ve said before, I believe that morality is an attempted calculation of our hypothetical preferences about behaviors if we imagined ourselves unbiased and uninvolved. Given this, I believe that we can be wrong about moral matters when we fail to make this estimation accurately.
But that doesn’t seem to be what you’re asking. You seem to me to be asking “If our brain’s entire moral mechanism INDEED attempts to calculate our hypothetical preferences about behaviors if we imagined ourselves unbiased and uninvolved, would the mere attempt be somehow epistemically wrong?”
The answer is obviously no: epistemic errors lie in beliefs, the outcomes of calculations, not in attempted actions, not in the attempt at a calculation. The question itself is a category error.
If the attempt can go wrong, then we can’t find out what morality is by looking at what our brains do when they make a possibly failed attempt. We would have to look at what they are aiming at, what they should be doing. Try as you might, you cannot ignore the normativity of morality (or rationality, for that matter).
You didn’t answer my second question.