Except that “sufficiently sane/intelligent” here just means, it seems, “implements modus ponens, has inductive priors, etc.” We can, like Nick Tarleton, simply define as “not a mind” any entity or process that doesn’t implement these criteria for sufficient sanity/intelligence...
… but then we are basically saying: any mind that is not convinced by what we think should be universally compelling arguments, is not a mind.
That seems like a dodge, at best.
Are there different criteria for sufficient sanity and intelligence, ones not motivated by the matter of (allegedly) universally compelling arguments?
“Sufficiently sane/intelligent” means something like, “Has a sufficient tendency to form true inferences from a sufficiently wide variety of bodies of evidence.”
Now, we believe that modus ponens yields true inferences. We also believe that a tendency to make inferences contrary to modus ponens will cause a tendency to make false inferences. From this you can infer that we believe that a sufficiently sane/intelligent agent will implement modus ponens.
But the truth of this inference about our beliefs does not mean that “sufficiently sane/intelligent” is defined to mean “implements modus ponens”.
In particular, our definition of “sufficiently sane/intelligent” implies that, if A is a sufficiently sane/intelligent agent who lives in an impossible possible world that does not implement modus ponens, then A does not implement modus ponens.
Since clippy fails to form true inferences about morality, doesn’t it also count as “insufficiently sane/intelligent”?
Clippy knows what is moral and what isn’t. He just doesn’t care.
Imagine if humans had never broken into different groups and we all spoke the same language. No French, no English, just “the Language”. People study the Language, debate it, etc.
Then one day intelligent aliens arrive. Philosophers immediately begin debating: do these aliens have the Language? On the one hand, they’re making noises with what appears to be something comparable to a mouth, the noises have an order and structure to them, and they communicate information. But what they do sounds nothing like “the Language”. They refer to objects with different sounds than the Language requires, and sometimes make sounds that describe what an object is like after the sound that refers to the object.
“Morality” has a similar type-token ambiguity. It can refer to our values or to values in general. Saying Clippy knows what is moral but that he doesn’t care is true under the token interpretation, but not the type one. The word “morality” has meanings and connotations that imply that Clippy has a morality but that it is just different—in the same way that the aliens have language but that it is just different.
So, I guess the point of EY’s metaethics can be summarized as ‘by “morality” I mean the token, not the type’.
(Which is not a problem IMO, as there are unambiguous words for the type, e.g. “values”—except insofar as people are likely to misunderstand him.)
Especially because the whole point is to optimize for something. You can’t optimize for a type that could have any value.
Isn’t it an optimization to code in the type, and let the AI work out the details necessary to implement the token? We don’t think theorem provers need to be overloaded with all known maths.
Is this some kind of an NLP exercise?
FWIW, I’ve mostly concluded something along those lines.
You wrote
“But when you ask a question and someone provides an answer you don’t like, showing why that answer is wrong can sometimes be more effective than simply asserting that you don’t buy it”
…and I did…
Indeed. And?
If you don’t want someone to put up an argument, don’t ask them for it.
I agree completely.
Had I known in advance the quality of argument you would put up, I would not have wanted you to put it up, and would not have asked for one, in full compliance with this maxim.
Lacking prescience, I didn’t know in advance, so I did want an argument, and I did ask for one, which fails to violate this maxim.
You wanted an argument? Sorry, this is “Insults”. Go down the hall and to the left. (Monty Python, to my best recollection)
You want 12A, just along the corridor
I’m afraid I have developed a sudden cognitive deficit that prevents me from understanding anything you are saying. I have also forgotten all the claims I have made, and what this discussion is about.
In short, I’m tapping out.
?
There are immoral and amoral values, so no.
How is this different from:
The creationist knows what I believe but doesn’t care.
The argument of the dragon in my garage suggests that the supernaturalist already knows the facts of the natural world, but doesn’t care.
But the sense in which “Clippy knows what is moral” is that Clippy can correctly predict humans, and “morality” has to do with what humans value and approve of — not what paperclippers value and approve of.
A creationist is mistaken about the origin of the Earth (they believe the Earth was created by a deity).
And this is different from a paperclip maximizer mistakenly believing that morality consists of optimizing paperclips, how?
Do you mean a paperclip maximizer mistakenly believing that the English word “moral” means ‘optimizing paperclips’ rather than ‘optimizing life, consciousness, etc.’, or a paperclip maximizer who knows that the English word “moral” means ‘optimizing life, consciousness, etc.’ but mistakenly believes that optimizing paperclips would optimize life, consciousness, etc.?
And neither is like a paperclip maximizer who knows that the English word “moral” means ‘optimizing paperclips’ rather than ‘optimizing life, consciousness, etc.’, and knows that optimizing paperclips doesn’t optimize life, consciousness, etc., but doesn’t give a damn about optimizing life, consciousness, etc.
The structure of the above comment would benefit from using a macro:
Let M = ‘life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one’s own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc.’
Do you mean a paperclip maximizer mistakenly believing that the English word “moral” means ‘optimizing paperclips’ rather than ‘optimizing M’, or a paperclip maximizer who knows that the English word “moral” means ‘optimizing M’ but mistakenly believes that optimizing paperclips would optimize M?
And neither is like a paperclip maximizer who knows that the English word “moral” means ‘optimizing paperclips’ rather than ‘optimizing M’, and knows that optimizing paperclips doesn’t optimize M.
Or a paperclip maximiser who correctly believes that “moral” doesn’t refer to an arbitrary set of preferences?
http://lesswrong.com/lw/t3/the_bedrock_of_morality_arbitrary/
You do realize the argument in that post applies equally well to physics?
Wow, it appears you don’t know what the English word “moral” means either. It roughly means “that which one should do”. To use the analogy of the creationist, would you be happy with defining truth as “the earth is 5 billion years old, etc.”?
Clippy knows what it should do. It just doesn’t care. Clippy cares about what it clippyshould do, which is something else.
And what does “should” mean?
Congratulations, we’ve run into the fact that repeatedly asking to define terms results in an infinite regress.
When you said that “moral” is “that which one should do” you simply failed to delve into a more fundamental level that would describe what the terms ‘moral’ and ‘should’ both refer to.
My own view, for example, is that our moral sense and our perception of what one ‘should’ do are an attempted calculation of what our preferences would be about people’s behaviour if we had no personal stakes on the matter—imagining ourselves unbiased and uninvolved.
Assuming the above definition is true (in the sense that it accurately describes what’s going on in our brains when we feel things like moral approval or moral disapproval about a behaviour), it’s not circular at all.
One can only fail to do what one is trying to do. If what one is trying to do is refute a putative definition of “morality”, one doesn’t need a full reduction. AFAICT, that was in fact the context—someone was saying that Clippy could validly define “morality” as making paperclips.
I like that idea too—although it isn’t new. Also, it is a theory, not a definition.
Does physics have to describe what we think the behaviour of objects is, or can we improve on that?
That is not a definition of morality, that is a theory of morality. (It’s one of the better theories of morality I’ve seen, but not a definition). To see that that is not a definition consider that it appears to be a non-trivial statement in the way that a simple statement of definition shouldn’t be.
I’m not quite sure what is the distinction you’re making. I’m a programmer—if I define a function public int calculateMoralityOf(Behaviour b), what exactly is the definition of that function if not its contents?
Would a definition of “morality” be something like “An attribute assigned to behaviors depending on how much they trigger a person’s sense of moral approval/support or disapproval/outrage”, much like I could define beauty to mean “An attribute assigned to things that trigger a person’s sense of aesthetics”?
There are perhaps a lot of programmers on this site, which might explain why the habit of associating definitions with exhaustive specifications (which seems odd to those of us who (also) have a philosophy background) is so prevalent.
But it is not uniformly valid even in computing: Consider the difference between the definition of a “sort function” and the many ways of implementing sorting.
That’s a good example you bring—the same function F:X->Y can be specified in different ways, but it’s still the same function if the same X leads to the same Y.
But even so, didn’t what I offer in regard to morality come closer to a “definition” than an “implementation”? I didn’t talk about how the different parts of the brain interact to produce the result (I wouldn’t know); I didn’t talk about the implementation of the function, only about what it is that our moral sense attempts to calculate.
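To make the sort analogy concrete, here is a minimal Java sketch (the names Sorter, InsertionSorter, LibrarySorter and SortDefinitionDemo are invented purely for illustration): the interface plays the role of the “definition”, the two classes are different “contents” that satisfy it, and both compute the same F:X->Y mapping.

    import java.util.Arrays;

    // The "definition" of sorting: a contract saying what any sorter must do
    // (return the same elements in non-decreasing order), without saying how.
    interface Sorter {
        int[] sort(int[] input);
    }

    // One "implementation" of that definition: insertion sort.
    class InsertionSorter implements Sorter {
        public int[] sort(int[] input) {
            int[] a = input.clone();
            for (int i = 1; i < a.length; i++) {
                int key = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > key) {
                    a[j + 1] = a[j];
                    j--;
                }
                a[j + 1] = key;
            }
            return a;
        }
    }

    // A different "implementation" of the same definition: delegate to the library.
    class LibrarySorter implements Sorter {
        public int[] sort(int[] input) {
            int[] a = input.clone();
            Arrays.sort(a);
            return a;
        }
    }

    public class SortDefinitionDemo {
        public static void main(String[] args) {
            int[] data = {3, 1, 2};
            // The same X leads to the same Y for both implementations.
            System.out.println(Arrays.toString(new InsertionSorter().sort(data)));
            System.out.println(Arrays.toString(new LibrarySorter().sort(data)));
        }
    }

On this reading, saying what calculateMoralityOf is supposed to compute is the definition, while any particular body of the function is one of many possible implementations.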
The original point was:
People offer differing theories of the same X, that is, X defined in the same way. That is the essence of a disagreement. If they are not talking about the same X, they are not disagreeing; they are talking past each other.
There might be reasons to think that, in individual cases, people who appear to be disagreeing are in fact talking past each other, but that is a point that needs to be argued for specific cases.
To claim that anything someone says about X is part of a definition of X has the implication that in all cases, automatically, without regard to the individual details, there are no real disagreements about any X but only different definitions. That is surely wrong, for all that it is popular with some on LW.
That would be a theory. It falls heavily on the side of subjectivism/non-cognitivism, which many disagree with.
People aren’t perfectly self-aware. They don’t often know how to define precisely what it is that they mean. They “know it when they see it” instead.
Accepting the split between “definition” and “theory” I suppose the definition of “sound” would be something like “that which triggers our sense of hearing”, and a theory of sound would be “sound is the perception of air vibrations”?
In which case I don’t know how it could be that a definition of morality could be different than “that which triggers our moral sense”—in analogy to the definition of sound. In which case I accept that my described opinion (that what triggers our moral sense is a calculation of “what our preferences would be about people’s behaviour if we had no personal stakes on the matter”) is merely a theory of morality.
I don’t see how that relates to my point.
You can easily look up definitions that don’t work that way, eg: “Morality (from the Latin moralitas “manner, character, proper behavior”) is the differentiation of intentions, decisions, and actions between those that are “good” (or right) and those that are “bad” (or wrong).”
You said that “people offer differing theories of the same X, that is X defined in the same way”. I’m saying that people disagree on how to define concepts they instinctively feel—such as the concept of morality. So the X isn’t “defined in the same way”.
Yeah well, when I’m talking about definition I mean something that helps us logically pinpoint or at least circumscribe a thing. Circular definitions like jumping from “morality” to “good” or to “what one should do” don’t really work for me, since they can quite easily be defined the opposite way.
To properly define something one ought to use terms more fundamental than the thing defined.
What, not ever? By anybody? Even people who have agreed on an explicit definition?
It isn’t clearly un-circular to define morality as that which triggers the moral sense.
Your definition has the further problem of begging the question in favour of subjectivism and non-cognitivism.
From Wikipedia:
When Plato gave Socrates’ definition of man as “featherless bipeds” and was much praised for the definition, Diogenes plucked a chicken and brought it into Plato’s Academy, saying, ‘Behold! I’ve brought you a man.’ After this incident, ‘with broad flat nails’ was added to Plato’s definition.
Now Plato and his students had an explicit definition they agreed upon, but nonetheless it’s clearly NOT what their minds understood ‘man’ to be, not really what they were discussing when they were discussing ‘man’. Their definition wasn’t really logically pinpointing the concept they had in mind.
It attempts to go down a level from the abstract to the biological. It will of course be circular if someone then proceeds to define “moral sense” as that sense which is triggered by morality, instead of pointing at examples thereof.
So what is the upshot of this single datum? That no definition ever captures a concept? That there is some special problem with the concept of morality?
Is the biological the right place to go? Is it not question begging to build that theory into a definition?
Hardly. The definition of a circle, for example, perfectly captures the concept of a circle.
My point was that to merely agree on the definition of a concept doesn’t mean our “definition” is correct, that it is properly encapsulating what we wanted it to encapsulate.
No more of a problem than, e.g., the concept of beauty. Our brains make calculations and produce a result. To figure out what we mean by “morality”, we need to determine what it is that our brains are calculating when they go ‘ping’ at moral or immoral stuff. This is pretty much tautological.
Since our brains are made of biology, there’s no concept we’re aware of that can’t be reduced to the calculations encoded in our brain’s biology.
It was once a mere theory to believe that the human brain is the center of human thought (and therefore of all concepts dealt with by human thought), but I think it’s been proven beyond all reasonable doubt.
Your example shows it is possible to agree on a bad definition. But there is no arbiter or touchstone of correctness that is not based on further discussion and agreement.
That morality, to you, is whatever your brain thinks it is, i.e. subjectivity, is highly contentious and therefore not tautologous.
However, you seem to have confused subjectivism with reductionism. That my concept of a perfect circle is encoded into my brain does not make it subjective.
But if you are only offering reductions without subjectivity, you are offering nothing of interest. The “concept of perfect morality is a concept encoded in your brain” example tells me nothing.
The question remains open as to whether your it’s-all-in-the-brain is subjectivism or not.
What I called tautological was the statement “To figure out what we mean by “morality”, we need determine what it is that our brains are calculating when they go ‘ping’ at moral or immoral stuff.”
I think your rephrasing “morality is whatever your brain thinks it is” would only work as a proper rephrase if I believed us perfectly self-aware, which, as I’ve said, I don’t.
It’s you who keep calling me a subjectivist. I don’t consider myself one.
Who is asking that question, and why should I care about asking it? I care to learn about morality, and whether my beliefs about it are true or false—I don’t care to know about whether you would call it “subjectivism” or not.
Is there any possibility of our brains being wrong?
And it’s progress to reject definitions, which we have, in favour of brain scans, which we don’t?
As I’ve said before I believe that morality is an attempted calculation of our hypothetical preferences about behaviors if we imagined ourselves unbiased and uninvolved. Given this, I believe that we can be wrong about moral matters, when we fail to make this estimation accurately.
But that doesn’t seem to be what you’re asking. You seem to me to be asking: “If our brains’ entire moral mechanism INDEED attempts to calculate our hypothetical preferences about behaviors if we imagined ourselves unbiased and uninvolved, would the mere attempt be somehow epistemically wrong?”
The answer is obviously no: epistemic errors lie in beliefs, the outcomes of calculations, not in attempted actions, not in the attempt at a calculation. The question itself is a category error.
If the attempt can go wrong, then we can’t find out what morality is by looking at what our brains do when they make a possibly failed attempt. We would have to look at what they are aiming at, what they should be doing. Try as you might, you cannot ignore the normativity of morality (or rationality, for that matter).
You didn’t answer my second question.
Unless you eventually switch to an extensional definition.
A paperclip maximizer optimizes as effectively as anyone else—it squeezes the future into an equally small (though different) region. A creationist would be less effective, as they would make false predictions about objective facts like antibiotic resistance.
It isn’t. It is, however, entirely different from a paperclip maximiser—not at all mistakenly—not caring about morality. There’s no general reason to assume that a difference in goals implies a factual disagreement.
This is precisely what army1987 was trying to argue for when he brought up this example. Thus, attempting to use it in the analysis constitutes circular reasoning.
What? No, army1987 was trying to argue for “clippy knows what is moral but doesn’t care”. The fact that a difference in goals does not imply a factual disagreement simply shows army1987’s position to be consistent.
Also, um, why is it my responsibility to prove that you have no reason to assume something? You’re the one proposing that “X has different goals” implies “X is mistaken about morality”. How did you come to be so sure of this that you could automatically substitute “mistakenly believing that morality consists of optimizing paperclips” for “cares about paperclips”? Especially considering the counterevidence from the fact that there exist computable decision theories that can take an arbitrary utility function?
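That last point can be sketched in code (a rough illustration only; Maximizer, UtilityDemo and the toy option strings are invented names): the decision procedure below is fixed and computable, and the utility function is an arbitrary plug-in, so giving it a paperclip-shaped goal changes what it chooses without introducing any factual mistake.

    import java.util.List;
    import java.util.function.ToDoubleFunction;

    // A generic maximizer: the decision rule is fixed, while the utility
    // function is an arbitrary, swappable parameter.
    class Maximizer<A> {
        private final ToDoubleFunction<A> utility;

        Maximizer(ToDoubleFunction<A> utility) {
            this.utility = utility;
        }

        A choose(List<A> options) {
            A best = options.get(0);
            for (A a : options) {
                if (utility.applyAsDouble(a) > utility.applyAsDouble(best)) {
                    best = a;
                }
            }
            return best;
        }
    }

    public class UtilityDemo {
        public static void main(String[] args) {
            List<String> options = List.of("make paperclips", "help humans", "do nothing");

            // Same decision procedure, different goals: neither agent need be
            // mistaken about any matter of fact, including the other's goals.
            Maximizer<String> clippy =
                    new Maximizer<>(a -> a.equals("make paperclips") ? 1.0 : 0.0);
            Maximizer<String> human =
                    new Maximizer<>(a -> a.equals("help humans") ? 1.0 : 0.0);

            System.out.println(clippy.choose(options)); // make paperclips
            System.out.println(human.choose(options));  // help humans
        }
    }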
Aumann’s agreement theorem prevents that from happening to ideal epistemic rationalists; there’s no analogue for instrumental rationality.
But...
Aumann’s agreement theorem assumes common priors; what I described can only happen to instrumental rationalists with different utility functions. So the question is why we expect all rationalists to use One True Prior (e.g. Solomonoff induction) but each to use their own utility function.
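For reference, a standard statement of the theorem being invoked (added only for context; P is the shared prior, E the event, and the I_i are the agents’ information partitions):

$$P_1 = P_2 = P \ \text{and it is common knowledge that}\ P(E \mid \mathcal{I}_1) = q_1,\ P(E \mid \mathcal{I}_2) = q_2 \ \Longrightarrow\ q_1 = q_2.$$

Nothing in that statement constrains the agents’ utility functions, which is why there is no analogous result for instrumental rationality.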
What do you mean?
“Sufficiently sane/intelligent” means “effective enough in the real world to pose a threat to my values”. Paperclipper qualifies, flu virus qualifies, anti-inductive AI does not qualify.
So, how is the project to teach mathematics to the flu virus going?
Why, it hasn’t been wrong about a single thing so far, thank you!
That doesn’t follow. For one thing, we can find out how the Mind works by inspecting its code, not just by black-box testing it. If it seems to have all that it needs and isn’t convinced by arguments that convince us, it may well be we who are wrong.
We can?
So I have all these minds around me.
How do I inspect their code and thereby find out how they work? Detailed instructions would be appreciated. (Assume that I have no ethical restrictions.)
That (only slightly-joking) response aside, I think you have misunderstood me. I did not mean that we are (in the scenario I am lampooning) saying:
“Any mind that is not convinced by what we think should be universally compelling arguments, despite implementing modus ponens and having an Occamian prior, is not a mind.”
Rather, I meant that we are saying:
“Any mind that is not convinced by what we think should be universally compelling arguments, by virtue of said mind not implementing modus ponens, having an Occamian prior, or otherwise having such-and-such property which would be required in order to find this argument compelling, is not a mind.”
The problem I am pointing out in such reasoning is that we can apply it to any argument we care to designate as “this ought to be universally compelling”. “Ah!” we say, “this mind does not agree that ice cream is delicious? Well, that’s because it doesn’t implement such-and-such property, and without said property, why, we can hardly call it a mind at all.”
A rationality quote of sorts is relevant here:
“Well, let’s put it like this. A human has encountered an extraterrestrial lifeform. How do they each discover that they are both intelligent?”
“I have no idea,” said Valentine merrily. “All that I have read on this subject reduces to a vicious circle. If they are capable of contact, then they are intelligent. And the reverse: if they are intelligent, then they are capable of contact. And in general: if an extraterrestrial lifeform has the honor of possessing a human psychology, then it is intelligent. Like that.”
(Roadside Picnic, Arkady and Boris Strugatsky)
What we have here is something similar. If a mind is sufficiently sane/intelligent, then it will be convinced by our arguments. And the reverse: if it is convinced by our arguments, then it is sane/intelligent...
In yet other words: we can hardly say “we expect all sane/intelligent minds to be convinced by these arguments” if we have in the first place defined sanity and intelligence to require the ability to be convinced by those very arguments.
No, it’s not viciously circular to argue that an entity that fulfills all the criteria for being an X is an X.
That’s not what is happening here. Is what I wrote actually unclear? Please reread my comment, starting with the assumption that what you responded with is not what my intended meaning was. If still unclear, I will try to clarify.