So you’ve got these attractive sets and maybe 90% or 99% or 99.9% or 99.99% of humans or humans plus some broader category of conscious/intelligent entities agree. What to do about the exceptions? Pretend they don’t exist?
What does agreement have to do with anything? Anyway, such moral attractive sets either include an injunction about what to do with people who disagree with them or they don’t. And even if they do include such injunctions, it still doesn’t mean that my preferences would necessarily be to follow them.
People aren’t physically forced to follow their moral intuitions now, and they aren’t physically forced to follow a universal moral attractive set either.
The question of moral realism is not a factual one
That’s what a non-moral-realist would say, definitely.
do you CHOOSE to declare what 99.999% have an intuition towards as binding on the .001% that don’t
What does ‘declaring’ have to do with anything? For all I know this moral attractive set would contain an injunction against people declaring it true or binding. Or it might contain an injunction in favour of such declarations, of course.
I don’t think you understood the concepts I was trying to communicate. I suggest you tone down on the outrage.
Moral realism is NOT the idea that you can derive moral imperatives from a mixture of moral imperatives and other non-moral assumptions. Moral realism is NOT the idea that if you study humans you can describe “conventional morality,” make extensive lists of things that humans tend, sometimes overwhelmingly, to consider wrong.
Moral realism IS the idea that there are things that are actually wrong.
If you are a moral realist, and you provide a mechanism for listing some moral truths, then you pretty much by definition are wrong, immoral, if you do not align your action with those moral truths.
An empirical determination of what the moral rules of many societies are, or of most societies, or of the moral rules that all societies so far have had in common, is NOT an instantiation of a moral realist theory, UNLESS you assert that the rules you are learning about are real, that it is in fact immoral or evil to break them. If you meant something wildly different by “moral attractive sets” than what is incorporated in the idea of where people tend to come down on morality, then please elucidate; otherwise I think for the most part I am working pretty consistently with the attractive-set idea in saying these things.
If you think you can be a “moral realist” without agreeing that it is immoral to break or not follow a moral truth, then we are just talking past each other and we might as well stop.
Moral realism IS the idea that there are things that are actually wrong.
Okay, yes. I agree with that statement.
If you are a moral realist, and you provide a mechanism for listing some moral truths, then you pretty much by definition are wrong, immoral, if you do not align your action with those moral truths.
Well, I guess we can indeed define an “immoral” person as someone who does morally wrong things; though a more useful definition would probably be to define an immoral person as someone who does them more so than average. So?
If you think you can be a “moral realist” without agreeing that it is immoral to break or not follow a moral truth
It’s reasonable to define an action as “immoral” if it breaks or doesn’t follow a moral truth.
But how in the world are you connecting these definitions to all your earlier implications about pretending dissenters don’t exist, or killing them and then pretending they never existed in the first place?
Fine, lots of people do immoral things. Lots of people are immoral. How does this “is” statement, by itself, indicate anything about whether we ought to ignore said people, execute them, or hug and kiss them? It doesn’t say anything about how we should treat immoral people, or how we should respond to the immoral actions of others.
I’m the moral realist here, but it’s you who seem to be deriving specific “ought” statements from my “is” statements.
At one level, yes, I am implicitly assuming certain moral imperatives. Things like “evildoers should be stopped,” “evildoers should be punished.” The smartest moral realists I have argued with before all proffered a belief in moral realism precisely so I would not think (or they would not have to admit) that their punishing wrongdoers and legislating against “wrong” things was in any way arbitrary or questionable. I think that “evildoers should be stopped” would be among the true statements a moral realist would almost certainly accept, but I was thinking that without stating it. Now it is stated. So my previous statements can be explicitly prefaced: “if morality is real and at some level evildoers should be stopped...”
And indeed the history of the western world, and I think the world as a whole, is that wrongdoers have always been stopped. Usually brutally. So I would ask for some consideration of this implicit connection I had made before you dismiss it as unnecessary.
I think the only meaning of moral realism can be that those things which I conclude are morally real can be enforced on others, indeed must be if “protecting the world from evil” and other such ideas are among the morally real true statements, and my every intuition, I maintain, is that they are. I don’t think you can be a moral realist and then sit back and say “yes I’m immoral, lots of other people are immoral, so what? Where does it say I’m supposed to do anything about that?” Because the essence of something being immoral is that you ARE supposed to do something about it, I would maintain, and definitions in which morality is just a matter of taste or labelling won’t, I think, live under the label “moral realism.”
I think the only meaning of moral realism can be that those things which I conclude are morally real can be enforced on others,
A moral statement M might perhaps say: “I ought do X.”
Agreeing perfectly in the universal moral validity and reality and absolute truth of M still doesn’t take you one step closer to “I ought force others to do X”, nor even to “I am allowed to force others to do X”.
Real-life examples might be better: Surely you can understand that a person might both believe “I oughtn’t do drugs” and also “The government oughtn’t force me not to do drugs.”? And likewise “I ought give money to charity” is a different proposition than “I ought force others to give money to charity”?
That’s just from the libertarian perspective, but even the Christian perspective says things like “Bless those who curse you. Pray for those who hurt you.” It doesn’t say “Force others not to curse you, force others not to hurt you”. (Christendom largely abandoned that of course once it achieved political power, but that’s a different issue...)
The pure-pacifist response to violence is likewise pacifism. It isn’t “Force pacifism on others”.
There’s a long history of moral realism that knows how to distinguish between “I ought X” and “I ought force X on others”.
“Because the essence of something being immoral is you ARE supposed to do something about it, I would maintain”
The essence of something being immoral is that one oughtn’t do it. Just that.
EDIT TO ADD: Heh, just thinking a bit further about it. Let me mathematize what you said a bit.
You’re effectively thinking of an inference rule which is as follows.
R1: For any statement M(n):”You ought X” present in the morally-real set, the statement M(n+1):”You ought force others to X” is also in the morally real set.
Such an inference rule (which I do not personally accept) would have horrifying repercussions, because of its infinitely extending capacity. For example, by starting with a supposed morally real statement:
M(1): You ought visit your own mother in the hospital.
it’d then go a bit like this:
M(2): You ought force others to visit their mothers in the hospital.
M(3): You ought force others to in turn force others to visit their mothers in the hospital.
...and then...
M(10): You ought establish a vast bureaucracy of forcing others to establish other bureaucracies in charge of forcing people to visit their mothers in the hospital.
...or even
M(100): Genocide on those who don’t believe in vast bureaucracy-establishing bureaucracies!
Heh, I can see why, treating R1 as an axiom, you find horror in the concept of morally real statements: you resolve the problem by thinking the morally real set is empty, so that no further such statements can be added. I just don’t accept R1 as an axiom at all.
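To make the regress concrete, here is a minimal sketch of what closing a set under R1 would do, assuming a toy representation of moral statements as Python strings (the apply_r1 helper and the crude string rewriting, which ignores the change from “your mother” to “their mothers”, are purely my own illustration, not anything proposed in the thread):

```python
# Toy illustration: repeatedly applying R1 to a single "ought" statement.

def apply_r1(statement: str) -> str:
    """R1: from 'You ought X', derive 'You ought force others to X'."""
    prefix = "You ought "
    assert statement.startswith(prefix)
    x = statement[len(prefix):]
    # Crude rewrite: pronoun adjustments are ignored for simplicity.
    return prefix + "force others to " + x

m = "You ought visit your own mother in the hospital."
for n in range(1, 5):
    print(f"M({n}): {m}")
    m = apply_r1(m)

# The loop could run forever: each application nests one more layer of
# "force others to", so the closure of the set under R1 never terminates.
```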
I think you put your hand solidly on the dull end of the stick here. Let’s consider some other moral examples whose violation does come up in real life.
1) I ought not steal candy from Walmart, but it’s OK if you do.
2) I ought not steal the entire $500,000 retirement from someone by defrauding them, but it’s OK if you want to.
3) I ought not pick a child at random, capture them, hold them prisoner in my house, torture them for my sexual gratification, including a final burst where I dismember them, thus killing them painfully and somewhat slowly much to my delight, but it’s your choice if you want to.
4) Out of consideration, I won’t dump toxic waste over my neighbor’s stream, but that’s just me.
My point is, the class of “victimless crime” types of morality is a tiny subset surrounded by moral hypotheses that directly speak to harms and costs accruing to others. Even libertarians who are against police (relatively extreme) are not against private bodyguards. These libertarians try to claim that their bodyguards would not be ordered to do anything “wrong” because 1) morality is real and 2) libertarians can figure out what the rules are with sufficient reliability and accuracy to be trusted to have their might unilaterally make right.
So that’s my point about the philosophical basis of moral realism. Does that mean I would NOT enforce rules against dismembering children or stealing? Absolutely not. What it means is I wouldn’t kid myself that the system I supported was the truth and that people who disagreed with me were evil. I would instead examine the rules I was developing in light of what kind of society they would produce. MOST conventional morality survives that test; evolution fine-tuned our morality to work pretty economically for smart talkative primates who hunted and gathered in bands of less than a few hundred each.
But the rest of my point about morality not being “real”, not being objectively true independent of the state of the species, is that I wouldn’t have a fetish about the rightness of some moral conclusion I had reached. I would recognize that 1) we have more resources to spend on morality now than then, what with being hundreds of times richer than those hunter-gatherers, and 2) we have a significantly different environment to optimize upon, with the landscape of pervasive and inexpensive information and material items a rather new feature that moral intuitions didn’t get to evolve upon.
My point is that morality is an engineering optimization, largely carried out by evolution, but absolutely under significant control of the neocortexes of our species. The moral realists, I think, will not do as good a job of making up moral systems, because they fundamentally miss the point that the thing is plastic and that there is in most cases no one “right” answer.
That I rejected the previously implied inference rule
R1: For any X where “I ought X” it also follows “I ought force others to X”,
doesn’t mean at all that I have to add a different inference rule
R2: For any X where “I ought X” it also follows ”...but it’s okay if you don’t X.”
To be perfectly clear to you: I’m rejecting both R1 and R2 as axioms. I’ve never stated them as axioms of moral realism, nor have I implied them to be such, nor do I believe that any theory of moral realism requires either of them.
I’m getting a bit tired of refuting implications you keep reading in my comments but which I never made. I suggest you stop reading more into my comment than what I actually write.
Truth isn’t about making up axioms and throwing away the ones which are inconvenient to your argument. Rather I propose a program of looking at the world and trying to model it.
How successful do you think a sentient species would be that has evolved rules allowing it to thrive in significant cooperation, but which has not thought to enforce those rules? How common is such a hands-off approach in the successful human societies which surround us in time and space? It is not deductively true that if you believe in morality as real you will have some truths about enforcing morality on those around you, one way or another. Just as it is not deductively true that all electrons have the same charge, or that all healthy humans are conscious and sentient, or that shoddily made airplanes tend to crash. But what is the point of a map of the territory that leaves out the mountains surrounding the village for the sake of argument?
It seems to me that your moral realism is trivial. You don’t think of morality, it seems to me, as anything other than just another label. Like some things are French and others are not, some are pointillist and others are not, and some are moral and others are not. Morality, like so many other things, MEANS something. This meaning has implications for human behavior and human choices.
If you’re tired you’re tired, but if you care to, let me ask you this. What is the difference between morality being real and morality being a “real label,” just a hashtag we attach to statements that use certain words? The difference to me is that if it is just a hashtag, then there is nothing I ought to do to enforce those moral truths on myself or others, whereas if it is something real, then the statement “people ought not allow innocent children to be kidnapped and tortured” means exactly what it says: we are obliged to do something about it.
Whether you are done or not, thank you for this exchange. I had not been aware of my assumption that morality being real meant it ought to be enforced in some way; now I am aware of it. In my opinion, a moral realism that does not contain some true statements along those lines is an incomplete one at best, or an insincere or vapid one at worst. But at least I learned not to assume that others talking about morality have this same opinion until I check.
Truth isn’t about making up axioms and throwing away the ones which are inconvenient to your argument
What argument? You’ve never even remotely understood my argument. All this thread has been about trying to explain that I never said those things that you’re trying to place in my mouth.
If you want further discussion with me, I suggest you first go back and reread everything I said in the initial comment, and only what I said, one more time; then find a single statement which you think is wrong, and I’ll defend it. I’ll defend it by itself, not whatever implications you’ll add onto it.
I won’t bother responding to anything else you say unless you first do that. I’m not obliged to defend myself against your demonisations of me based on things I never said or implied. Find me one of my statements that you disagree with, not some statement that you need to put in my mouth.
A possible example for a morally “real” position might e.g. be “You oughtn’t decrease everyone’s utility in the universe.” or “You oughtn’t do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn’t do.”
If you wish to build a map for a real territory, but ignore that the map doesn’t actually follow many details of the territory, it seems fair enough for others who can see the map and the territory to say “this isn’t a very good map, it is missing X, Y, and Z.” As you rightly point out, it would not make sense to say “it isn’t a very good map because it is not internally consistent.” The more oversimplified a map is, the more likely it is to be internally consistent.
I like the metaphor of map and territory: morality refers to an observable feature of human life, and it is not difficult to look at how it has been practiced and make statements about it on that basis. A system of morality that accepts neither “morality is personal (my morality doesn’t apply to others)” nor “morality is universal, the point is it applies to everybody” may fit the wonderful metaphor of a very simple axiomatic mathematical system, but in my opinion it is not a map of the human territory of morality.
If you are self-satisfied with an axiomatic system where “moral” is a label that means nothing in real life, then we are talking about different things. If you believe you are proposing a useful map for the human territory called morality, then you must address concerns of “it doesn’t seem to really fit that well,” and not limit yourself to concerns only of “I said a particular thing that wasn’t true.”
But if you want to play the axiomatic geometry game, then I do disagree that “You oughtn’t do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn’t do.” is a good possible morally real statement. First off, its negation, which I take to be “It’s OK if you do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn’t do.” doesn’t seem particularly truer or less true than the statement itself. (And I would hope you can see why I was talking about 99% and 99.99% agreement given your original statement in your original post). Second, if your statement is morally real, objective, “made true by objective features of the world, independent of subjective opinion” then please show me how. (The quote is from http://en.wikipedia.org/wiki/Moral_realism )
tldr; you’re overestimating my patience to read your page of text, especially since previous such pages just kept accusing me of various things, and they were all wrong. (edit to add: And now that I went back and read it, this one was no exception, accusing me this time of being “self-satisfied with an axiomatic system where ‘moral’ is a label that means nothing in real life”. Sorry mate, I am no longer bothering to defend against your various, diverse and constantly changing demonisations of me. If I defend against one false accusation, you’ll just make up another, and you never update on the fact of how wrong all your previous attempts were.)
But since I scanned to the end to find your actual question:
Second, if your statement is morally real, objective, “made true by objective features of the world, independent of subjective opinion” then please show me how
First of all I said my statement “might” be a possible example of something morally real. I didn’t argue that it definitely was such.
Secondly, it would, e.g., be a possible candidate for being morally real because it includes all agents capable of the relevant subjective opinion inside it. At that point, it’s no longer about subjective opinion, it’s about universal opinion. Subjective opinion indicates something that changes from subject to subject. If it’s the same for all subjects, it’s no longer really subjective.
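A minimal sketch of that last point, with hypothetical Python stand-ins for “persons equipped with moral instinct” (the function names and the toy judges are my own illustration, nothing more): a judgment is only “subjective” in the relevant sense if it actually varies from subject to subject.

```python
from typing import Callable, Iterable

def varies_by_subject(statement: str,
                      judges: Iterable[Callable[[str], bool]]) -> bool:
    """A judgment is subjective in the relevant sense only if different
    judges return different verdicts on the same statement."""
    verdicts = {judge(statement) for judge in judges}
    return len(verdicts) > 1

# Hypothetical stand-ins for agents equipped with moral instinct.
judges = [
    lambda s: not s.startswith("It is OK to torture"),
    lambda s: not s.startswith("It is OK to torture"),
]

# Unanimous verdict: the judgment no longer varies with the subject.
print(varies_by_subject("It is OK to torture a child for fun.", judges))  # False
```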
And I would hope you can see why I was talking about 99% and 99.99%
No, I don’t see why. The very fact that my hypothetical statements specified “everyone” and you kept talking about what to do about the remainder was more like evidence to me that you weren’t really addressing my points and possibly hadn’t even read them.
Perhaps. And you are underestimating your need to get the last word. But enough about you.
First of all I said my statement “might” be a possible example
I don’t know how to have a discussion where the answer to the question “show me how it might be” is “First of all I said [it] might be.”
The very fact that my hypothetical statements specified “everyone” and you kept talking about what to do about the remainder was more like evidence to me that you weren’t really addressing my points and possibly hadn’t even read them.
Well, you already know there are nihilists in the world and others who don’t believe morality is real. So you already know that there are no such statements that “everybody” agrees to. And then you reduce that pool of no statements that every human agrees to even further, by bringing all other sentient life that might exist into the required agreement.
Even if you were to tell the intelligent people who have thought about it, “no, you really DO believe in some morality, you are mistaken about yourself,” can you propose a standard for developing a list or even a single statement that might be a GOOD candidate without attempting to estimate the confidence with which you achieve unanimity, and which does not yield answers like 90% or 99% as the limitations of its accuracy in showing you unanimity?
If you are able to state that you are talking about something which has no connection to the real world, I’ll let you have the last word. Because that is not a discussion I have a lot of energy for.
This also accounts for my constantly throwing things into the discussion that go outside a narrow axiomatic system. I’m not doing math here.
I don’t know how to have a discussion where the answer to the question “show me how it might be” is “First of all I said [it] might be.”
You didn’t say “show me how [it might be]”, you said “show me how [it is]”
So you already know that there are no such statements that “everybody” agrees to.
Most people who aren’t moral realists still have moral intuitions; you’re confusing the categorization of beliefs about the nature of morality with the actual moral instinct in people’s brains. The moral instinct doesn’t concern itself with whether morality is real; eyes don’t concern themselves with viewing themselves; few algorithms altogether are designed to analyze themselves.
As for moral nihilists, assuming they exist, an empty moral set can indeed never be transformed into anything else via “is” statements, which is why I specified from the very beginning “every person equipped with moral instinct”.
If you are able to state that you are talking about something which has no connection to the real world,
The “connection to the real world” is that the vast majority of seeming differences in human moralities seem to derive from different understandings of the world, and different expectations about the consequences. When people share agreement about the “is”, they also tend to converge on the “ought”, and they most definitely converge on lots of things one “oughtn’t” do. Seemingly different morality sets get transformed to look like each other.
That’s sort of like the CEV of humanity that Eliezer talks about, except that I talk about a much more limited set—not the complete volition (which includes things like “I want to have fun”), but just the moral intuition system.
That’s a “connection to the real world” that relates to the whole history of mankind, and to how beliefs and moral injunctions connect to one another; how beliefs are manipulated to produce injunctions, how injunctions lose their power when beliefs fall away.
Now with a proper debater who didn’t just seek to heap insults on people, I might discuss further nuances and details: whether it’s only consequentialists that would get attractive moral sets, whether different species would get mostly different attractive moral sets, whether such attractive moral sets may be said to exist because anything too alien would probably not even be recognizable as morality by us; possible exceptions for deliberately-designed malicious minds, etc...
But you’ve just been a bloody jerk throughout this thread, a horrible horrible person who insults and insults and insults some more. So I’m done with you: feel free to have the last word.