A possible example of a morally “real” position might be “You oughtn’t decrease everyone’s utility in the universe,” or “You oughtn’t do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn’t do.”
If you wish to build a map for a real territory, but ignore that the map doesn’t actually follow many details of the territory, it seems fair enough for others who can see the map and the territory to say “this isn’t a very good map; it is missing X, Y, and Z.” As you rightly point out, it would not make sense to say “it isn’t a very good map because it is not internally consistent.” The more oversimplified a map is, the more likely it is to be internally consistent.
I like the metaphor of map and territory: morality refers to an observable feature of human life, and it is not difficult to look at how it has been practiced and make statements about it on that basis. A system of morality that accepts neither “morality is personal (my morality doesn’t apply to others)” nor “morality is universal (the point is it applies to everybody)” may fit the wonderful metaphor of a very simple axiomatic mathematical system, but in my opinion it is not a map of the human territory of morality.
If you are self-satisfied with an axiomatic system where “moral” is a label that means nothing in real life, then we are talking about different things. If you believe you are proposing a useful map for the human territory called morality, then you must address concerns of “it doesn’t seem to really fit that well,” and not limit yourself to concerns only of “I said a particular thing that wasn’t true.”
But if you want to play the axiomatic geometry game, then I do disagree that “You oughtn’t do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn’t do.” is a good possible morally real statement. First off, its negation, which I take to be “It’s OK if you do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn’t do.” doesn’t seem particularly truer or less true than the statement itself. (And I would hope you can see why I was talking about 99% and 99.99% agreement given your original statement in your original post). Second, if your statement is morally real, objective, “made true by objective features of the world, independent of subjective opinion” then please show me how. (The quote is from http://en.wikipedia.org/wiki/Moral_realism )
tl;dr: you’re overestimating my patience to read your page of text, especially since previous such pages just kept accusing me of various things, and they were all wrong. (Edit to add: And now that I went back and read it, this one was no exception, accusing me this time of being “self-satisfied with an axiomatic system where ‘moral’ is a label that means nothing in real life.” Sorry mate, I’m no longer bothering to defend against your various, diverse, and constantly changing demonisations of me. If I defend against one false accusation, you’ll just make up another, and you never update on the fact of how wrong all your previous attempts were.)
But since I scanned to the end to find your actual question:
Second, if your statement is morally real, objective, “made true by objective features of the world, independent of subjective opinion” then please show me how
First of all I said my statement “might” be a possible example of something morally real. I didn’t argue that it definitely was such.
Secondly, it would be a possible candidate for being morally real because it includes all agents capable of relevant subjective opinion. At that point, it’s no longer about subjective opinion; it’s about universal opinion. Subjective opinion indicates something that changes from subject to subject. If it’s the same for all subjects, it’s no longer really subjective.
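The subjective-versus-universal distinction being drawn here can be put in quantifier form. This is my own sketch, not something from the original exchange; the symbols are illustrative:

```latex
% Subjective: the judgment J_s(P) depends on which subject s you pick.
% The candidate statement instead quantifies over every subject:
\forall s \in S \;:\; J_s(\neg P)
% where S is the set of all agents equipped with moral instinct, and
% J_s(\neg P) means "subject s judges that one oughtn't do P".
% The truth value of the quantified sentence does not vary with the
% choice of s, so it is no longer indexed to any particular subject.
```

On this reading, the statement’s truth is a fact about the whole set of judges, which is the sense in which it stops being “subjective opinion”.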
And I would hope you can see why I was talking about 99% and 99.99%
No, I don’t see why. The very fact that my hypothetical statements specified “everyone” and you kept talking about what to do about the remainder was more like evidence to me that you weren’t really addressing my points and possibly hadn’t even read them.
Perhaps. And you are underestimating your need to get the last word. But enough about you.
First of all I said my statement “might” be a possible example
I don’t know how to have a discussion where the answer to the question “show me how it might be” is “First of all I said [it] might be.”
The very fact that my hypothetical statements specified “everyone” and you kept talking about what to do about the remainder was more like evidence to me that you weren’t really addressing my points and possibly hadn’t even read them.
Well, you already know there are nihilists in the world, and others who don’t believe morality is real. So you already know that there are no such statements that “everybody” agrees to. And then you shrink that already-empty pool even further by bringing all other sentient life that might exist into the required agreement.
Even if you were to tell the intelligent people who have thought about it, “no, you really DO believe in some morality; you are mistaken about yourself,” can you propose a standard for developing a list, or even a single statement, that might be a GOOD candidate without attempting to estimate the confidence with which you achieve unanimity, and which does not yield answers like 90% or 99% as the limits of its accuracy in showing you unanimity?
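The point about confidence in unanimity can be made concrete with a standard statistical bound (the “rule of three”): even if every judge you sample agrees, a finite sample only bounds the population dissent rate, it never certifies unanimity. This sketch and its numbers are my own illustration, not anything claimed in the thread:

```python
def dissent_upper_bound(n_agree: int, alpha: float = 0.05) -> float:
    """Clopper-Pearson upper confidence bound on the population dissent
    rate when all n_agree sampled judges agreed (zero observed dissent).
    For small alpha this is approximately the 'rule of three': ~3/n."""
    return 1.0 - alpha ** (1.0 / n_agree)

# Unanimous agreement in any finite sample only *bounds* the dissent
# rate; it cannot show the rate is exactly zero.
for n in (10, 100, 10_000):
    print(f"{n} unanimous judges -> dissent rate could still be "
          f"up to {dissent_upper_bound(n):.2%}")
```

So a survey-based standard for candidate statements would always report something like “99%+ agreement with bounded uncertainty,” never verified unanimity.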
If you are able to state that you are talking about something which has no connection to the real world, I’ll let you have the last word. Because that is not a discussion I have a lot of energy for.
This also accounts for my constantly throwing things into the discussion that go outside a narrow axiomatic system. I’m not doing math here.
I don’t know how to have a discussion where the answer to the question “show me how it might be” is “First of all I said [it] might be.”
You didn’t say “show me how [it might be]”, you said “show me how [it is]”
So you already know that there are no such statements that “everybody” agrees to.
Most people who aren’t moral realists still have moral intuitions; you’re confusing beliefs about the nature of morality with the actual moral instinct in people’s brains. The moral instinct doesn’t concern itself with whether morality is real; eyes don’t concern themselves with viewing themselves; few algorithms at all are designed to analyze themselves.
As for moral nihilists, assuming they exist, an empty moral set can indeed never be transformed into anything else via “is” statements, which is why I specified from the very beginning “every person equipped with moral instinct”.
If you are able to state that you are talking about something which has no connection to the real world,
The “connection to the real world” is that the vast majority of seeming differences in human moralities seem to derive from different understandings of the world, and different expectations about the consequences. When people share agreement about the “is”, they also tend to converge on the “ought”, and they most definitely converge on lots of things that “oughtn’t”. Seemingly different morality sets get transformed to look like each other.
That’s sort of like the CEV of humanity that Eliezer talks about, except that I talk about a much more limited set—not the complete volition (which includes things like “I want to have fun”), but just the moral intuition system.
That’s a “connection to the real world” that relates to the whole history of mankind, and to how beliefs and moral injunctions connect to one another: how beliefs are manipulated to produce injunctions, and how injunctions lose their power when beliefs fall away.
Now, with a proper debater who didn’t just seek to heap insults on people, I might discuss further nuances and details: whether it’s only consequentialists that would get attractive moral sets; whether different species would get mostly different attractive moral sets; whether such attractive moral sets may be said to exist because anything too alien would probably not even be recognizable as morality by us; possible exceptions for deliberately-designed malicious minds; etc.
But you’ve just been a bloody jerk throughout this thread, a horrible horrible person who insults and insults and insults some more. So I’m done with you: feel free to have the last word.